Although hundreds of thousands of people died fighting in the Civil War, perhaps the war's biggest casualty was the nation's legal order. A Nation of Rights explores the implications of this major change by bringing legal history into dialogue with the scholarship of other historical fields. Federal policy on slavery and race, particularly the three Reconstruction amendments, are the best-known legal innovations of the era. Change, however, permeated all levels of the legal system, altering Americans' relationship to the law and allowing them to move popular conceptions of justice into the ambit of government policy. The results linked Americans to the nation through individual rights, which were extended to more people and, as a result of new claims, were reimagined to cover a wider array of issues. But rights had limits in what they could accomplish, particularly when it came to the collective goals that so many ordinary Americans advocated. Ultimately, Laura F. Edwards argues that this new nation of rights offered up promises that would prove difficult to sustain.
Thursday, March 5, 2015
Edwards's "Legal History of the Civil War and Reconstruction"
Laura F. Edwards, the Peabody Family Professor of History at Duke University, has published A Legal History of the Civil War and Reconstruction: A Nation of Rights in the New Histories of American Law series of Cambridge University Press.
Researchers say that a form of oxytocin — the hormone correlated with human love — has a similar effect on fish, suggesting it is a key regulator of social behavior that has evolved and endured since ancient times.
The findings may help answer an evolutionary psychology question: why do some species develop complex social behaviors while others spend much of their lives alone? To find some clues, they examined the cichlid fish Neolamprologus pulcher, a highly social species found in Lake Tanganyika in Africa. These cichlids are unusual because they form permanent hierarchical social groups made up of a dominant breeding pair and many helpers that look after the young and defend their territory.
For the experiments, researchers injected the cichlids with either isotocin, a "fish version" of oxytocin, or a control saline solution. When placed in a simulated territorial competition with a single perceived rival, the isotocin-treated fish were more aggressive towards large opponents, regardless of their own size.
When placed in a larger group situation, isotocin-treated fish became more submissive when faced with aggression from more dominant group members. Such signals are important in this species because they placate the dominant members of the group, say researchers.
"The hormone increases responsiveness to social information and may act as an important social glue," says Adam Reddon, lead researcher and a graduate student in the Department of Psychology, Neuroscience&Behaviour at McMaster University. . "It ensures the fish handle conflict well and remain a cohesive group because they will have shorter, less costly fights."
"We already knew that this class of neuropeptides are ancient and are found in nearly all vertebrate groups," says professor Sigal Balshine. "What is especially exciting about these findings, is that they bolster the idea that function of these hormones, as modulators of social behaviour, has also been conserved."
Published in Animal Behaviour
Kama kata bunkai video
The kama is a traditional farming sickle, and it is considered one of the hardest weapons to learn because of the inherent danger in practicing with it. In the "weapon" model, the point at which the blade and handle join normally has a notch in which a bo can be trapped, although this joint proved to be a weak point in the design. Modern examples tend to have a shorter handle, with a blade that begins by following the line of the handle and then bends, though to a lesser degree; this form of the kama is known as the natagama.
The edge of a traditional rice sickle, such as one would purchase from a Japanese hardware store, continues to the handle without a notch, as this is unneeded for its intended use.
Health Points: Friday
- This is very cool! New research claims meditation is good for the brain. HealthDay News reports:
Imaging technology shows that people who practice meditation that focuses on kindness and compassion actually undergo changes in areas of the brain that make them more in tune to what others are feeling.
"Potentially one can train oneself to behave in a way which is more benevolent and altruistic," said study co-author Antoine Lutz, an associate scientist at the University of Wisconsin-Madison.
How far this idea can be extrapolated remains in question, though.
- Merck’s allergy drug Singulair is being linked to suicide risk. More from the Associated Press:
FDA said it is reviewing reports of mood changes, suicidal behavior and suicide in patients who have taken the drug, which was Merck's best-selling product last year.
In the past year Merck has updated the drug's labeling four times to include information on tremors, anxiousness, depression and suicidal behavior reported in some patients.
- Gina Kolata of The New York Times has determined that a “runner’s high” is a real thing. Take a look:
The runner’s-high hypothesis proposed that there were real biochemical effects of exercise on the brain. Chemicals were released that could change an athlete’s mood, and those chemicals were endorphins, the brain’s naturally occurring opiates. Running was not the only way to get the feeling; it could also occur with most intense or endurance exercise.
The problem with the hypothesis was that it was not feasible to do a spinal tap before and after someone exercised to look for a flood of endorphins in the brain. Researchers could detect endorphins in people’s blood after a run, but those endorphins were part of the body’s stress response and could not travel from the blood to the brain. They were not responsible for elevating one’s mood. So for more than 30 years, the runner’s high remained an unproved hypothesis.
But now medical technology has caught up with exercise lore. Researchers in Germany, using advances in neuroscience, report in the current issue of the journal Cerebral Cortex that the folk belief is true: Running does elicit a flood of endorphins in the brain. The endorphins are associated with mood changes, and the more endorphins a runner’s body pumps out, the greater the effect.
- Sorry, Santa Claus: a big belly is being linked to an increased risk of dementia. From The Seattle Times:
People who have big bellies in their 40s are much more likely to get Alzheimer's disease and other forms of dementia in their 70s, according to new research that links the middle-age spread to fading minds for the first time.
The study of more than 6,000 people found the more fat they had in their guts in their early- to mid-40s, the greater their chances of becoming forgetful or confused or showing other signs of senility as they aged. Those who had the most impressive midsections faced more than twice the risk of the leanest.
- Dr. Carol Byrd-Bredbenner of Rutgers University in New Brunswick, New Jersey, and colleagues found that many college students engaged in eating behaviors that could make them sick, like eating raw homemade cookie dough or runny eggs.
While people are becoming increasingly aware of food safety issues, Byrd-Bredbenner and her team note, surveys still show a substantial proportion run the risk of food poisoning by eating raw eggs, undercooked hamburger and other foods that may harbor harmful bacteria.
- Surprise-surprise, big tobacco funds cancer studies—shocking! More from the Associated Press:
The disclosure of hidden tobacco money behind a big study suggesting that lung scans might help save smokers from cancer has shocked the research community and raised fresh concern about industry influence in important science.
Two medical journals that published studies by Weill Cornell Medical College researchers in 2006 are looking into tobacco cash and other financial ties that weren't revealed. The studies reported benefits from lung scans, which the Cornell team has long touted.
- Get this, hairdressers and barbers may be at increased cancer risk. More from the American Cancer Society:
The IARC has labeled these occupations as "probably carcinogenic to humans," a classification the agency reserves for those exposures backed by fairly strong evidence. In 1993, the IARC found that hairdressers and barbers were probably exposed to cancer-causing substances, but at that time, evidence of an increased cancer risk in this population was "inadequate." This week's report, published in the Lancet Oncology, is based on a review of epidemiological studies published since that time.
Some of the products used by hairdressers and barbers (such as dyes, pigments, rubber chemicals, and curing agents) have been found to cause tumors in rats in laboratory studies or have been known to cause bladder cancer in humans. In some studies, increased risk has been associated with permanent dyes and use of darker-colored hair dyes.
In response to his complaint about why God had let the people of Israel continue to experience affliction, Moses received the divine assurance that, by a "strong hand" (or "a strong hand and a raised arm"), that is, by mighty divine power directed against him, Pharaoh would be forced to drive the people out of Egypt. God is then quoted as telling Moses: "I am YHWH, and I appeared to Abraham, to Isaac, and to Jacob as God Almighty, and by my name YHWH I did not make myself known to them." (6:1-3; see the Notes section.)
In his dealings with Abraham, Isaac, and Jacob, God revealed himself as the Almighty. For example, he demonstrated his role as the Almighty One when he revived the reproductive powers of Abraham and Sarah, making it possible for Sarah to give birth to Isaac in her old age. Moreover, based on the blessings and protective care they experienced, Abraham, Isaac, and Jacob would have discerned that God was the Almighty One, the Sovereign. According to the Genesis account, Abraham, Isaac, Jacob, and persons who lived long before their time were acquainted with the name YHWH. Therefore, their not knowing the unique name appears to relate to their not knowing everything that it signified — the fuller knowledge of the Almighty as the God to whom people of all the nations must submit. It would be futile for individuals, tribes, and nations to resist God’s will.
For the Israelites in the time of Moses, the name YHWH would come to have greater significance than it did for their forefathers. This is evident from the quoted words of YHWH that follow. Abraham, Isaac, and Jacob received the promise that their descendants, the Israelites, would receive the land of Canaan as their possession, a land in which their forefathers lived as resident aliens. The Israelites would come to know YHWH as the fulfiller of his promise and as the God who was fully aware of their suffering and groaning in Egypt. He would not forget the covenant he had concluded with their forefathers, but would demonstrate that he remembered it by acting in harmony therewith. YHWH would display his mighty power (literally, his “outstretched arm”) and deliver the Israelites from Egyptian enslavement and oppression. They would witness the impressive judgments of YHWH in the form of ten devastating plagues upon the Egyptians. Furthermore, the descendants of Abraham, Isaac, and Jacob as a people would be brought into a special relationship with YHWH. They would come to be his own people, and he would be their God YHWH, their God under whose protection and care they would find themselves. The Israelites would come to know YHWH in a greatly expanded way because of his freeing them from the harsh bondage that the Egyptians had imposed on them. They would take possession of the land that he swore (literally, “lifted up his hand” [as when taking an oath]) to give to their forefathers (Abraham, Isaac, and Jacob). (6:3-8)
As at other times, YHWH likely used his representative angel to speak to Moses. Thereafter Moses related the words to the "sons [or people] of Israel." They, however, did not "hear" or "listen" to Moses, in the sense that they did not believe his words. The people were disheartened or discouraged (literally, they experienced "shortness of spirit") on account of the harsh bondage to which they had been subjected. When YHWH told Moses to go to Pharaoh and inform him that he should let the "sons [or people] of Israel" depart from his land, Moses objected that the "sons of Israel" had not listened to him. How, then, could it be that Pharaoh would listen? Moses then referred to his lack of eloquence, saying, "And I am a man of uncircumcised lips" (as if a man with a speech impediment who could not express himself well). Nevertheless, YHWH gave Moses and Aaron the charge that applied both to the "sons of Israel" and to Pharaoh. That charge was for the "sons [or people] of Israel" to be led out of Egypt. According to the Septuagint, God instructed Moses and Aaron to inform Pharaoh that he should "send the sons of Israel out of the land of Egypt." (6:9-13; see the Notes section.)
At this point in the narrative, the heads of three paternal houses of the people of Israel are listed. They are: the sons of Reuben (Rouben [LXX]) the firstborn of Israel — Hanoch, Pallu, Hezron, and Carmi (Enoch, Phallous, Asron, and Charmi [LXX]); the sons of Simeon (Symeon [LXX]) — Jemuel, Jamin, Ohad, Jachin, Zohar, and Shaul the son of a Canaanite woman (Iemouel, Iamin, Aod, Iachin, Saar, and Saoul the one from the Phoenician [LXX]); the sons of Levi (Leui [LXX]) — Gershon, Kohath, and Merari (Gedson [Gerson], Kaath, and Merari [LXX]). In view of the role of Moses and Aaron, the family line of Levi is continued. Levi died at the age of 137. The sons of Gershon (Gedson [Gerson], LXX) were Libni and Shimei (Lobeni and Semei [LXX]). Kohath (Kaath [LXX]) had four sons (Amram, Izhar, Hebron, and Uzziel [Ambram, Issaar, Chebron, and Oziel (LXX)]) and lived 133 (130 [LXX]) years. The sons of Merari were Mahli and Mushi (Mooli and Omousi [LXX]). (6:14-19)
Amram (Ambram [LXX]) married Jochebed (Iochabed [LXX]), his father’s sister, or his aunt. According to the Septuagint, however, Jochebed was Amram’s cousin (the daughter of his father’s brother). Jochebed gave birth to the sons Aaron and Moses (Moyses [LXX]) and their sister Miriam (Mariam [LXX]). Her husband Amram died at the age of 137. (6:20; for additional comments, see Exodus 2:1 and the Notes section.)
The sons of Izhar (Issaar [LXX]) were Korah, Nepheg, and Zichri (Kore, Naphek, and Zechri [LXX]). Uzziel (Oziel [LXX]) had three sons (Mishael, Elzaphan, and Sithri [Elisaphan and Setri (LXX); Mishael is omitted in Rahlfs’ text of the Septuagint]). Aaron the brother of Moses married Elisheba (Elisabeth [LXX]) the daughter of Amminadab (Aminadab [LXX]) and the sister of Nahshon (Naasson [LXX]). She gave birth to four sons (Nadab, Abihu [Abioud (LXX)], Eleazar, and Ithamar). The sons of Korah (Kore [LXX]) were Assir (Asir [LXX]), Elkanah (Elkana [LXX]), and Abiasaph. Aaron’s son Eleazar married one of the daughters of Putiel (Phoutiel [LXX]), and she gave birth to Phinehas (Phinees [LXX]). (6:21-25)
The more extensive listing of the family line of Levi through Kohath served to identify the two brothers Aaron and Moses as the ones whom YHWH had commissioned to lead the Israelites out of Egypt. The two brothers informed Pharaoh regarding this. At the time YHWH, probably through his angel, spoke to Moses, he said, “I am YHWH; speak to Pharaoh the king of Egypt everything that I speak to you.” Considering himself to be not well-suited for the task, Moses objected, “Look, I am uncircumcised of lips [weak-voiced (LXX)]. How then will Pharaoh listen to me?” (6:26-30; see the Notes section.)
It appears that Josephus (Antiquities, II, xii, 4) believed that the revelation of the name YHWH did not precede the time of Moses. Also in modern times, many basically have agreed with this interpretation of the words of Exodus 6:3. The literal view of these words would require interpreting the references to the name YHWH in the Genesis account as reflecting what the Israelites knew at the time the account came to be in its final written form and not what individuals knew about God’s name YHWH and their use of the name in earlier centuries.
The Septuagint does not use the expression “uncircumcised of lips.” In verse 12, the rendering is alogós, and this word commonly means “unreasonable.” Possibly the thought is that Moses lacked eloquence or the ability to express himself well as would be characteristic of a person lacking reasonableness. In verse 30, the Septuagint reads, ischnóphonós (weak-voiced).
Before the Israelites received the law at Mount Sinai, marriage to an aunt was not prohibited. If the Hebrew reading of Exodus 6:20 preserves the original text, Amram married his aunt Jochebed. Manuscripts of the Septuagint vary about the age at which Amram died (132, 136, 137). |
The average weight of a chimpanzee brain is 384 grams (13.5 ounces); our brains are three and a half times heavier. That’s only part of the story though, because different species have different brain structures. Rodents’ brain cells, for example, are much less efficiently packed, and for a rat to have the same number of brain cells as us, it would need a brain at least 35 times larger than ours. However, it seems that chimps are close enough relatives that they have similar brain structures, so they do actually have about three and a half times fewer brain cells than us – at roughly 49 billion. Chimps have less brain dedicated to white matter in the temporal cortex, which means they have fewer neural connections and so a lesser ability to process data.
Answered by Luis Villazon |
Value-based pricing is a method of arriving at an amount to charge for goods or services through assessing their perceived value to the purchaser. The value-based model contrasts with cost-based pricing strategies, such as cost-plus.
Generally, businesses use value-based pricing as a means to a higher profit margin. In the consumer market, customers are often willing to pay more than a cost-based price would suggest, especially for emotional purchases. Customers may assess one company's product to be of greater value than a competitor's for many reasons, including brand image, design, packaging, marketing, warranties, previous experiences, and word of mouth. Apple, for example, has traditionally been able to achieve a higher profit margin because of the perceived cachet of its products and brand.
Companies that set good value-based prices take into account how customers see their product in the context of competitors' offerings. Once an objective assessment and comparison of the strengths and weaknesses of the products are made, a realistic value to the customer can be estimated for each difference, and the estimated values can be used to determine a reasonable value-based price for the product.
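As a rough illustration of the procedure described above, the sketch below (Python) starts from a competitor's reference price and adjusts it by the estimated customer value of each point of difference. All prices, labels, and value estimates here are hypothetical and only illustrate the arithmetic, not real market data.

```python
# Hypothetical sketch of value-based pricing relative to a competing product.
reference_price = 499.00  # competitor's comparable offering (assumed)

# Estimated customer value of each point of difference (assumed figures);
# negative values mean the competitor is stronger on that attribute.
value_of_differences = {
    "stronger brand image": 40.00,
    "longer warranty": 25.00,
    "weaker distribution network": -15.00,
}

value_based_price = reference_price + sum(value_of_differences.values())
print(f"Estimated value-based price: {value_based_price:.2f}")  # 549.00
```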
1. How does the UV-Aire work?
The UV-Aire installs inside the main ductwork or plenum of a forced
air heating or air conditioning system. As air passes the lamp, it is
treated with powerful UV-C rays, which reduce airborne contamination.
2. What is UV-C light and how does
it kill bacteria?
UV-C is the invisible, ultraviolet, C-band radiation that makes up
part of the sun's light spectrum. UV-C light prevents growth and
germination of microorganisms by altering the DNA and RNA and effectively
sterilizing the organisms. Once sterilized, they cannot reproduce and with
their short life cycles, they are effectively killed.
3. How long has UV been used in air purification?
Since 1936, UV has been used to sterilize air. It was first used to
purify air in a surgical operating room. UV has been used in schools to
decrease the growth of epidemics such as measles and tuberculosis. Other
applications include: barber shops, restaurants, incubation rooms,
veterinary clinics, and hospitals.
4. Why use a UV light product?
There are two primary benefits to using UV light. The first is to use
UV light to radiate a surface to keep mold from growing in that area. The
other use is, disinfecting the air stream as it passes through the HVAC
system. A high disinfection rate is not generally accomplished in the air
stream in one pass over the UV lamp. However, a significant disinfection
rate is accomplished with repeated circulation of air through the system,
making use of UV light very beneficial.
5. What is the importance of UV air purification?
People spend over 90% of their time indoors. With little or no
ventilation, concentrations of microorganisms will increase indoors,
potentially spreading a number of diseases. With increased cases of deaths
being caused by various bacterial diseases, controlling the growth and
spread of pathogens is of major concern in indoor environments. According
to indoor air quality experts, controlling airborne microorganisms is the
next major challenge of the HVAC industry.
6. How does UV-Aire differ from
other UV-C devices?
UV-C energy has been successfully used in many indoor environments.
UV-Aire was developed specifically for use in HVAC systems. It creates a
consistent, high output of UV-C energy. The UV-Aire's intensity output
maximizes microorganism disinfection and ensures cleaner indoor air.
7. Is the UV-Aire harmful in any way?
Direct exposure to UV light is not recommended, as it may cause damage to
skin and eyes. UV light does not pass through solid materials such as
plastic, glass or metal ductwork. Properly installed inside the duct, the
UV-Aire is a safe and practical product.
8. What effects will UV-C rays have
on plastics such as coil pans?
If the plastic is not UV resistant, UV-C can cause a breakdown of the
material over time. Based on lab tests, positioning the lamp 30 inches or
more away from plastic surfaces will eliminate any measurable breakdown of the material.
9. Is it affordable?
Compared to high costs of medical treatment and missed work as a result of
poor indoor air quality, the UV-Aire pays for itself quickly. The UV-Aire
costs only pennies a day to operate, consuming about the same amount of
energy as a 30-watt light bulb.
10. Is the product suitable for
people with severe allergy or asthma problems?
Yes. The UV-Aire can offer relief to many allergy and asthma sufferers by
reducing airborne contamination. However, the device is not exclusively
for people with respiratory disorders. Your entire family can benefit from
breathing healthier air.
11. Does the UV-Aire produce a fresh smell?
Many smells are not addressed by the UV-Aire. However, some unpleasant
smells develop from the growth of microorganisms. The UV-Aire works to
reduce mold and other common household germs, in many cases resulting in a
fresher smelling environment.
12. Is the technology proven?
Yes! UV-Aire and other UV products have been installed in all types of
buildings including: homes, hospitals, offices, public buildings, food
preparation plants, electric utilities companies, and more. Users
consistently report improvements in air quality and reduced respiratory problems.
13. Does UV light take the place of a filter?
No. The UV-Aire should be used in conjunction with a filter. HVAC filters
trap airborne particles based on their size, allowing most microorganisms
to pass through undeterred. The Ultraviolet light attacks microorganisms.
It is recommended to install the UV-Aire downstream of the air filter.
14. How can I get the product
installed in my home?
Please contact our office to schedule an installation date.
15. How long does installation take?
Installation generally takes less than an hour.
16. Why does the unit remain on at all times?
Microorganisms like moist, dark places. When the light remains on, the
reproduction of these organisms may be reduced. It is also
more energy efficient to leave the lamp on continuously. Similar to
fluorescent lights, the energy required to start the UV lamp is high,
while operating energy is low.
17. Should the HVAC appliance fan or
blower run continuously?
No. This is not necessary. During normal operation of the heating or air
conditioning, the blower will circulate the air over the UV lamp from
50-75 times a day, which is sufficient. During moderate weather, when
neither the heating or air conditioning is on, it is recommended to open
the windows to allow for fresh air infiltration and/or to operate the
blower continuously (turn on the fan) to circulate air over the UV light.
18. What kind of maintenance is required?
It is recommended to periodically inspect the lamp's operation through the
view port to ensure the lamp is on. It is also necessary to change the UV
lamp annually, as the intensity of the lamp's output diminishes over time.
The changing of the UV lamp should be done during annual air conditioning
or furnace inspection.
19. Does the lamp require cleaning?
Dirt and oil on the surface of the lamp reduce output intensity. Upon
installation, the lamp should be wiped with the alcohol swab provided. The
lamp may need to be cleaned every 3 to 6 months depending on its operating
environment. Simply remove the lamp and wipe with alcohol. Avoid touching
the lamp with bare hands. The oils in your hands may reduce UV output.
20. Why does the lamp require an annual replacement?
After 9,000 hours of operation (about 375 days), the lamp starts to become "solarized".
The UV-C output is reduced to around 80% of its original intensity, which
steadily declines thereafter. The lamp will still be illuminated,
producing visible light. However, the UV-C output will continue to diminish, reducing the lamp's effectiveness.
21. How much electricity does the UV-Aire use?
Approximately 30 watts with 1 lamp and 60 watts with 2 lamps.
22. What is the best location for the UV-Aire?
Install the UV-Aire in either the supply or return plenum of the warm air
heating system. With air conditioning systems, the best location is over
the air conditioning coil.
23. Why above the air conditioning coil?
Moisture accumulates on the air conditioning coil, creating a damp surface
for growth of mold and microorganisms. With the UV lamp over the air
conditioning coil, the lamp constantly soaks the coil with its rays,
effectively disinfecting the air as well as keeping the coil clean.
24. Can the UV-Aire be installed in
the return air?
Yes! An optional location is the return air. We suggest the unit be
located upstream or prior to the humidifier. This should prevent the lamp
from getting water spots, which reduce UV output.
25. How long is the warranty?
The warranty is one year from the date of purchase for the unit and 30
days for the lamp.
26. Why use eye protection?
UV light cannot be seen. When you look at a UV lamp, you are seeing the
visible light, not UV light. There are several bands in the UV light
spectrum. UV-C is used to control mold and microorganisms. UV-C light will
damage human tissue following continuous exposure and can severely burn
the eyes. A glance from a distance may not be a problem. But, looking at a
UV-C lamp close up for 5-10 seconds could damage the eyes. Protecting the
eyes with plastic protective goggles is recommended.
27. What is the meaning of
microwatts per square centimeter at 1 meter?
This is an intensity rating: the amount of UV-C energy exposed onto one
square centimeter of surface area on a target placed 1 meter from the lamp.
28. What precautions should be taken before opening or servicing the ductwork
where a UV-C lamp is in use?
The UV-C lamp should be turned OFF prior to entering the ductwork. An
external switch is provided as well as warning labels regarding service |
- Routing and Switching Network Engineer
- Interconnecting Cisco® Networking Devices Part 2 (ICND2)
Topics included in this course are: configuring, verifying, and troubleshooting a switch with VLANs and interswitch communications; implementing an IP addressing scheme; and many more.
- Course Outline
The following topics are general guidelines for the content likely to be
included on the exam. However, other related topics may
also appear on any specific delivery of the exam. In order to better reflect
the contents of the exam and for clarity purposes, the guidelines below may
change at any time without notice.
Configure, verify, and troubleshoot a switch with VLANs and interswitch communications
• Describe enhanced switching technologies (including: VTP, RSTP,
VLAN, PVSTP, 802.1q)
• Describe how VLANs create logically separate networks and the
need for routing between them
• Configure, verify, and troubleshoot VLANs
• Configure, verify, and troubleshoot trunking on Cisco switches
• Configure, verify, and troubleshoot interVLAN routing
• Configure, verify, and troubleshoot VTP
• Configure, verify, and troubleshoot RSTP operation
• Interpret the output of various show and debug commands to verify
the operational status of a Cisco switched network
• Implement basic switch security (including: port security,
unassigned ports, trunk access, etc.)
Implement an IP addressing scheme and IP Services to meet
network requirements in a medium-size Enterprise branch office network
• Calculate and apply a VLSM IP addressing design to a network
• Determine the appropriate classless addressing scheme using VLSM
and summarization to satisfy addressing requirements in a LAN/WAN environment
• Describe the technological requirements for running IPv6
(including: protocols, dual stack, tunneling, etc)
• Describe IPv6 addresses
• Identify and correct common problems associated with IP
addressing and host configurations
Configure and troubleshoot basic operation and routing on Cisco devices
• Compare and contrast methods of routing and routing protocols
• Configure, verify, and troubleshoot OSPF
• Configure, verify, and troubleshoot EIGRP
• Verify configuration and connectivity using ping, traceroute, and
telnet or SSH
• Troubleshoot routing implementation issues
• Verify router hardware and software operation using SHOW & DEBUG commands
• Implement basic router security
Implement, verify, and troubleshoot NAT and ACLs in a
medium-size Enterprise branch office network
• Describe the purpose and types of access control lists
• Configure and apply access control lists based on network filtering
• Configure and apply an access control list to limit telnet and SSH
access to the router
• Verify and monitor ACL's in a network environment
• Troubleshoot ACL implementation issues
• Explain the basic operation of NAT
• Configure Network Address Translation for given network
requirements using CLI
• Troubleshoot NAT implementation issues
Implement and verify WAN links
• Configure and verify Frame Relay on Cisco routers
• Troubleshoot WAN implementation issues
• Describe VPN technology (including: importance, benefits, role, impact, components)
• Configure and verify a PPP connection between Cisco routers
- Prerequisites & Certificates
Certificate of Completion
- Cancellation Policy
Cancellations less than 2 business weeks before the expected delivery date are eligible for a 50% refund, or a credit voucher will be provided for regularly scheduled courses (choice being that of the registrant). Credit Vouchers are transferable within the same company. Please send your cancellation notice to [email protected].
- Map & Reviews
Itplanit Services Corp.
When faced with a worksheet packed full of data, with many columns and perhaps hundreds or thousands of rows, making sense of it all can be a daunting task. PivotTables help you pull out just the data you need to quickly make informed decisions. They are very flexible, easy to adjust, and can be created and modified with just a few clicks. Don’t worry if PivotTables are confusing at first, they will make a lot more sense once you start working with them.
- Your data should be neatly organized into rows and columns without any blank rows or columns.
- Each column should have the same data type. For example, you shouldn’t have a column of prices where some cells have the currency format applied and some have the accounting format applied.
- PivotTables can be created using a cell range or an existing table.
- Select any cell in the data range you want to analyze.
- Click the Insert tab on the ribbon.
- Click the PivotTable button in the Tables group.
The Create PivotTable dialog box opens. Here, choose which data to analyze and where to place the PivotTable.
If you’ve already clicked within a data range, the Table/Range field is populated. Verify the correct range is displayed.
The data range doesn’t have to be in the current workbook. Select Use an external data source to select data outside the workbook.
- Click OK.
An empty PivotTable and task pane appear on a separate worksheet. Next you need to specify the fields you want to appear in your PivotTable.
Once you’ve created your PivotTable, you have to specify the data you want to analyze. The PivotTable Fields pane appears at the right. Under the Search field you see a list of all the possible fields you can use in your PivotTable. These fields are the column headings from the original data source.
To make it a little easier to understand, let’s break it down. Say your original data set contains information for ticket sales and includes dates, destinations, prices, the number of sales, sales totals, sales agents, etc., but all you really need to know is how many tickets were sold each month for each destination. You can grab the Destination field and the Date field, add them as rows and columns in the PivotTable, and add a numeric sales field to the values area. The PivotTable will display a subset of the original data, but only include the values you really need to see.
- Click and drag a field to the Rows area.
- Click and drag a field to the Values area.
- If desired, click and drag a field to the Columns area.
If you want to filter the PivotTable, add an additional field to the Filters area.
The PivotTable updates to display the values for the fields you’ve added. The great thing about PivotTables is they are extremely flexible. If the table isn’t displaying the data like you want, just click and drag fields in and out of the Rows, Values, and Columns areas until the PivotTable represents the data correctly.
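For readers who prefer a programmatic analogy, the sketch below reproduces the ticket-sales example described earlier using the pandas pivot_table function. The data frame, column names, and figures are invented for illustration; the result mirrors what the Excel PivotTable would show, with destinations as rows, months as columns, and summed ticket counts as values.

```python
import pandas as pd

# Invented ticket-sales records mirroring the example above
sales = pd.DataFrame({
    "Date": ["2023-01-15", "2023-01-20", "2023-02-03", "2023-02-17"],
    "Destination": ["Paris", "Rome", "Paris", "Rome"],
    "Tickets": [120, 95, 130, 88],
})
sales["Month"] = pd.to_datetime(sales["Date"]).dt.to_period("M")

# Rows = Destination, Columns = Month, Values = sum of Tickets
pivot = sales.pivot_table(index="Destination", columns="Month",
                          values="Tickets", aggfunc="sum")
print(pivot)
```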
The science bit
Hardy, twining, herbaceous perennial. Stem rough, produces a milky latex sap. Leaves 3-5 lobed, oppositely arranged, coarsely toothed. Plants have either male or female flower cones (inflorescences), i.e. the species is dioecious. Male inflorescences are panicles; female inflorescences are round spikes, both with papery bracts. Fruits much larger than inflorescences, spherical, straw-coloured. Pollinated by wind.
- Hops can be grown in different ways: on hop hills (on 10-foot-tall posts in a mound of compost); up sisal strings suspended from wires attached to chestnut posts; or trained by the ‘butchers’, ‘umbrella’, and ‘Worcester’ methods. Hop workers used to wear stilts to sort the wires out.
- Hops are susceptible to several pests and diseases and often require spraying.
Where it grows
Hops, like barley, come from South-East Asia, and have been used in beer for the last 10,000 years. In Europe they have been used since the 9th century and in Britain since the 15th century. They are now grown across the Northern Hemisphere. They generally require an average summer temperature of 16–18°C, but do well over a wide range of soils provided they are fertile and moisture-holding: light to heavy loams are best.
Hops were originally used in beer-making to stop the drink going sour. In the days when water in Britain was not clean, beer provided a nutritious thirst-quenching alternative. As late as the 1600s men, women and children sometimes drank around three litres of weak beer a day.
Today the ale keeps anyway due to high hygiene standards, but hops are still used for their characteristic bitter taste, flavour stability, and retention of the foamy head on top. Globally, unfertilised flowers are used in brewing and are produced in all-female ‘nunnery’ hop gardens. Hop production in Britain differs in that male hops are allowed in at a ratio of 1-to-200. This was a very early form of biological control! Under Britain’s particular climatic conditions the unfertilised stigmas were a focus for powdery mildew infection. Pollination led to the stigmas on the female flowers withering rapidly, reducing the chance of infection.
Recently a new hop variety has been launched that is suitable for organic production with minimal use of pesticide: ‘Boadicea’ is a dwarf English female hop that is resistant to damson hop aphid and powdery and downy mildew.
- Bract: modified or specialised leaf in a flowering structure (inflorescence).
- Herbaceous: possessing characteristics of herbs.
- Loam: well-drained soil composed mainly of sand and clay.
- Lobe: incomplete division in any plant organ (eg leaf).
- Panicle: branched flower stalk.
- Perennial: lives for at least two years. |
Discourse and argumentation are effective techniques for education not only in social domains but also in science domains. However, it is difficult for some teachers to stimulate an active discussion between students because several students might not be able to develop their arguments. This paper proposes to use WordNet as a semantic source in order to generate questions that are intended to stimulate students’ brainstorming and to help them develop arguments in a discussion session. In a study including 141 questions generated by human experts and 44 questions generated by a computer system, the following research questions have been investigated: Are system-generated questions understandable? Are they relevant to given discussion topics? Would they be useful for supporting students in developing new arguments? Are understandable and relevant system-generated questions predicted to be useful for students in order to develop new arguments? The evaluation showed that system-generated questions could not be distinguished from human-generated questions in the context of two discussion topics while the difference between system-generated and human-generated questions was noticed in the context of one discussion topic. In addition, the evaluation study showed that system-generated questions that are relevant to a discussion topic correlate moderately with questions that are predicted as useful for students in developing new arguments in the context of two discussion topics and understandable system-generated questions are rated as useful in the context of one specific discussion topic.
An argument is an artifact that is created to articulate and justify claims, explanations, or viewpoints, and argumentation is the process of generating these artifacts (Osborne et al. 2004; Sampson and Clark 2008). The ability to generate good arguments that involve evidence and theory to support or reject a claim or an explanation is an important component of inquiry learning (Sampson and Clark 2008; Duschl and Osborne 2002).
Questioning can be deployed to advance the argumentation ability of students, and teacher-initiated questions might stimulate the thinking process of students. Studies have reported that deploying questions can be effective for learning. With novice computer scientists, asking effective questions during the early phases of planning, a solution can support the students’ comprehension and decomposition of the problem at hand (Lane and VanLehn 2005). Asking targeted, specific questions is useful for revealing knowledge gaps with novices, who are often unable to articulate their questions (Tenenberg and Murphy 2005). Other researchers proposed to use questions to encourage students’ self-explanation. Questions of this type are referred to as explanation prompts and have demonstrated to be a promising instructional support feature (Berthold et al. 2011) and highly beneficial for learning (Chi et al. 1994). Questions can not only be used as a teaching technique by teachers; Yu and Liu (2008) reported that requesting students to pose questions by themselves during the learning process helps students develop both cognitive and metacognitive strategies.
The goal we pursue in our research is to generate questions automatically in order to support students in developing their own arguments for a given discussion topic so that they could improve their argumentation ability and would be more active in a discussion session. As the first step on the way to achieve this goal, in this paper, we investigate whether WordNet (Miller 1995), a lexical database for English, is an appropriate source for generating questions automatically. For this purpose, we will investigate the following research questions:
Are questions that are generated using WordNet as understandable as human-generated questions?
Are questions that are generated using WordNet as relevant to a given discussion topic as human-generated questions?
Are questions that are generated using WordNet perceived as useful as human-generated questions?
Are understandable and relevant system-generated questions predicted to be useful for students in order to develop new arguments?
State of the art of using questions in technology-enhanced learning
In this section, educational applications of automatic question generation are reviewed and classified. This paper extends the four classes of educational applications of question generation proposed in Le et al. (2014) with a new class: prompts for education.
The first class includes systems that pose prompts to students and have proven to be effective in supporting cognitive and meta-cognitive learning strategies (Glogger et al. 2009; Wong et al. 2002). Prompts are hints or questions that induce productive learning processes. Prompting assumes that learners already know certain learning strategies, but that they are not able to apply them appropriately. Prompts are supposed to overcome the deficiency of applying learning strategies, that is, a student’s lack of application of a helpful strategy that is already in a student’s repertoire (Glogger et al. 2009; Flavell 1978). Prompts can also be used to support journal writing. Writing learning journals, students are instructed to write down a text in which they reflect on the previous classes’ learning contents and their learning process. Berthold et al. (2007) found that cognitive prompts or a combination of cognitive and meta-cognitive prompts elicited significantly more corresponding learning strategies compared to no prompts or just meta-cognitive prompts. Schwonke et al. (2006) also reported benefits of deploying adaptive cognitive and meta-cognitive prompts to help students revise learning journals. Nückles et al. (2009) compared the usefulness of different sets of prompts for writing journals and reported that participants, who received cognitive and meta-cognitive prompts including hints on planning or remedial strategies, outperformed the participants in the other conditions (no prompts, only using cognitive prompts, only meta-cognitive prompts, cognitive and just monitoring prompts as meta-cognitive prompts).
The second class of applications of automatic generated questions includes systems that are intended to help students acquire knowledge or skills. Kunichika et al. (2001) proposed an approach to extracting syntactic and semantic information from an original text and questions are constructed based on the extracted information. The authors reported that 80 % of the automatically generated questions were considered as appropriate for novices learning English by experts. Aiming at improving reading skills of students, Mostow and his research group (for instance, Mostow et al. 2008; Mostow et al. 2013) developed an automated reading tutor which generates questions automatically for enhancing the student’s comprehension of text reading. Mostow and Chen (2009) investigated how to generate self-questioning instruction automatically on the basis of statements about mental states (e.g., belief, intention, supposition, and emotion) in narrative texts. The reading tutor has been evaluated with respect to the acceptability of menu choices (grammatical, appropriate, and semantically distinct), the acceptability of generated questions, and the accuracy of feedback. Mostow and Chen (2009) reported that only 35.6 % of generated questions could be rated as acceptable. In the same class of educational applications of question generation, Liu and colleagues (Liu et al. 2012) introduced a system (G-Asks) for improving students’ writing skills (e.g., citing sources to support arguments, presenting the evidence in a persuasive manner). The approach implemented in this system consists of three stages. First, citations in an essay written by the student are extracted, parsed, and simplified. Then, in the second stage, the citation category (opinion, result, aim of study, system, method, and application) is identified for each citation candidate. In the final stage, an appropriate question is generated using pre-defined question templates. Evaluation studies have shown that the system could generate questions as useful as human supervisors and significantly outperformed human peers and generic questions in most quality measures after filtering out questions with grammatical and semantic errors (Liu et al. 2012).
The third class of educational applications of question generation aims at assessing the knowledge of students. Heilman and Smith (2009) developed an approach to generating questions for assessing students’ acquisition of factual knowledge from reading materials. The authors developed general-purpose rules to transform declarative sentences into questions. The approach includes an algorithm to extract simplified statements from appositives, subordinate clauses, and other constructions in complex sentences of reading texts. Evaluation studies have been conducted to assess the quality and precision of automatically generated questions using Wikipedia and news articles. The authors reported that the acceptability of top-ranked WH questions is around 40–50 %. Furthermore, K-12 teachers created factual questions by selecting and revising suggestions from the system with less effort than by writing questions on their own (Heilman 2011). One common form for assessing student’s factual knowledge is the use of multiple-choice tests. Mitkov and colleagues (Mitkov et al. 2006) developed a computer-aided environment for generating multiple-choice test items. The authors deployed various natural language processing techniques (shallow parsing, automatic term extraction, sentence transformation, and computing of semantic distance). In addition, the authors exploited WordNet, which provides language resources for generating distractors for multiple-choice questions. In addition to generating test items automatically, the system provides the user the option to post-process the test items. The authors reported that the time required for generating questions including manual correction was less than for manually creating questions alone (Mitkov et al. 2006). Also with the purpose of assessing students’ knowledge, Brown and colleagues (Brown et al. 2005) developed the system REAP which is intended to provide students with texts to read according to their individual reading levels. The system chooses text documents which include 95 % of words that are known to the student while the remaining 5 % of words are new to the student and need to be learned. After reading the text, the student’s understanding is assessed. The system generates different types of questions including word bank and multiple-choice questions. In contrast to Mitkov and colleagues who used WordNet to generate distractors, Brown et al. (2005) used WordNet to generate different types of questions (definition, synonym, antonym, hyperonym, hyponym, and cloze questions). Experimental results have been reported that with automatically generated questions, students achieved a measure of vocabulary skill that is comparable to performance on independently developed human-generated questions. Another form of assessing student’s knowledge is to rely on fill-in-the-blank questions. Hoshino and Nakagawa (2005) proposed to deploy standard classification methods to decide the position of the gap in a fill-in-the-blank item. Sumita et al. (2005) developed fill-in-the-blank questions by replacing verbs with gaps in an input sentence. Possible distractors are retrieved from a thesaurus by choosing the same Part of Speech (e.g., noun, verb, adjective) and similar word frequency in a tagged corpus. A new sentence is created by placing a distractor in the gap position in the original sentence and is then used as the input for a search on the Internet. If the sentence is found on the Internet, the distractor is considered invalid. 
In an evaluation of this approach, participants who took a test consisting of automatically generated items achieved scores that highly correlated with their scores in the Test of English for International Communication (TOEIC).
The fourth class of educational applications of question generation includes systems that are able to provide tutorial dialogues. Olney and colleagues (Olney et al. 2012) presented a method for generating questions for tutorial dialogue. This involves automatically extracting concept maps from textbooks in the domain of Biology. This approach does not deal with the input text on a sentence-by-sentence basis only. Rather, various global measures (based on frequency measures and comparison with an external ontology) are applied to extract an optimal concept map from the textbook. Person and Graesser (2002) developed an intelligent tutoring system that improves students’ knowledge in the areas of computer literacy and Newtonian physics using an animated agent. Each topic contains a focal question, a set of good answers, and a set of anticipated bad answers (misconceptions). The system initiates a session by asking a focal question about a topic and the student is expected to write an answer containing 5–10 sentences. Initially, the system used a set of predefined hints or prompts to elicit the correct and complete answer. Graesser and colleagues (Graesser et al. 2008) reported that with respect to learning effectiveness, the system had a positive impact on learning with effect sizes of 0.8 standard deviation units compared with other appropriate conditions. Lane and VanLehn (2005) developed PROPL, a tutor which helps students build a natural-language style pseudo-code solution to a given problem. The system initiates four types of questions: 1) identifying a programming goal, 2) describing a schema for attaining this goal, 3) suggesting pseudo-code steps that achieve the goal, and 4) placing the steps within the pseudo-code. Through conversations, the system tries to remediate a student’s errors and misconceptions. If the student’s answer is not ideal (i.e., it cannot be understood or interpreted as correct by the system), sub-dialogues are initiated with the goal of soliciting a better answer. PROPL has been evaluated with the programming languages Java and C and it has been reported that students who used this system were frequently better at creating algorithms for programming problems and demonstrated fewer errors in their implementation (Lane and VanLehn 2005).
In contrast to traditional approaches to generating questions using text as input and deploying various natural language processing techniques for creating questions, the fifth class of educational applications of question generation exploits linked open data that are a part of the semantic web (Heath and Bizer 2011) for generating questions. Jouault and Seta (2013, 2014) proposed to generate semantics-based questions by querying information from the large linked open data sources DBpedia (http://dbpedia.org/) and Freebase (https://www.freebase.com/) to facilitate learners’ self-directed learning. Using this system, students in self-directed learning are asked to build a timeline of events of a history period with causal relationships between these events given an initial document. The student develops a concept map containing a chronology by selecting concepts and relationships between concepts from the given initial Wikipedia document to deepen their understanding. While the student creates the concept map, the system also generates its own concept map by referring to semantic information from DBpedia and Freebase. The system’s concept map is updated with every modification of the student’s one and enriched with related concepts that can be queried from both linked open data sources. Thus, the system’s concept map always contains more concepts than the student’s map. Using these related concepts and their relationships, the system generates questions for the student to lead to a deeper understanding without forcing to follow a fixed path of learning.
Five classes of existing educational applications of automatic question generation have been reviewed. The fifth class of educational applications, which makes use of the semantic web for generating questions, needs more research. At present, to the best of our knowledge, only the work of Jouault and Seta (2013, 2014) falls in this research direction. In light of this research gap, this paper proposes to use WordNet in order to generate questions that aim at stimulating the brainstorming of students during the process of argumentation. WordNet (cf. “Methods” section) was chosen as a semantic source for generating questions because it is a rich lexical database that is able to provide hyponyms (related concepts) for a queried concept. We hypothesize that hyponyms could be used to generate questions that are related to a given discussion topic.
Although the question generation approach presented in this paper and the work of Jouault and Seta are both intended to help students deepen their understanding of a learning/discussion topic by working with generated questions, our approach differs from the work of Jouault and Seta in two respects: 1) with respect to the technical approach, while Jouault and Seta adopted ontology and linked open data techniques to sidestep the difficulty of the natural language understanding problem in the learning domain (in this case, the history domain), this paper deploys natural language processing techniques (e.g., a natural language parser) to extract important concepts from a discussion topic and uses WordNet to query related concepts that are relevant for discussion; 2) with respect to learning goals, Jouault and Seta proposed to use automatically generated questions for enhancing students’ knowledge of history, whereas our approach focuses on helping students develop new arguments in the argumentation process.
Question generation using WordNet
In this section, we describe conceptually how questions can be generated in our approach. A more detailed technical description of the approach is presented in Le et al. (2014b). In order to illustrate the question generation approach proposed in this paper, we will use the following discussion topic that can be given to students in a discussion session:
The catastrophe at the Fukushima power plant in Japan has shocked the world. After this accident, the Japanese and German governments announced that they are going to stop producing nuclear energy. Should we stop producing nuclear energy and develop renewable energy instead?
From the discussion topic, we note that the following noun phrases can serve as starting points to generate questions: catastrophe, Fukushima power plant, nuclear energy, renewable energy. This step is described in more detail in the following subsection.
Analyzing text structure and identifying key concepts
In order to automatically recognize key concepts of a discussion topic, a natural language parser is used to analyze the grammatical structure of a sentence into its constituents. The parser analyzes a text and identifies the category of each constituent (for instance: determiner, noun, or verb). This parsing process results in a parse tree. Since nouns and noun phrases can be used as key concepts in a discussion topic, we select from the parse tree of the parsed discussion text only constituents which are tagged as nouns (NN) or noun phrases (NP) (cf. Fig. 1). Since the present implementation of our approach is not able to determine which concept is more important than another, the system proposed here uses all extracted key concepts that are marked as NN or NP in the resulting parse tree.
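The implementation details of this step are given in Le et al. (2014b) and are not reproduced here. The following is a minimal illustrative sketch, assuming the NLTK toolkit (not necessarily the parser used in our implementation), of how noun and noun-phrase constituents could be extracted from a discussion topic:

```python
# Minimal sketch (not the authors' implementation): extracting candidate key
# concepts (nouns and noun phrases) from a discussion topic with NLTK.
# Assumes the NLTK data packages 'punkt' and 'averaged_perceptron_tagger'
# have been downloaded.
import nltk

def extract_key_concepts(topic_text):
    """Return noun/noun-phrase candidates found in the topic text."""
    grammar = "NP: {<JJ>*<NN.*>+}"  # simple noun-phrase chunk pattern
    chunker = nltk.RegexpParser(grammar)
    concepts = []
    for sentence in nltk.sent_tokenize(topic_text):
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
        tree = chunker.parse(tagged)
        for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
            concepts.append(" ".join(word for word, _tag in subtree.leaves()))
    return concepts

topic = ("The catastrophe at the Fukushima power plant in Japan has shocked "
         "the world. Should we stop producing nuclear energy and develop "
         "renewable energy instead?")
print(extract_key_concepts(topic))
# e.g. ['catastrophe', 'Fukushima power plant', 'Japan', 'world',
#       'nuclear energy', 'renewable energy']
```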
Question generation using noun phrases in a discussion topic
Using the extracted key concepts, we are ready to generate questions. The next issue that needs to be addressed is to determine the types of questions to be generated. According to Wilen (1991), there exist more than 21 classification systems for classroom questions (e.g., Bloom (1956), Otero and Graesser (2001), Schreiber (1967), Pate and Bremer (1967), and Graesser and Person (1994)). While Bloom’s taxonomy is widely used for classroom teaching (Arias de Sanchez 2013), the question taxonomy for tutoring proposed by Graesser and Person (1994) is specialized for one-on-one tutoring. This taxonomy consists of 16 question categories: verification, disjunctive, concept completion, example, feature specification, quantification, definition, comparison, interpretation, causal antecedent, causal consequence, goal orientation, instrumental/procedural, enablement, expectation, and judgmental. The first 4 categories are classified as simple/shallow, 5–8 as intermediate, and 9–16 as complex/deep questions. We apply this question taxonomy to define appropriate question templates for generating questions, because it is more fine-grained than Bloom’s taxonomy and, as stated, has been designed for one-on-one settings (cf. Table 1). Using the defined question templates, we are able to replace the placeholder X with nouns and noun phrases extracted from a discussion topic. For example, the following question templates are filled with the noun phrase “nuclear energy” and result in questions (a brief sketch of this filling step follows the templates).
What does <X> remind you of?
What are the properties of <X>?
What is an example of <X>?
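To make the filling step concrete, here is a minimal sketch. The three templates shown above stand in for the full set of fourteen templates in Table 1, which is not reproduced here:

```python
# Minimal sketch of the template-filling step. Only the three templates from
# the example above are listed; the full template set is defined in Table 1.
QUESTION_TEMPLATES = [
    "What does {X} remind you of?",
    "What are the properties of {X}?",
    "What is an example of {X}?",
]

def fill_templates(concept, templates=QUESTION_TEMPLATES):
    """Replace the placeholder X in each template with the given concept."""
    return [template.format(X=concept) for template in templates]

for question in fill_templates("nuclear energy"):
    print(question)
# What does nuclear energy remind you of?
# What are the properties of nuclear energy?
# What is an example of nuclear energy?
```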
Question generation using related concepts in WordNet
Semantics-based question generation approaches use a source of semantic information which is related to the topic being discussed. Since in this paper we focus on using semantic information available on the Internet for generating questions, the source of “semantic information” we look for is on the semantic web. For example, Wikipedia (https://www.wikipedia.org/) provides descriptions of concepts. While Wikipedia might contain incorrect information due to its contribution mechanism, one of its advantages is that the description of many concepts is available in many different languages. If we want to develop question generation for different languages, Wikipedia might be an appropriate source. WordNet (Miller 1995) also provides a source of semantic information which can be related to a discussion topic. WordNet is an online lexical reference system for English. Each noun, verb, or adjective represents a lexical concept. A concept is represented as a synonym set (called synset), i.e., the set of words that share the same meaning. Between two nominal synsets, WordNet provides semantic relations. The hyponym relation represents a concept specialization. For example, for the concept “energy”, WordNet provides a list of direct hyponyms which are directly related to the concept being searched and represent specializations: “activation energy”, “alternative energy”, “atomic energy”, “binding energy”, “chemical energy”, and more. In addition, a synset can contain example sentences, which can be used for generating questions. For example, for the concept “energy”, WordNet provides an example sentence such as “energy can take a wide variety of forms”. One of the advantages of WordNet is that it provides accurate information (e.g., hyponyms) and grammatically correct example sentences.
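As an illustration, direct hyponyms and example sentences can be queried through NLTK’s WordNet interface; this is an assumption about tooling, not necessarily the interface used in our implementation:

```python
# Minimal sketch: querying direct hyponyms and example sentences for a concept
# via NLTK's WordNet interface (assumes the 'wordnet' corpus is downloaded).
from nltk.corpus import wordnet as wn

def hyponyms_and_examples(concept, pos=wn.NOUN):
    """Return direct hyponym lemmas and example sentences for a concept."""
    hyponyms, examples = [], []
    for synset in wn.synsets(concept, pos=pos):
        examples.extend(synset.examples())
        for hypo in synset.hyponyms():
            hyponyms.extend(lemma.name().replace("_", " ")
                            for lemma in hypo.lemmas())
    return sorted(set(hyponyms)), examples

hypos, exs = hyponyms_and_examples("energy")
print(hypos[:5])  # e.g. ['activation energy', 'alternative energy', ...]
print(exs[:1])    # e.g. ['energy can take a wide variety of forms']
```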
Placeholders in question templates (Table 1) can be filled with appropriate hyponym values for generating questions. For example, the noun “energy” exists in the discussion topic; after extracting this noun as a key concept, it can be used as input for WordNet, which provides several hyponyms, including “activation energy”. The following question templates can be used to generate questions of the question category “Definition” (see Table 2).
The goal of the evaluation is to determine whether automatically generated questions are of as high quality as human-generated questions. That is, we want to know whether an automatically generated question can be identified by human raters and how they rate the quality of system-generated questions as compared to human-generated questions.
In the first evaluation phase, we invited eight experts from the research communities of argumentation and question/problem generation to manually create questions. We gave them the following three discussion topics and asked them to create questions which can be used to support students in developing arguments. Since the eight experts work in the USA, Europe, and Asia, we chose discussion domains with international relevance which had been in the news recently. For this study, we chose the domains of energy and economy. Each discussion topic consisted of two sentences and an initial discussion question. This construction of the discussion topics was chosen so that discussion participants and human experts would have enough “material” for thinking about a specific problem. If a discussion topic were too short (e.g., only a sentence or a discussion question), it might be difficult for discussion participants to initiate a discussion or for human experts to think of questions to be generated:
Topic 1: The catastrophe at the Fukushima power plant in Japan has shocked the world. After this accident, the Japanese and German governments announced that they are going to stop producing nuclear energy. Should we stop producing nuclear energy and develop renewable energy instead?
Topic 2: Recently, the International Monetary Fund announced that growth in most advanced and emerging economies was accelerating as expected. Nevertheless, fear of deflation is occurring and increasing in Europe and the US. Should we have fear of deflation?
Topic 3: “In recent years, the European Central Bank (ECB) responded to Europe's debt crisis by flooding banks with cheap money…ECB President has reduced the main interest rate to its lowest level in history, taking it from 0.5 to 0.25 percent” (Kwasniewski 2013). How should we invest our money?
From our eight experts, we received 54 questions for topic 1, 47 questions for topic 2, and 40 questions for topic 3.
For each discussion topic, the system generated several hundred questions (e.g., 844 questions for topic 1), because several key concepts were extracted from each discussion topic, and each key concept was extended with a set of hyponyms queried from WordNet. For each key concept and each hyponym, fourteen questions were generated based on the question templates in Table 1. Since the set of generated questions was too big for expert evaluation, in the second evaluation phase we randomly selected a small number of automatically generated questions, so that the proportion between the automatically generated questions and the human-generated questions was about 1:3. There were two reasons for this proportion. First, if the proportion of automatically generated questions were too high, it could distort the real “picture” given by the human-generated questions. Second, we needed to make a trade-off between having enough (both human-generated and system-generated) questions for evaluation and keeping the workload for human raters moderate. The proportions of automatically generated questions and of human-generated questions are shown in Table 3.
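The selection procedure itself is straightforward; the following is a minimal sketch of one way to draw such a random sample at the target proportion (the function name and seed are illustrative, not part of our implementation):

```python
# Minimal sketch (assumed procedure): randomly sampling system-generated
# questions so that they make up roughly one quarter of the mixed set,
# i.e. a proportion of about 1:3 to the human-generated questions.
import random

def sample_for_evaluation(system_questions, human_questions, ratio=1/3, seed=42):
    random.seed(seed)
    n_system = max(1, round(len(human_questions) * ratio))
    selected = random.sample(system_questions, n_system)
    mixed = selected + list(human_questions)
    random.shuffle(mixed)  # hide the origin of each question from the raters
    return mixed
```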
Then, we mixed human-generated questions with automatically generated questions and asked human raters to identify whether each question in the mixed set had been generated by the system or by a human expert. For topic 1, we had three raters; for each of the last two topics, we could only get two raters. Note that these human raters were not the same human experts who generated questions. Also, they did not know the proportion between human-generated questions and system-generated questions.
Evaluation of human perception
First, we evaluated the soundness of system-generated questions. For this purpose, we asked human raters to answer the following question: Is this a system-generated question (Yes/No)? We use the balanced F-score to evaluate and analyze the ratings of humans. The F-score is calculated based on precision and recall using the following formula: F = 2 · (precision · recall) / (precision + recall).
The precision for a class is the number of true positives (i.e., the number of system-generated questions correctly labeled as belonging to the positive class) divided by the total number of elements labeled as belonging to the positive class, while the recall for a class is the number of true positives divided by the total number of elements that actually belong to the positive class. If the F-score is high, it shows that the system-generated questions and the human-generated questions are easy to distinguish. Otherwise, a low F-score indicates that it is difficult for human raters to distinguish between system-generated and human-generated questions.
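For illustration, here is a minimal sketch of how precision, recall, and the balanced F-score can be computed for this rating task (the toy labels are hypothetical):

```python
# Minimal sketch: precision, recall, and balanced F-score for the task of
# labelling questions as system-generated (positive class) or human-generated.
def precision_recall_f1(true_labels, predicted_labels, positive="system"):
    tp = sum(1 for t, p in zip(true_labels, predicted_labels)
             if t == positive and p == positive)
    fp = sum(1 for t, p in zip(true_labels, predicted_labels)
             if t != positive and p == positive)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels)
             if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example with hypothetical rater judgements:
truth = ["system", "human", "human", "system", "human"]
rater = ["system", "human", "system", "human", "human"]
print(precision_recall_f1(truth, rater))  # (0.5, 0.5, 0.5)
```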
Table 4 summarizes the F-scores of each human rater. It shows that for topic 1, it was difficult for rater 1 (F = 0.33) and moderately difficult for rater 2 (F = 0.51) to distinguish the authorship of questions. The kappa value (0.086) indicates low agreement between the two raters—which means that even if each of the raters correctly classified some questions, their ratings were not consistent with each other. With respect to topic 2, the F-scores of both raters are moderate (0.5 and 0.52). The kappa value for their agreement was 0.233, which can be considered fair. This shows that for topic 2, it was easier to distinguish between human-generated and system-generated questions than for topic 1. With respect to topic 3, it was relatively difficult for both raters to identify the authorship of the questions (F-scores between 0.40 and 0.44) and the agreement between the raters was fair (0.263).
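The agreement values reported here are Cohen’s kappa for two raters. As an illustration, such values can be computed, for example, with scikit-learn (an assumption about tooling; the labels below are hypothetical):

```python
# Minimal sketch: Cohen's kappa for two raters' yes/no judgements
# ("is this question system-generated?"), using scikit-learn.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["yes", "no", "yes", "no", "no", "yes"]   # hypothetical labels
rater_2 = ["yes", "yes", "no", "no", "no", "yes"]
print(cohen_kappa_score(rater_1, rater_2))
```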
Interestingly, in the context of topic 3, one question, “What is cheap money?”, was generated identically by a human expert and by the system. This question was judged by both human raters to be system-generated. Thus, this question was not included in the statistical evaluation for topic 3.
In summary, we have learned that for all raters it was not easy to identify system-generated questions within the set of mixed questions. This indicates that system-generated questions are as sound as human-generated questions. The agreement between raters was slight or fair. This strengthens the indication that it was difficult for human raters to distinguish between system-generated and human-generated questions.
Evaluation of question quality
The goal of the following evaluation is to empirically investigate the first three research questions specified in the “Background” section: 1) Are the system-generated questions understandable? 2) Are they relevant to the given discussion topic? 3) Would they be useful for supporting students in developing arguments?
The first three research questions were also given literally to human raters, who were asked to rate the mixed set of questions on a scale from one to three (1: least, 2: middle, 3: most). First, we investigate these research questions in the context of each specific discussion topic; then we normalize the evaluation results for each topic and investigate these research questions in general.
In the context of topic 1, Table 5 shows that the mean understandability of human-generated questions (2.28) is slightly higher than that of system-generated questions (2.19). However, the difference is not statistically significant. With respect to the relevance of the questions to the given discussion topic, the mean score for human-generated questions (2.14) is also higher than that of system-generated questions (1.96), and the difference is not significant. However, with respect to the usefulness of questions for supporting students in developing arguments, the mean for human-generated questions (2.12) is higher than that of system-generated questions (1.69), and the difference is significant. In summary, in the context of topic 1, the first and second research questions can be confirmed while the third one must be rejected.
Analyzing the system-generated questions in the context of topic 1, we learned that no question was rated with score 1 (i.e., least understandable, least relevant, and least useful) on average. The system-generated questions with an average rating of 1.33 with respect to “Usefulness” are listed below:
What do you have in mind when you think about tsunami?
What do you like when you think of/about catastrophe?
What does Fukushima remind you of?
What does power plant remind you of?
What features does catastrophe have?
The low usefulness of these questions might be attributed to the fact that they are very general and have little relation to the question in discussion topic 1 (“Should we stop producing nuclear energy and develop renewable energy instead?”). If the questions were more specific, for example, “What does the catastrophe at the Fukushima power plant in Japan remind you of?”, they could be more useful.
In the context of topic 2, Table 6 shows that the human-generated questions are statistically significantly better than system-generated questions on all three criteria: understandability (t = 3.01), relevance (t = 3.93), and usefulness (t = 3.29). Thus, the research hypothesis that system-generated questions are as understandable, as relevant to a given discussion topic, and as useful for developing new arguments as human-generated questions cannot be confirmed in the context of topic 2.
We investigated the system-generated questions which had the lowest mean score, i.e., a mean rating of 1 across the raters. Table 7 shows that the questions with the lowest mean score contain “non-meaningful” nouns/noun phrases (“fear of deflation”, “international monetary”, “state capitalism”, and “deflation”), and these nouns/noun phrases are not in accordance with the meaning of the other constituents of the question. That is, the constituents of such a question contradict each other, for example: “How can deflation be used today?” It is not common to speak of deflation being “used” (unless one is an economics expert). The other problem with these questions is that these “non-meaningful” nouns/noun phrases are extracted from the discussion topic (e.g., “fear of deflation”, “international monetary”) and from the hyponym set provided by WordNet (“state capitalism”). This is a limitation of the question generation approach presented in this paper. In the current version, the system does not implement a mechanism to identify meaningful noun phrases within the set of noun phrases that are extracted from a discussion topic and from the hyponym set of WordNet.
Similar to topic 1, in the context of topic 3, Table 8 shows that human-generated questions are better than system-generated questions on all three criteria, but not significantly so. This suggests that our research questions can be answered with “Yes” for the criteria “Understandability”, “Relevance”, and “Usefulness”.
We analyzed the system-generated questions with the lowest scores. We identified one least understandable, two least relevant, and one least useful question(s) (Table 9). The least understandable question can be attributed to the noun phrase “(opposite-) problems” that is generated by the system using a pre-specified question template. The question would be more understandable if it were constructed like this: “How could problems of the central bank be stopped?” Thus, the pre-specified question template should be optimized accordingly. The problems with the two least relevant questions can be explained by the noun phrases “ECB president” and “central bank”, which are not as relevant as the noun phrases “debt crisis” and “cheap money” in topic 3. Again, the problem here is to determine the most important noun phrases in a discussion topic before applying question templates for constructing questions. The least useful question, “What features does ECB president have?”, was also rated as least relevant. In the “Discussion” section, we will discuss this issue and approaches to determining important concepts.
The question “What is cheap money?” that was generated identically by a human expert and by the system was rated by both human raters as very understandable. However, with respect to the criteria “Relevance” and “Usefulness”, there was disagreement between raters as Table 10 shows. Low kappa values of agreement between the human raters can be attributed to different strategies of distinguishing between system-generated questions and human-generated questions. Some human raters informed us about the different criteria they used to identify system-generated questions: 1) a question is superficial with regard to a given discussion topic, 2) a question is similar to another one in the mixed set of questions, 3) a question that expects a factual answer and is intuitive (e.g., “What features does ECB president have?”), 4) a question that contains unknown information (e.g., “How will those policies affect those outcomes/stakeholders?”), 5) human-generated questions may have typo/syntax errors, while system-generated questions are error-free.
Overall, when considering the quality of system-generated questions over all three topics, we can learn from Table 11 that there is no significant difference between human-generated and system-generated questions with respect to understandability, i.e., the system-generated questions are as understandable as human-generated questions. That means the first research question can be answered in the affirmative. However, with respect to the relevance of questions to the given discussion topics and to the usefulness of the questions, the human-generated questions are significantly better, and thus the second and third research questions must be answered in the negative.
Correlation between understandability, relevance, and usefulness
In this section, we investigate the fourth research question: Are understandable and relevant system-generated questions also useful for students?
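The coefficients reported below are Pearson correlations between rating criteria. As an illustration, such a coefficient and its significance can be computed, for example, with SciPy (the scores shown are hypothetical):

```python
# Minimal sketch: Pearson correlation between two rating criteria
# (e.g. "Relevance" and "Usefulness"), computed with SciPy.
from scipy.stats import pearsonr

relevance_scores  = [3, 2, 2, 1, 3, 2, 1, 3]   # hypothetical mean ratings
usefulness_scores = [3, 2, 1, 1, 3, 2, 2, 3]
r, p_value = pearsonr(relevance_scores, usefulness_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```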
In the context of topic 1 (cf. Table 12), we can note that system-generated questions that are relevant to discussion topic 1 have a strong positive correlation with the criterion “Usefulness” (r = 0.76). A similar tendency can be found for human-generated questions (r = 0.81). Both correlation values are significant. However, the understandable system-generated questions are weakly correlated with the criterion of usefulness (r = 0.31), whereas for human-generated questions the correlation between the criteria understandability and usefulness is higher (r = 0.57).
In contrast to topic 1, in the context of discussion topic 2 (cf. Table 13), we can learn that for both system-generated questions and human-generated questions, the correlation between the criteria “Relevance” and “Usefulness” is weak (r = 0.14–0.17, not significant). Yet, correlation values show that understandable questions (either system-generated or human-generated) are moderately correlated with the criterion of being useful questions (r = 0.52–0.53) and these correlation values are significant.
In the context of topic 3 (cf. Table 14), for both classes of questions (human-generated and system-generated), the correlation between understandability and usefulness is positive (r = 0.39–0.43). However, this indicates only a weak relationship between understandable questions and useful questions. The correlation between the relevance of a question and its usefulness (r = 0.53–0.62) is moderately positive, which means there is a tendency for relevant questions to be useful for students. Note that, except for the correlation between understandability and usefulness for system-generated questions, all correlation values are significant.
In summary, the fourth research question, whether understandable and relevant questions would be useful for students, can apparently be confirmed in most cases. Understandable questions (both system-generated and human-generated) are significantly correlated with useful questions, except for the system-generated questions of topic 3. Relevant questions (both system-generated and human-generated) are significantly correlated with useful questions, except for topic 2.
The question generation approach has been evaluated using three discussion topics from the domains of energy (topic 1) and economy (topics 2 and 3). Each topic was presented by two sentences that describe the problem of the topic, followed by a discussion question. With only two discussion domains, we cannot yet draw conclusions about the range of discussion domains that can be supported by the question generation system using WordNet. However, the results of the evaluation study give us some information about the quality of system-generated questions. In the context of topic 1, the human-generated questions were not significantly better than system-generated questions on the criteria “Understandability” and “Relevance” (with respect to “Usefulness”, however, human-generated questions were more useful). In the context of topic 3, the difference between human-generated questions and system-generated questions was not significant on any of the three criteria. Only in the context of topic 2, which is about increasing fear of deflation in Europe and the US, was the difference between human-generated and system-generated questions statistically significant, i.e., the quality of human-generated questions was better than that of system-generated questions. Of course, the effectiveness of our approach relies on the set of hyponyms provided by WordNet and on the accuracy of the algorithm that extracts nouns/noun phrases from a discussion topic.
In the current implementation of the system, the algorithm for extracting nouns/noun phrases from a discussion topic has the limitation that it is not able to rank the importance of a noun/noun phrase. In order to determine the relevance of a concept, several effective approaches have been devised in the area of information retrieval, e.g., document frequency (Joho and Sanderson 2007) and term frequency–inverse document frequency (Baeza-Yates and Ribeiro-Neto 1999). Document frequency is calculated as the number of documents in the corpus which contain a specific term. Term frequency is a numerical statistic that reflects how important a word is to a document in a corpus. Usually, the factor “inverse document frequency” is incorporated into the term frequency measure to diminish the weight of terms that occur very frequently in the document corpus and to increase the weight of terms that occur rarely. These approaches could be investigated for inclusion in the algorithm for extracting relevant concepts from the discussion topic.
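As an illustration of this direction, here is a minimal sketch of ranking candidate terms by TF-IDF weight with scikit-learn; the small corpus is hypothetical and would in practice be replaced by a larger domain corpus:

```python
# Minimal sketch: ranking candidate concepts by TF-IDF weight using
# scikit-learn. The reference corpus here is hypothetical; in practice it
# could be a collection of documents about the discussion domain.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "The catastrophe at the Fukushima power plant in Japan shocked the world.",
    "Japan and Germany announced plans to stop producing nuclear energy.",
    "Renewable energy sources could replace nuclear power plants.",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)
# Average TF-IDF weight of each term over the corpus, highest first.
weights = tfidf.mean(axis=0).A1
ranking = sorted(zip(vectorizer.get_feature_names_out(), weights),
                 key=lambda pair: pair[1], reverse=True)
print(ranking[:5])
```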
With respect to the number of system-generated questions selected for the evaluation study, we selected only a small number of system-generated questions from a huge number of generated questions (over 800 for topic 1) without having clear selection criteria. The small number of selected system-generated questions and the ratio of 1:3 between system-generated questions and human-generated questions might not fully reflect the quality of system-generated questions. We might consider increasing this ratio. Yet, too many system-generated questions might possibly bias human raters—this needs to be investigated.
This paper presented a question generation approach using WordNet for supporting students during argumentation processes. The approach extracts important concepts from a discussion topic and queries hyponyms of these concepts from WordNet. Questions are constructed either by using important concepts from a given discussion topic or by using hyponyms of the extracted concepts.
Although the evaluation results show that system-generated questions were as sound as human-generated questions for two discussion topics, the question generation approach presented in this paper certainly still has some limitations. First, it generates too many questions for a discussion topic. Second, the algorithm for extracting relevant concepts is not yet able to determine the degree of importance of each noun/noun phrase. These two issues are our short-term future work.
As long-term future work, we intend to use system-generated questions and human-generated questions of highest quality to test whether they are actually useful for students in the argumentation process. After that, we intend to identify and model characteristics of useful questions for argumentation purposes. Using this model, appropriate question templates will be defined for question generation.
Arias de Sanchez, G. (2013). The art of questioning: Using Bloom’s taxonomy in the elementary school classroom. Teaching Innovations Projects, 3(1), Article 8.
Baeza-Yates, R, & Ribeiro-Neto, B. (1999). Modern Information Retrieval. Addison-Wesley, pp. 29–30.
Berthold, K, Nückles, M, & Renkl, A. (2007). Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and Instruction, 17(5), 564–577.
Berthold, K, Röder, H, Knörzer, D, Kessler, W, & Renkl, A. (2011). The double-edged effects of explanation prompts. Computers in Human Behavior, 27(1), 69–75.
Bloom, BS. (1956) Taxonomy of educational objectives: Handbook 1: Cognitive Domain. Addison Wesley Publishing.
Brown, J, Frishkoff, G, & Eskenazi, M. (2005). Automatic question generation for vocabulary assessment. In Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing (pp. 819–826).
Chi, MTH, Lee, N, Chiu, MH, & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18(3), 439–477.
Duschl, RA, & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education, 38, 39–72.
Flavell, JH. (1978). Metacognitive development. In JM Scandura & CJ Brainerd (Eds.), Structural/process models of complex human behavior (pp. 213–245).
Hoshino, A, & Nakagawa, H. (2005). Real-time multiple choice question generation for language testing: a preliminary study. In Proceedings of the 2nd Workshop on Building Educational Applications Using Natural Language Processing (pp. 17–20).
Glogger, I, Holzäpfel, L, Schwonke, R, Nückles, M, & Renkl, A. (2009). Activation of learning strategies in writing learning journals. Zeitschrift für Pädagogische Psychologie, 23(2), 95–104.
Graesser, AC, & Person, NK. (1994). Question asking during tutoring. American Educational Research Journal, 31(1), 104–137.
Graesser, AC, Rus, V, D'Mello, SK, Jackson, GT. (2008). AutoTutor: Learning through natural language dialogue that adapts to the cognitive and affective states of the learner. In: D. H. Robinson & G. Schraw (Eds.), Recent Innovations in Educational Technology That Facilitate Student Learning, 95–125, Charlotte, NC: Information Age Publishing.
Heilman, M, & Smith, NA. (2009). Question generation via over-generating transformations and ranking. Report CMU-LTI-09-013. Carnegie Mellon University: Language Technologies Institute, School of Computer Science.
Heath, T, & Bizer, C. (2011). Linked data: Evolving the web into a global data space. Synthesis Lectures on the Semantic Web: Theory and Technology, 1(1), 1–136. Morgan & Claypool.
Heilman, M. (2011). Automatic factual question generation from text. Ph.D. Dissertation, Carnegie Mellon University. CMU-LTI-11-004.
Joho, H, & Sanderson, M. (2007). Document frequency and term specificity. In Large Scale Semantic Access to Content (Text, Image, Video, and Sound) (pp. 350–359).
Jouault, C, & Seta, K. (2013). Building a semantic open learning space with adaptive question generation support. In Proceedings of the 21st International Conference on Computers in Education (pp. 41–50).
Jouault, C, & Seta, K. (2014). Content-dependent question generation for History learning in semantic open learning space. In Proceedings of the 12th International Conference on Intelligent Tutoring Systems (pp. 300–305).
Kunichika, H, Katayama, T, Hirashima, T, & Takeuchi, A. (2001). Automated question generation methods for intelligent English learning systems and its evaluation. In Proceedings of the International Conference on Computers in Education (pp. 1117–1124).
Kwasniewski, N. (2013). Fear of Deflation: ECB Rate Drop Shows Draghi's Resolve. Retrieved on 09.06.2015: http://www.spiegel.de/international/europe/ecbsurprises-economists-by-dropping-key-interestrate-to-historic-low-a-932511.html.
Lane, HC, & VanLehn, K. (2005). Teaching the tacit knowledge of programming to novices with natural language tutoring. Journal Computer Science Education, 15, 183–201.
Le, NT, Kojiri, T, Pinkwart, N, et al. (2014a). Automatic question generation for educational applications–The state of art. Advanced Computational Methods for Knowledge Engineering, 282, 325–338.
Le, NT, Nguyen, NP, Seta, K, & Pinkwart, N. (2014b). Automatic question generation for supporting argumentation. Vietnam Journal of Computer Science, 1(2), 117–127.
Liu, M, Calvo, RA, & Rus, V. (2012). G-Asks: An intelligent automatic question generation system for academic writing support. Dialogue & Discourse, 3(2), 101–124.
Miller, GA. (1995). WordNet: A lexical database. Communications of the ACM, 38(11), 39–41.
Mitkov, R, Ha, LA, & Karamanis, N. (2006). A computer-aided environment for generating multiple-choice test items. Journal Natural Language Engineering, 12(2), 177–194. Cambridge University Press.
Mostow, J, Aist, G, Huang, C, Junker, B, Kennedy, R, Lan, H, Latimer, D, O'Connor, R, Tassone, R, Tobin, B, & Wierman, A. (2008). 4-Month evaluation of a learner-controlled reading tutor that listens. In VM Holland & FP Fisher (Eds.), The path of speech technologies in computer assisted language learning: From research toward practice (pp. 201–219). New York: Routledge.
Mostow, J, & Chen, W. (2009). Generating instruction automatically for the reading strategy of self-questioning. In Proceeding of the 14th Conference on Artificial Intelligence in Education (pp. 465–472).
Mostow, J, Nelson, J, & Beck, JE. (2013). Computer-guided oral reading versus independent practice: Comparison of sustained silent reading to an automated reading tutor that listens. Journal of Educational Computing Research, 49(2), 249–276.
Nückles, M, Hübner, S, & Renkl, A. (2009). Enhancing self-regulated learning by writing learning protocols. Learning and Instruction, 19(3), 259–271.
Olney, AM, Graesser, A, & Person, NK. (2012). Question generation from concept maps. Dialogue and Discourse, 3(2), 75–99.
Osborne, J, Erduran, S, & Simon, S. (2004). Enhancing the quality of argumentation in school science. Journal of Research in Science Teaching, 41(10), 994–1020. Wiley Periodicals, Inc.
Otero, J, & Graesser, AC. (2001). PREG: Elements of a model of question asking. Journal Cognition and Instruction, 19(2), 143–175.
Pate, RT, & Bremer, NH. (1967). Guiding learning through skillful questioning. Elementary School Journal, 67, 417–422.
Person, NK, Graesser, AC. (2002). Human or Computer? AutoTutor in a bystander Turing test. Proceedings of the 6th International Conference on Intelligent Tutoring Systems, Springer-Verlag, 821–830.
Sampson, V, & Clark, DB. (2008). Assessment of the ways students generate arguments in science education: Current perspectives and recommendations for future directions. Special issue Science Studies and Science Education, 92(3), 447–472.
Schreiber, JE. (1967). Teacher’s question-asking techniques in social studies (Doctoral dissertation, University of Iowa, No. 67–9099).
Schwonke, R, Hauser, S, Nückles, M, & Renkl, A. (2006). Enhancing computer-supported writing of learning protocols by adaptive prompts. Computers in Human Behavior, 22(1), 77–92.
Sumita, E, Sugaya, F, & Yamamoto, S. (2005). Measuring non-native speakers’ proficiency of English using a test with automatically-generated fill-in-the-blank questions. In Proceedings of the 2nd Workshop on Building Educational Applications Using Natural Language Processing (pp. 61–68).
Tenenberg, J, & Murphy, L. (2005). Knowing what I know: An investigation of undergraduate knowledge and self-knowledge of data structures. Journal Computer Science Education, 15(4), 297–315.
Wilen, WW. (1991). Questioning skills for teachers. What research says to the teacher series. Washington, D.C.: National Education Association.
Wong, RMF, Lawson, MJ, & Keeves, J. (2002). The effects of self-explanation training on students' problem solving in high-school mathematics. Learning and Instruction, 12(2), 233–262.
Yu, FY, & Liu, YH. (2008). The comparative effects of student question-posing and question-answering strategies on promoting college students’ academic achievement, cognitive and metacognitive strategies use. Journal of Education and Psychology, 31(3), 25–52.
The authors would like to thank researchers of different research communities (argumentation, problem/question generation, and Computer Science) for their contribution in this study: Prof. Kevin Ashley, Prof. Kazuhisa Seta, Prof. Tsukasa Hirashima, Prof. Matthew Easterday, Prof. Reuma De Groot, Prof. Fu-Yun Yu, Dr. Bruce McLaren, and Dr. Silvia De Ascaniis, Prof. Ngoc-Thanh Nguyen, Prof. Viet-Tien Do, Dr. Thanh-Binh Nguyen, Zhilin Zheng, Madiah Ahmad, Sebastian Groß, Sven Strickroth.
The authors declare that they have no competing interests.
NTL developed the system, carried out the studies and drafted the manuscript. NP commented on the draft and enhanced it. Both authors approved the final manuscript.
Le, NT., Pinkwart, N. Evaluation of a question generation approach using semantic web for supporting argumentation. RPTEL 10, 3 (2015). https://doi.org/10.1007/s41039-015-0003-3
- Question generation |
Giovanni Boldini was an Italian painter who worked mostly in Paris. The artist was a genre painter and portraitist from the Belle Epoque. His loose and fluid brushstrokes rendered him the nickname of Master of Swish. Some of his most noteworthy artworks were Marthe de Florian, Spanish Dancer at the Moulin Rouge, and Woman at a Piano.
Giovanni Boldini was born in 1842 in the city of Ferrara, northern Italy. Although records of his childhood are not abundant, it is safe to assume that his earliest influence towards art came from his father, a painter who produced mainly religious subjects. At around 20 years old, Boldini went to Florence, where he stayed for six years to study and pursue his painting career.
In Florence, Boldini was not very committed as a student at the Academy of Fine Arts, rarely attending his classes. This period was nevertheless very prolific in terms of influences, for Boldini came into contact with other realist painters, especially the Macchiaioli, who were considered the Italian precursors to Impressionism. Their impact is very noticeable in his landscapes, his attention to nature and light, and his fluid brushstrokes. A noteworthy example of his artwork from this period is a series of landscape fresco paintings for the Villa ‘La Falconiera’, executed in 1870.
Giovanni Boldini would later move to London, where he achieved success as a portraitist. He completed portraits of several distinguished members of society, including the Duchess of Westminster and Lady Holland. He then moved to Paris, where he met and befriended Edgar Degas.
Boldini’s popularity would only rise, and he became the most acclaimed and fashionable portraitist in late 19th-century Paris. Giovanni Boldini received the Legion of Honour, a high commendation from the French government, and soon he was nominated as commissioner of the Italian section of the 1889 Paris Exposition.
Boldini’s popularity was not bound to Europe: in 1897 he had a solo exhibition in New York. He also exhibited his paintings at the Venice Biennale on four occasions. Giovanni Boldini had a rather long life; he died in Paris in July 1931, at 88 years old.
Giovanni Boldini was a character in Franca Florio, Regina di Palermo, a ballet written by the Italian composer Lorenzo Ferrero in 2007. The piece retold the story of a famous Sicilian aristocrat called Donna Franca, whose outstanding beauty was said to have inspired Boldini and several emperors, poets, musicians, and artists during the Belle Epoque. |
The purpose of using a clapperboard in filmmaking is to identify the scene to the editor and in addition, synchronize audio and visual elements of the production. The “clap” sound is recorded and this is then synchronized with the recorded image to align sound and video in film production.
Lights!… Camera!… Action! Then the sweet sound of the clapperboard…
The clapperboard is one of the most iconic symbols in the filmmaking industry. Since making their way onto film sets in the early 1920s, clapperboards have become a standard on every project to date. But why is this? What even is a clapperboard, and more importantly, how do you use one? Do you even need to use one? Well, that’s quite a few questions we need to address there, so let’s get started…
The Origins of the Clapperboard
While an exact date of their conception is unknown, clapperboards were introduced sometime during the boom of the silent film era; a good estimate would be sometime around 1926-27.
Early clapperboard designs consisted of a chalkboard slate connected to an acrylic slate via a hinge mechanism, allowing the two pieces of slate to separate and close again. It was when the two slates were forcefully closed that the iconic “clap” would be sounded.
Clapperboards were vital to film productions. On the chalk slate, the name of the production, the scene, and the take that was about to be performed, would all be displayed. Without this, the film could not be edited. Remember, this was almost exactly 70 years before digital filming would be introduced, so the video would be captured by burning the image onto nitrate film.
To edit a film, you would have to select the piece of a film reel that you wanted to keep/get rid of and cut it at exactly the right point, before essentially gluing it back together.
It was a tedious and finicky process on its own, which would’ve been almost impossible without the use of a clapperboard.
The clapperboard would show the editor exactly what scene/take he or she was looking at, and they would be able to cut and splice together, accordingly.
The clapperboard would be instrumental to every single production up until the move to digital film in the late 90s. But clapperboards weren’t then abandoned. They’re still very much in use today.
Why are clapperboards still used today?
Well, in the majority of film productions, from big-budget blockbusters down to student films, video and audio are captured separately.
Typically, the camera and microphone will be hooked up to different capture systems, and so would need to be synced at a later date.
And this is where clapperboards come into use. The clap of the board is easy for editors to pick out on the audio track and match to the visual of the clapper clapping on the film, syncing the moving picture with the sound.
But most sets don’t even use an actual clapperboard these days; instead, they use what’s known as a “digislate”.
These digital clapperboards use an LED display to show a timecode generated by the device recording the audio.
The board just has to be shown to the camera before a scene for the editors to find the same point in the film and audio tracks; no clap to be heard.
While this may be more efficient and easier for editors, it does kind of take away from the fun of using a clapperboard.
Here is a link to a free digislate app for reference:
How to Use a Clapperboard – Do You Need to Use One?
The type of clapperboard you have will dictate how you use it, and how you’ll have to fill it out.
Most traditional clapperboards will have the following: a space for the production name, the director’s name, the number of the camera (if you’re using more than one camera on set), the date, the scene, and the take.
Some variations might also include a space to put the name of the camera operator or director of photography/cinematographer, as well as sound capture.
But is all of this even worth it; do you actually need to use one in this day and age?
Well, it really depends. Seeing as the primary use of clapperboards in the modern day is to sync audio and visuals, it depends on whether you’re even capturing the two on separate equipment.
Most amateur/beginner filmmakers are probably capturing both through one medium, that being the primary camera (again, you’re probably only using the one). In this case, a clapperboard probably isn’t that useful to you. Your set is likely incredibly small, maybe only a half-dozen people, and you’re likely not doing enough scenes and takes to need to do mass organization at a later date.
Most student films and/or amateur films are short films, not 3-hour epics.
But it’s really up to you. For most, the clapperboard is a neat inclusion 😃
Where can I buy a clapperboard?
There are many clapperboard options on Amazon, from traditional “black” ones that will need chalk to more modern-looking ones with a whiteboard face.
Here are some options: |
From free trade to fair trade
– by Gerd Müller
Gerd Müller visiting a garment factory in Cambodia.
In April 2013, a devastating accident occurred at the Rana Plaza textile factory in Bangladesh leaving more than 1,100 people dead and over 3,000 injured. This tragedy once again drew attention to inhuman and environmentally destructive patterns of production in Asia and Africa.
Today, we can say that positive changes have taken place since then. In October 2014, we launched the Partnership for Sustainable Textiles. During a visit to Bangladesh last autumn, I was able to see for myself the progress that has been achieved on the ground. Thanks to German support, many survivors of the accident have been helped to find new ways to earn a living; with the training of labour inspectors, health and safety in textile factories is moving forward. And Germany is providing advice on the establishment of committees of workers‘ representatives and on the introduction of an occupational accident insurance scheme.
The textile sector is an example of how, in times of globalisation, increasingly complex supply and trade chains span the entire globe. Globalisation, the digital revolution and rapid population growth have transformed planet earth into a global village. Many separate companies in different countries are involved in producing and selling the products we consume in Germany and Europe. Men’s shirts, for instance, go through up to 140 production stages in different countries before they end up on sale in our shops. For cost reasons, production is often transferred to countries with low levels of social and environmental standards.
Despite the challenges, we should not however forget that integration into global and regional value chains offers opportunities for developing countries and emerging economies. Thanks to its rapidly growing industrial production with a focus on export, China managed to lift more than 200 million people out of poverty between 2000 and 2013. Least developed countries (LDCs) and many African countries do not have this opportunity, because they are hardly integrated into the system of international division of labour and the global economy. The World Trade Organization estimates that LDCs account for 12 % of the world’s population, yet their share in world trade is only one percent. Even Germany, a country with well-developed international linkages, acquires only three percent of its goods from Africa.
The pact on the world’s future
We need to ensure that developing countries have better opportunities to participate in value creation and international trade. That way it is possible to counter poverty in a sustainable manner and create job opportunities. At the same time, global trade needs to be made responsible, sustainable and fair. Globalisation and trade must not be allowed to result in massive environmental degradation, climate change, greater inequality, precarious working conditions and human-rights violations.
We need to move towards a global economic model in the same way as Europe moved from 19th century Manchester Capitalism to its present economic model. The global pact on the world’s future – the 2030 Agenda for Sustainable Development – with its 17 Sustainable Development Goals (SDGs) for economically, environmentally and socially sustainable development is the new framework for action, at the international level, in our partner countries and in Germany.
Responsibility for sustainable trade begins in Germany
The responsibility for sustainable trade begins in Germany, because German companies, in particular, play an important role in efforts to raise environmental and social standards and make global supply and value chains sustainable.
The best example here is the Partnership for Sustainable Textiles. It is a partnership between politics, business, trade unions and civil society and was established with a view to achieving sustainable improvements in social and environmental standards and in economic conditions along the entire textile supply chain. So far, some 180 members have joined, more than half of Germany’s retail textile trade.
Another important step is the drafting of the National Action Plan for Business and Human Rights, through which the German government is implementing the UN Guiding Principles on Business and Human Rights. It lays out a clear framework for economic activity that complies with human rights, is socially sustainable and is also economically successful. In the first place, this framework is for companies, yet it applies to public authorities as well, who are there to help small and medium-sized companies especially carry out due diligence and to provide the required environment.
Consumption patterns are another key to more sustainability. We need to realise that our decisions as consumers have a direct effect in other parts of the world. If a t-shirt is sold for € 2.50 in Germany, it is clear that little will be left for the seamstress in Bangladesh. But there is hope. The increase in sales of fair-trade products is proof of a growing awareness of and an interest in sustainability issues. The German government has developed an online information platform to further support fair trade: www.siegelklarheit.de provides information on well-established sustainability labels and makes it clear which labels are especially credible and exacting. But national, regional and local authorities, who account for more than € 300 billion worth of procurement a year, also need to lead by example when it comes to sustainability. The BMZ has launched various initiatives to ensure that greater attention is given to social and environmental standards in all public procurement processes.
Fighting poverty and creating job opportunities and prospects for the people in developing countries will only succeed if companies in developing countries and their products and services are able to compete in global markets. Under the WTO’s Aid for Trade initiative Germany has continuously expanded its trade promoting efforts. The BMZ is involved, for instance, in developing an economic enabling environment and competitive and sustainable economic structures, and in promoting cooperation with the private sector and the introduction of and compliance with environmental and social standards, for example in factories in Bangladesh.
At the international level it is important to look to the WTO as the forum for negotiating fair rules for world trade, as this is where every country has a voice. Even though the 2015 WTO Ministerial Meeting in Nairobi did not lead to the conclusion of the 2001 Doha round, it brought some progress for developing countries. Agricultural subsidies are being abolished and preferential rules of origin for goods from LDCs expanded. However, the fact that the Doha negotiations have stalled means that bilateral and plurilateral trade agreements are seen as more important. We need to take care that trade agreements outside of the WTO framework are not concluded at the expense of developing countries. For this we will also need to link environmental and social sustainability standards with international and European trade policy. That is why I am calling for sustainability standards to be mainstreamed in WTO rules, in EU free-trade agreements and in EU Economic Partnership Agreements with ACP countries.
Sustainable and fair trade begins at home, in Germany; it must be supported in developing countries, and it has to be formalised at the international level. We need to replace free trade with fair trade. That is the lesson that must be learned from Rana Plaza.
Gerd Müller is Germany’s Federal Minister for Economic Cooperation and Development. |
IMAGE: The Green Barrier at Hassi Bahbah, Algeria, via.
Algeria is not a small country – it is roughly three-and-a-half times the size of Texas – but eighty-five percent of its territory consists of the Sahara desert. In fact, only a thin strip of land along the northern coastal edge of the country is cultivable.
IMAGE: Relief and satellite maps of Algeria, via Wikipedia.
In the 1970s, determined not to let the Sahara encroach further onto its thin sliver of agriculturally useful land, Algeria embarked on a sort of steampunk geoengineering project: planting a wall of trees up to 16 miles wide and 746 miles long along the entire length of the Sahara’s northern edge, from the Moroccan to the Tunisian border. Three hundred and ninety-five thousand acres of the Green Barrier, or barrage vert, were planted between 1974 and 1981, mostly by young men as part of their military service.
IMAGE: The Green Barrier as seen from ground level, via.
After this initial burst of activity, the Green Barrier ran into various economic, sociological, and ecological issues. The Barrier was a monoculture, entirely planted with the hardy, heat- and drought-tolerant Aleppo pine, which was a fine idea until the pine processionary moth moved in. Meanwhile, the funding ran out, and the local population, who hadn’t been included in the project’s planning or planting phases, saw the trees as a handy source of building materials and firewood. By 2007, the Sahara had migrated to within 125 miles of the Mediterranean, while the remains of the Barrier were described as “a depressing sight […] more grey than green.”
Nonetheless, during this month’s equally depressing Climate Change Conference in Copenhagen, Senegalese officials told National Geographic that 326 miles of a second Great Green Wall had already been planted. The idea was proposed in 2005 by the former Nigerian President, Olusegun Obasanjo, and formally adopted by the African Union in 2007.
If it is completed as planned, this vast agro-ecological defensive landscape will ultimately be 9.3 miles wide and 4350 miles long, crossing through eleven countries from Dakar to Djibouti.
IMAGE: The route of the proposed Great Green Wall.
Although this second green wall is also being built by soldiers (on loan from France), the team behind it do seem to be considering a wider range of vegetation, as well as ways to integrate the Great Green Wall into the lives and economy of local population. By including the native Acacia senegal in the plantings, for example, scientists hope that farmers will eventually be able to profit by harvesting the sap, which is better known as gum arabic, a key ingredient in soft drink syrups, confectionary, and cosmetics.
Meanwhile, the jury is still out as to whether ribbons of forest can actually hold back the encroaching sand. For example, the results of China’s own Green Wall project, which began in 1978 and is expected to reach the end of its fourth phase in 2010, have been pretty varied.
Most scientists agree that Africa’s Great Green Wall is not enough on its own, and that developing less-pasture intensive breeds of livestock, researching and implementing dry agriculture techniques, educating local farmers, improving water conservation and soil management, and reducing firewood-dependence among rural populations are equally – if not more – effective strategies against desertification.
IMAGE: Sunrise during the 2009 Great Sydney Dust Storm. Photo by Tim Wimborne/Reuters, via The Guardian.
Nonetheless, with desertification on the rise and the resulting dust storms being blamed for atmospheric pollution, glacial melt, harvest failures, and even the spread of infectious diseases, quarantining the deserts of the world behind ringed walls of carbon-absorbing artificial forests might not be such a bad idea.
NOTE: For a more ingenious Saharan wall proposal, which involves turning sand into sandstone by injecting it with bacillus pasteurii, check out Magnus Larsson’s Dune on BLDGBLOG, or watch his recent TED talk. |
Fergus Mor / Kings and Queens
- Name : Mor
- Born : ?
- Died : 501
- Category : Kings and Queens
- Finest Moment : Landing in Scotland and burning his boats, in 500 AD.
'The first to have the name Scot, and to speak Gaelic, in Scotland'
We are going back to the mists of time here, so details are sketchy, but Fergus Mor, or Fergus mac Erc (‘Son of Erc’), was the ruler of Dalriada (or Dal Riata) in Ireland. Some time around 500 AD he led his people, the Scoti, from Antrim to Kintyre, in mid-Argyll, where they settled.
Fergus is credited, therefore, with the name Scot, which would also lend itself to the current name of Scotland. He also introduced the Gaelic language to Scotland. In addition, he is credited as being the founding father of the royal house of Scotland, which would continue its wobbly way for almost 800 years. |
Also known as the Battle of the Monitor and the Merrimack and the Battle of the Ironclads, the Battle of Hampton Roads was the most notable naval battle of the American Civil War.
When the American Civil War erupted in April 1861, Southern sympathizers seized control of the Gosport Shipyard (later named the Norfolk Naval Shipyard) in Virginia. Before evacuating the site on April 20, the commandant, Captain Charles S. McCauley, ordered his men to destroy the facility and to scuttle nine naval vessels at anchor. Among those ships was the USS Merrimack, which burned to her waterline before sinking.
After taking control of the shipyard, Confederate officials salvaged the Merrimack, whose steam engines were still intact. For the next nine months, Southern engineers developed and implemented plans to convert the Merrimack to a new type of ship that would revolutionize naval warfare—the ironclad.
By February 17, 1862, enough of the work was completed for the ship to be commissioned into the Confederate navy, christened as the CSS Virginia.
The Virginia's first task was to try to put an end to a Union naval blockade of Hampton Roads that had isolated Norfolk and Richmond from Atlantic trade. Not to be confused with land thoroughfares, Hampton Roads is a bay-like body of water formed by the confluence of the James, Elizabeth, and Nansemond Rivers in southeastern Virginia. In this case, the word "roads" is a nautical term for a partially sheltered body of water where ships may ride at anchor. Passage through Hampton Roads is the only point of connection between Norfolk and the Chesapeake Bay (and subsequently the Atlantic Ocean).
On the morning of March 8, 1862, the Virginia left her mooring at Norfolk, under command of Flag Officer Franklin Buchanan, and steamed into Hampton Roads to confront the five U.S. warships blocking access to the Chesapeake Bay. As shells from the frigates USS Cumberland and Congress bounced harmlessly off of her iron surface, the Virginia proceeded to pierce the Cumberland's hull with her iron ram, sending the Union frigate to the bottom along with 121 sailors. Buchanan next focused on the Congress, which had run aground during the maneuvering. The Virginia's crew shelled the Congress into submission. Seeing the Congress's white flag of surrender, Buchanan went on deck to accept her surrender, but Union batteries continued to fire on the Virginia, seriously wounding Buchanan. In retaliation, Buchanan ordered the destruction of the Congress, claiming the lives of another 120 Union sailors. At that point, the Union frigate Minnesota closed on the Virginia but also ran aground. Seeing that the Minnesota was helpless, Buchanan decided to withdraw and anchored at nearby Sewell's Point, with intentions of dispatching the Minnesota on the following day. During the night, Buchanan was taken ashore and hospitalized, and Lieutenant Catesby R. Jones assumed command of the ship.
The next morning changed the history of naval warfare. As the Virginia steamed out to dispose of the Minnesota, she encountered the USS Monitor, the Union's version of an ironclad warship, which had arrived under tow from New York on the previous evening. Commanded by Lieutenant John Worden, the Monitor immediately engaged the Virginia. For the next two and one-half hours, the two ironclads shelled each other at close range, producing little damage. At approximately 12:15 PM, Jones realized that continued shelling of the Monitor would be a waste of munitions and withdrew. The first battle between two ironclads in the history of naval warfare ended in a draw.
Casualties suffered during the Battle of Hampton Roads were estimated to be 433 sailors (US 409; CS 24). Although results of the engagement were inconclusive, the Virginia failed in her attempt to dislodge the Federal fleet from Hampton Roads. The Monitor's continued presence in Hampton Roads enabled Union General George B. McClellan to initiate his Peninsula Campaign with an amphibious landing near Fort Monroe on March 17.
Later, commanded by Flag Officer J. Tattnall, the Virginia attempted to prevent McClellan's advance up the James River. After failing to prevent a Union landing at Yorktown, Tattnall unsuccessfully tried to retreat upriver. Rather than let his ship fall into Union hands, Tattnall scuttled the Virginia on May 11, 1862.
Later in the summer of 1862, following the failed Peninsula Campaign, the Monitor helped cover McClellan's retreat from the Virginia Peninsula. In December, officials ordered the ship to support Union operations off of Wilmington, North Carolina. On December 31, 1862, the Monitor foundered during a storm off of Cape Hatteras and went to the bottom of the Atlantic Ocean, along with four officers and twelve crewmen.
Red Badge of Courage and Literary Analysis
The Red Badge of Courage, Stephen Crane's second novel (Maggie: A Girl of the Streets had appeared under a pseudonym in 1893) and his most famous work, was first published in 1895 and put the American novelist on the literary map. Critics responded to Crane's (1871-1900) mixture of realism and impressionism, and the book has often been considered the first truly modern war novel.
The novel takes place during an unnamed battle of the American Civil War. Crane deliberately never mentions the place, the date, or even the fact that the war is the one between the states, though the battle is presumed to be the one fought at Chancellorsville. It is the story of a young man's journey to adulthood over 48 hours of battle, in which Henry Fleming, the Youth, struggles with the question of whether he will fight or run when he sees his first real battle. Crane's use of color, religious, and animal imagery is central to the novel's effects.
Literary analysis involves examining all the parts of a novel, play, short story, or poem (elements such as character, setting, tone, and imagery) and thinking about how the author uses those elements to create certain effects.
VACUUM DEGASSING CHAMBER
Vacuum degassing is the process of using vacuum to remove gases that become entrapped in a compound when its components are mixed. To assure a bubble-free mold when mixing resin and silicone rubbers and slower-setting harder resins, a vacuum chamber is required: a small vacuum chamber is used to eliminate air bubbles from materials before they set. The process is fairly straightforward. The casting or moulding material is mixed according to the manufacturer's directions.
-Product Features of the Chamber
-Stainless steel chamber for long-term air-tightness performance.
-Burhani Vacuum chamber has a rounded edge at the top, for the protection of silicone gasket.
-Vacuum gauge is assembled on a separate port, so that the reading remains accurate even while pumping/venting the chamber. Gauge reads in both inHg and MPa.
-All kits are vacuum tested for 24 hours before packaging. (leakage less than 2.0 inHg at 24 hours)
-Two-sided flippable vacuum gasket for better durability and reliability.
-The gasket is chemically resistant to butane and other solvents.
-25 mm thick acrylic lid for long life and clear visibility.
-Strong elastic food grade silicone vacuum hose can connect/disconnect easily with the pump/chamber.
-50 µm air filter on the vent valve to reduce air flow and prevent any dust/powder from flowing into the chamber.
-Easy to install and low-cost to maintain: the chamber can be assembled in 5 minutes.
-45 day complimentary warranty and life-long parts support. |
A human is not able to naturally breed with a chimpanzee (and being an AP Bio student I understand this subject quite well). Although if you're implying it's possible to engineer a half-human, half-chimp creature, you might actually get something along the lines of the missing link. We have similar DNA, although so do all creatures. Only about 1% of our DNA actually codes for the amino acid sequences that build proteins; the rest is sort of the "dark matter" of DNA.
"In the 1920s the Soviet biologist Ilya Ivanovich Ivanov carried out a series of experiments to create a human/non human ape hybrid. At first working with his own sperm and chimpanzee females, none of his attempts created a pregnancy. In 1929 he organized a set of experiments involving nonhuman ape sperm and human volunteers, but was delayed by the death of his last orangutan. The next year he fell under political criticism from the Soviet government and was sentenced to exile in the Kazakh SSR; he worked there at the Kazakh Veterinary-Zootechnical Institute and died of a stroke two years later.
In 1977, researcher J. Michael Bedford discovered that human sperm could penetrate the protective outer membranes of a gibbon egg. Bedford's paper also stated that human spermatozoa would not even attach to the zona surface of non-hominoid primates (baboon, rhesus monkey, and squirrel monkey), concluding that although the specificity of human spermatozoa is not confined to man alone, it probably is restricted to the Hominoidea.
In 2006, research suggested that after the last common ancestor of humans and chimpanzees diverged into two distinct lineages, inter-lineage sex was still sufficiently common that it produced fertile hybrids for around 1.2 million years after the initial split.
However, despite speculation, no case of a human-chimpanzee cross has ever been confirmed to exist"
Cross breeding similar animals is not unheard of. With the suspension of disbelief I could probably write a lot on this.
My guess is that the human genes would appear dominant, if nothing unexpected went wrong and the hybrid did not have severe mental or physical defects. Depending on which animal is female or male, the hybrid could come out quite differently (as seen with other hybrid species).
Researcher Aderonke A. Akinkugbe from the University of North Carolina at Chapel Hill has presented new findings to the 93rd General Session and Exhibition of the International Association for Dental Research.
Her study has established new evidence of a link between exposure to environmental tobacco smoke and periodontitis in US non-smokers, although the link is difficult to prove definitively because levels of exposure to secondhand smoke cannot be measured with full accuracy. The disease affects around 47 per cent of US adults, and smoking has long been named as a risk factor.
The research was based on data from 3,255 self-reported lifetime non-smokers, who had a dental examination and gave a blood sample.
Serum cotinine, a derivative of nicotine, was used as a measure of the extent of smoke exposure. Participants with more than 3 ng/ml of the substance in their blood were excluded, on the grounds that they may have been smokers or have used tobacco products in other ways (such as taking snuff). The results for the remaining participants were then fully adjusted for age, gender, ethnicity, diabetes, income and education.
A total of 57.4 per cent of the participants had serum cotinine levels between the 0.015 ng/ml detection threshold and this 3 ng/ml upper limit. In the fully adjusted analysis, non-smokers exposed to environmental tobacco smoke were found to be 1.45 times more likely to develop moderate to severe periodontitis than non-smokers who had not been exposed to secondhand smoke, a statistically significant association.
For the purposes of the study, moderate to severe periodontitis was defined as either at least two interproximal sites with attachment loss of 4 mm or more, or at least two interproximal sites with probing pocket depth of 5 mm or more.
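As a rough, hypothetical sketch of the kind of analysis this result implies (not the study's actual code or data), the snippet below uses synthetic values to show how the exclusion cut-off, the exposure threshold, and an adjusted odds ratio from logistic regression fit together. The 3 ng/ml and 0.015 ng/ml cut-offs and the sample size come from the description above; the variable names, the simulated outcome, and the reduced set of covariates (age, gender, diabetes) are illustrative assumptions.

```python
# Hypothetical illustration only: synthetic data standing in for the sample
# described above; not the study's actual dataset or analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3255  # self-reported lifetime non-smokers

df = pd.DataFrame({
    "cotinine": rng.lognormal(mean=-3.0, sigma=1.5, size=n),  # serum cotinine, ng/ml
    "age": rng.integers(30, 80, size=n),
    "female": rng.integers(0, 2, size=n),
    "diabetes": rng.binomial(1, 0.1, size=n),
})

# Exclude probable tobacco users: serum cotinine above 3 ng/ml.
df = df[df["cotinine"] <= 3.0].copy()

# Exposure indicator: cotinine at or above the 0.015 ng/ml detection threshold.
df["ets_exposed"] = (df["cotinine"] >= 0.015).astype(int)

# Simulate the outcome with a built-in effect (log-odds 0.37, i.e. OR of about 1.45).
log_odds = -2.0 + 0.37 * df["ets_exposed"] + 0.02 * (df["age"] - 50) + 0.4 * df["diabetes"]
df["periodontitis"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Adjusted logistic regression: exp(coefficient) gives the adjusted odds ratio.
model = smf.logit("periodontitis ~ ets_exposed + age + female + diabetes", data=df).fit(disp=0)
print("Adjusted OR for ETS exposure:", round(np.exp(model.params["ets_exposed"]), 2))
```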
The study “Environmental Tobacco Smoke is Associated With Periodontitis in US Non-smokers” was presented on March 13th at the event in Boston. |
SeaWeek is Australia’s annual celebration of the sea. Since 1988, SeaWeek has encouraged community awareness and appreciation for marine and coastal environments. Each year is a different theme, providing educators with specific messages and avenues through which to engage people in learning about and enjoying the ocean.
In 2019, SeaWeek will be celebrated from 2 to 8 September and this year’s theme is Ocean Literacy Principle 4: The ocean made Earth habitable.
The concept of Ocean Literacy originally began to develop in the US in 2002. At its core, Ocean Literacy is a ‘curriculum’ which provides educators with a scaffold through which to teach the key messages (principles) needed for people and oceans to co-exist. A great deal of work has gone into creating this framework, which provides educators with a scope and sequence for K-12 for each of the seven principles of Ocean Literacy.
The ocean made Earth habitable
The SeaWeek theme of OL4: The ocean made Earth habitable, has three key messages, outlined below. There are lists of topics and sub-topics for all ages on the Ocean Literacy website and adaptations of these to the Australian Curriculum.
- Most of the oxygen in the atmosphere originally came from the activities of photosynthetic organisms in the ocean. This accumulation of oxygen in Earth’s atmosphere was necessary for life to develop and be sustained on land.
- The first life is thought to have started in the ocean. The earliest evidence of life is found in the ocean.
- The ocean provided and continues to provide water, oxygen and nutrients, and moderates the climate needed for life to exist on Earth.
AAEE hopes that SeaWeek will enable all our members to incorporate the ocean (and OL4) into education programs this September from the 2nd to 8th.
The SeaWeek web page provides information on events and activities planned for the week. We can also use the website and social media networks to help promote events you may be coordinating. |
The training is part of the Farm2School grant received by Grant County Healthy Kids, Healthy Communities from New Mexico Farm to Table, which brought locally grown vegetables to 750 students in Cobre District Schools during the month of September.
The Pollinator Training will include information on creating a chemical-free, pollinator-friendly habitat that attracts native pollinators. The training will also include resources for garden educators, practitioners, small-scale farmers and food activists. The Pollinator Training is sponsored by Healthy Kids, Healthy Communities, the Volunteer Center and New Mexico Farm to Table.
For information, call HKHC coordinator, A.J. Sandoval at (575) 388-1198, ext. 14 or email [email protected] |
A detailed description of the castle is given by Jeremy Knight in The Monmouthshire Antiquary (1991). The castle surviving today consists of the east range which faced the river. Originally there would have been a curtain wall, roughly rectangular in plan, running behind the east range. There would also have been a surrounding moat with presumably drawbridges for entrances that were on the north and south sides. The surviving stone castle was greatly altered in the 20th century.
The east range consists of three towers linked by straight walls, the main hall, a water gate, a vaulted audience chamber, and a kitchen block. The north tower has two stories on a solid square base, and is thought to have been the quarters of the constable or steward of the castle. This tower was attached to the hall which stood at first floor level over a vaulted cellar or undercroft. There was a chamber between the hall and the central tower, with a spiral staircase attached to the corner of the tower.
The central tower is the largest tower. It contained an impressive vaulted chamber over a water-gate allowing ships direct access to the castle. Above the vaulted chamber there may have once been a chapel. To the south of the central tower was a smaller room, probably the withdrawing room for the lord, as a gallery then leads to the south tower, which is where the lord of Newport would have stayed on his visits to Newport. The kitchens are thought to have been behind the gallery.
The castle was the administrative centre for the lordship of Newport and in its heyday it would have dominated the town and the river crossing. However there is uncertainty about when the earliest castle in the borough was built, and whether this earlier castle was on the same site as the present castle.
Knight’s view was that the date of the foundation of the present stone-built castle must lie in the bracket 1327-1386. However a recent survey by Cadw suggests there was a series of building periods, dating back to the Clare lordship in the thirteenth century. Knight would have been unaware that the north curtain wall may have been earlier in date than the rest of the castle.
A Norman motte on Stow Hill is discussed elsewhere, but what is in question is whether there was once another castle, predating the thirteenth century, somewhere near the present castle. It is recorded in the Welsh Brut Y Tywysogion that in about 1172 AD King Henry II visited Castell Newyd ar Uysc (New Castle on the River Usk). In 1185 the king’s accounts show that six pounds fourteen shillings and sixpence were spent on repairs to the castle of Novi Burgi (i.e. Newport) and its buildings and bridge. This does not sound like the motte on Stow Hill, which was outside the borough’s boundaries away from the river and with no view of medieval Newport - since it faced towards the Severn Estuary and the mouth of the River Usk. Further it is unclear how the bridge or town could have been properly defended if there was no castle close to it.
There are various references in the thirteenth and fourteenth centuries to Newport Castle and town, including details of a siege in 1321 by Hugh Audley and other lords. The damage was so bad that in 1322 an order was given for 300 oaks ‘fit for timber’ to be felled to repair and construct the houses and fortalices (outworks) within the castle. This would seem to suggest that at this time the castle may have been constructed of timber, but the reference does not specifically refer to the main castle itself, where presumably the structure survived the assault.
The oldest plan of the castle and its original curtain wall is shown on the town map of 1750. The moat is not shown and the main buildings on the riverfront are shown out of the correct alignment. A plan for William Coxe’s An Historical Tour in Monmouthshire in 1801 appears to have been based on this earlier plan, since it shows the same mistake regarding the building range, but it does show a moat. In 1885 Octavius Morgan published a surveyed plan of what then existed in Archaeologia Cambrensis, and up-to-date plans were published by Jeremy Knight in 1991.
The archaeological evidence largely consists of a coin of Edward III excavated in 1845, and some roofing slate and some fifteenth or sixteenth century pottery excavated in 1970. Architecturally the surviving castle does not appear to be any earlier than the thirteenth century, but it is clear that the present castle is partly the same castle that was severely damaged by Owain Glyndŵr in 1403. Possibly a fourteenth century stone castle replaced an earlier castle in the same vicinity.
Other documentary evidence relating to the surroundings of the castle includes a building called ‘the long stables’ outside the castle gate in 1452, the rabbit warren in 1484, and various references to the castle green which appears to have been on the north, west and south sides of the curtain walls. By the end of the fifteenth century the castle appears to have been neglected and a survey of 1522 refers to ‘a fair hall, proper lodgings after the waterside, and many houses of offices; howbeit, in manner, all is decayed in coverings and floors, especially of timber work.’ The later history of the castle is outside the scope of this survey.
1. Forthcoming report for Cadw by Will Davies, Christopher Phillpotts and Bob Trett
2. Calendar of Close Rolls 15 Edward II Volume 1318-1323. 440
3. Monmouth Merlin and South Wales Advertiser 27 September 1845. The report also refers to coins ‘of the Henries and several base coins of the age of Constantine, but too corroded to decipher’. What the significance is of the Roman coins is impossible to say without any further archaeological evidence.
Bob Trett 2010
This book explores a highly successful group of mammals, the carnivores, in stunning visual detail. A comprehensive introduction places carnivores in the context of other living things, and an extensive catalog showcases the incredible variety of these animals. Every group of carnivores is covered, including dogs, weasels, seals, and civets. Many individual specimens are also profiled, each accompanied by clear, concise text and key data.
More extensive features focus on key species, such as the polar bear and the tiger, and include stunning, specially commissioned photography.
Conceived from the ground up to take advantage of this amazing new technology, DK Natural History: Mammals - Carnivores brings the subject to life like never before, and offers an amazing range of interactive features, including:
* Interactive galleries
DK Natural History: Mammals - Carnivores includes:
* 70 interactive and illustrated pages
* Illustrated profiles of nearly 200 individual specimens
* Expanded profiles of key species
* Hundreds of crystal-clear photographs, many specially commissioned
* Key contextual images
DK Natural History: Mammals - Carnivores forms part of an ongoing series that is a landmark in illustrated reference publishing and was authenticated by experts at the Smithsonian Institution's National Museum of Natural History. The complete series offers an unrivalled visual survey of Earth's natural history, giving a clear overview of the natural world with over 6,000 species featured.
DK - Dorling Kindersley - is an award-winning publisher of distinctive, highly visual products for adults and children. We produce books, ebooks, and apps for consumers in over 100 countries and 60 languages. Founded in London in 1974 we are enormously proud to be the world's leading illustrated reference publisher.
Everything we make strives to inform, inspire, and entertain readers of all ages. |
Martin Kramer, “Syria’s Alawis and Shi’ism,” in Shi’ism, Resistance, and Revolution, ed. Martin Kramer (Boulder, Colorado: Westview Press, 1987), pp. 237-54.
In their mountainous corner of Syria, the Alawis claim to represent the furthest extension of Twelver Shi’ism. The Alawis number perhaps a million persons—about 12 percent of Syria’s population—and are concentrated in the northwestern region around Latakia and Tartus. This religious minority has provided Syria’s rulers for nearly two decades. Syrian President Hafiz al-Asad, in power since 1970, as well as Syria’s leading military and security chiefs, are of Alawi origin. Once poor peasants, they beat their ploughshares into swords, first becoming military officers, then using the instruments of war to seize the state. The role of Alawi communal solidarity has been difficult to define, and tribal affiliation, kinship, and ideology also explain the composition of Syria’s ruling elite. But when all is said and done, the fact remains that power in Syria is closely held by Alawis.1
This domination has bred deep resentment among many of Syria’s Sunni Muslims, who constitute 70 percent of the country’s population. For at the forefront of Syria’s modern struggle for independence were the Sunni Muslims who populated the cities of Syria’s heartland. They enjoyed a privileged standing under Sunni Ottoman rule; they, along with Syrian Christian intellectuals, developed the guiding principles of Arab nationalism; they resisted the French; and they stepped into positions of authority with the departure of the French. Syria was their patrimony, and the subsequent rise of the Alawis seemed to many of them a usurpation. True, Sunni Arab nationalists had put national solidarity above religious allegiance and admitted the Alawis as fellow Arabs. But there were many Sunnis who still identified their nationalist aspirations with their Islam, and confused Syrian independence with the rule of their own community. Alawi ascendance left them disillusioned, betrayed by the ideology of Arabism which they themselves had concocted.2
Some embittered Sunnis reformulated their loyalties in explicitly Muslim terms and now maintain that the creed of the Alawis falls completely outside the confines of Islam. For them, the rule of an Alawi is the rule of a disbeliever, and it was this conviction that they carried with them in their futile insurrection of February 1982. The Alawis, in turn, proclaim themselves to be Twelver Shi’ite Muslims. This is at once an interesting and problematic claim, with a tangled history; it cannot be lightly dismissed or unthinkingly accepted. It raises essential questions about religious authority and orthodoxy in contemporary Twelver Shi’ism. And it is complicated by the fact that Syria enjoys the closest and fullest relationship with revolutionary Iran of any state. The old controversy over the origins of the Alawis has been forgotten, and the contemporary Alawi enigma is this: By whose authority, and in whose eyes, are the Alawis counted as Twelver Shi’ites?
Schism and Separatism
The Alawis are heirs to a distinctive religious tradition, which is at the root of their dilemma in modern Syria. Beginning in the nineteenth century, scholars acquired and published some of the esoteric texts of the Alawis, and these texts still provide most of what is known about Alawi doctrine. The picture that emerged from these documents was of a highly eclectic creed, embracing elements of uncertain origin. Some of its features were indisputably Shi’ite, and included the veneration of Ali and the twelve Imams. But in the instance of Ali, this veneration carried over into actual deification, so that Ali was represented as an incarnation of God. Muhammad was his visible veil and prophet, and Muhammad’s companion, Salman al-Farisi, his proselytizer. The three formed a divine triad, but the deification of Ali represented the touchstone of Alawi belief. Astral gnosticism and metempsychosis (transmigration of souls) also figured in Alawi cosmology.
These religious truths were guarded by a caste of religious shaykhs (shuyukh al-din); the mass of uninitiated Alawis knew only the exoteric features of their faith. An important visible sign of Alawi esoterism was the absence of mosques from Alawi regions. Prayer was not regarded as a general religious obligation since religious truth was the preserve of the religious shaykhs and those few Alawis initiated by them into the mysteries of the doctrine. Such a faith was best practiced in a remote and inaccessible place, and it was indeed in such rugged surroundings that the Alawis found refuge. For, as might be expected, Sunni heresiographers excoriated Alawi beliefs and viewed the Alawis as disbelievers (kuffar) and idolators (mushrikun). Twelver Shi’ite heresiographers were only slightly less vituperative and regarded the Alawis as ghulat, “those who exceed” all bounds in their deification of Ali. The Alawis, in turn, held Twelver Shi’ites to be muqassira, “those who fall short” of fathoming Ali’s divinity.3
From the late nineteenth century, the Alawis were subjected to growing pressure to shed their traditional doctrines and reform their faith. The Ottomans had a clear motive for pressing the Alawis to abandon their ways. Alawi doctrine attracted much interest among French missionaries and orientalists, some of whom were convinced that the Alawis were lost Christians. The Ottomans drew political conclusions, and feared a French bid to extend France’s religious protectorate northward from Lebanon to the mountains overlooking Tartus and Latakia. At the same time, the Alawis themselves could not but feel the effects of the Muslim revival that swept through Syria in the second half of the nineteenth century and the popular Muslim backlash against the Tanzimat. These two pressures combined to produce a reformist drive among a handful of Alawi shaykhs, which enjoyed the encouragement of the Ottoman authorities. The result was some government-financed construction of mosques, which were built almost as talismans to ward off the foreign eye. But since the Ottoman purpose was to assimilate the Alawis, the formula of prayer in these first mosques was Sunni Hanafi, in accord with the predominant rite in the empire. The authorities had no reason to encourage the few reformist Alawi shaykhs to lead their coreligionists in any other direction.
All this produced few lasting effects. The influence of this early reformism was very limited, and most of the Alawi religious shaykhs would have nothing to do with it. The rapid turnover of Ottoman governors also meant that pressure upon the Alawis was not maintained. Since these governors could extract very few taxes from the Alawis, it seemed unsound fiscal policy to spend revenues on them. In the twilight years of the Ottoman Empire, the Alawis remained essentially as they had been for centuries, divided and unassimilated, with their esoteric doctrines still intact. Few Alawis had ever crossed the portal of a mosque.4
When the Ottoman Empire fell, the French claimed Syria as their share, and the Alawis found their new rulers eager to protect and patronize them. French policy was generally one of encouraging Alawi separatism, of setting Alawis against the Sunni nationalists who agitated for Syrian independence and unity. From 1922 to 1936, the Alawis even had a separate state of their own, under French mandate. But within their state, the Alawis were still the economic and social inferiors of Sunnis, and these relationships could not be undone by simple administrative decree. There was, however, one form of dependence which had to be broken, if the Alawis were to feel themselves equal to Sunnis. Ottoman authorities had imposed Sunni Hanafi law wherever their reach extended, a law administered by Sunni courts. Alawi custom had prevailed in Alawi civil matters, in which the Ottomans had no desire to intervene, but this custom had no legal standing. In the new order, a pressing need arose to give the Alawis recognized communal status, courts, and judges. This was a daunting task, for Alawi custom was too dependent upon traditional social authority to be reduced to codified principles and applied in the courts.
A solution was found in 1922, by importing the law and some of the judges. In that year, the French authorized the establishment of separate religious courts for the Alawis (mahakim shar’iyya alawiyya), and it was decided that they would rule in accordance with the Twelver Shi’ite school of law.5 This school was as remote from Alawi custom as any other. Its principal advantage lay in the obvious fact that it removed Alawi affairs to separate but equal courts and placed Alawis squarely outside the jurisdiction of their Sunni neighbors and overlords. But since there were no Alawis sufficiently expert in Twelver Shi’ite jurisprudence to serve as judges, Twelver Shi’ite judges had to come up from Lebanon to apply the law.6 The Alawis, then, were spared subordination to Sunni courts by embracing the Twelver Shi’ite school, but they were incapable of judging themselves according to its principles. Not a single Alawi had been to Najaf, to hear the lectures delivered in its academies by the recognized Twelver Shi’ite jurisprudents of the day. Yet there were a few Alawi shaykhs who did delve in books of Twelver jurisprudence, and these were soon given formal appointments as judges in Alawi religious courts. It seems likely that what prevailed in these courts was a very rough notion of Twelver Shi’ite jurisprudence, modified still further to accommodate Alawi custom.
In laying hand on the Twelver law books, the Alawi religious shaykhs had borrowed all that they cared to borrow from the Twelver tradition. These texts gave them a useful store of precedents for application in the narrow field of civil law. But in the weightier matter of theology, Alawi shaykhs clung to their own doctrine. They had no use for other branches of Twelver scholarship, and made no effort to put themselves in touch with Twelver Shi’ite theologians and jurisprudents elsewhere. Once Alawi judges were installed in the Alawi religious courts, Lebanese Twelver judges ceased to frequent the Alawi region, and the Alawis were content to remain cut off from the body of Twelver Shi’ism. As a result, Lebanon’s Twelver Shi’ites were left completely in the dark about the beliefs of the Alawis.
This emerges from an anecdote about a visit to Latakia in the 1930s by Lebanon’s preeminent Twelver divine, Shaykh Abd al-Husayn Sharaf al-Din of Tyre. To his host, a leading Sunni notable and sayyid of Latakia, he said: “I have come first of all to visit you and then to ask about the doctrine of the Alawis among whom you live. I have heard it said that they are ghulat.”7 In this curious scene, a Twelver Shi’ite inquired of a Sunni about the beliefs of an Alawi. In fact, the Alawi shaykhs were no more prepared to bare their doctrines to Twelver Shi’ites than to Sunnis. The Alawis had simply chosen to judge themselves, in their own courts, by the principles of Twelver Shi’ite jurisprudence. The religious shaykhs had not decided to submit their beliefs to the scrutiny of Twelver Shi’ites, or to recognize the authority of living Twelver divines.
Political separatism was compatible with Alawi religious esoterism and it won many adherents among the Alawi religious shaykhs. But as the French mandate wore on, nationalist agitation for Syrian independence and unity caused the French to falter in their support of Alawi separatism. Without unqualified French support, separatism did not stand a chance of success. Cautious Alawis instead began to seek Sunni guarantees for the fullest possible Alawi autonomy and equality in a united Syrian state. The Sunnis, in turn, wished to integrate the Alawi territory in a united Syria with the least amount of Alawi resistance. These interests converged in 1936 as Syria approached independence. To smooth the integration, some thought that a Sunni authority should recognize the Alawis as true Muslims, an expedient recognition which would serve the political interests of Alawis and Sunnis alike. But in order for the recognition to have the desired effect, it would have to declare the Alawis to be believing and practicing Muslims.
The recognition came in July 1936, and took a reciprocal form. The Alawis themselves took two steps. First, a group of Alawi religious shaykhs (rijal al-din) issued a proclamation, affirming that the Alawis were Muslims, that they believed in the Muslim profession of faith, and performed the five basic obligations (arkan) of Islam. Any Alawi who denied that he was a Muslim could not claim membership in the body of Alawi believers. Second, an Alawi conference held at Qardaha and Jabla submitted a petition to the French foreign ministry, stressing that “just as the Catholic, the Orthodox, and the Protestant are yet Christians, so the Alawi and Sunni are nevertheless Muslims.”8 At the same time, the Sunni mufti of Palestine, Haj Amin al-Husayni, issued a legal opinion (fatwa) concerning the Alawis, in which he found them to be Muslims and called on all Muslims to work with them for mutual good, in a spirit of Islamic brotherhood.9
There was more to this exchange than met the eye. The Alawi proclamation and petition did not renounce any of the esoteric beliefs attributed to the Alawis. Their very existence could not be divulged. It was widely believed that the Alawis kept some of their beliefs secret, and so their own public elucidation of their doctrine could not be expected to have much effect. But Haj Amin al-Husayni’s fatwa was another matter since it issued from a prominent Sunni authority, in his dual capacity as mufti of Palestine and president of the General Islamic Congress in Jerusalem. Yet the fatwa also was problematic. Why did a Sunni authority in Jerusalem, and not in Damascus, act to recognize the Alawis? After all, there were no Alawis in Palestine, and Haj Amin had not made an independent investigation of their beliefs or rituals. Was he moved by a pure desire for ecumenical reconciliation?
It seemed unlikely. More to the point, Haj Amin had very close ties with those leaders of the pan-Arabist National Bloc who led the struggle for a united Syria. The pan-Arab nationalists in Damascus probably initiated the move, not Haj Amin, who was simply their obliging cleric. They obviously turned to Jerusalem because they could not extract comparable recognition of their Alawis from Sunni religious authorities in Damascus. These authorities apparently were not prepared to soil their reputations by declaring night to be day since they refused to regard the Alawis as Muslims. So when Syria’s nationalists were pressed to provide Sunni recognition of the Alawis, they secured it from a dubious source. It would be accurate to say that in sealing this deal of recognition, both Alawis and Sunnis extended their left hands.
Excluded from all this were the Twelver Shi’ites, although there may have been an attempt to involve one of them as well: Shaykh Muhammad al-Husayn Al Kashif al-Ghita of Najaf. This ecumenical evangelist was keen to strike religious bargains with Christian, Sunni, and Druze, so long as these served the sublime political purposes of Arab unity. This was undoubtedly his motive in entering into correspondence with Shaykh Sulayman al-Ahmad of Qardaha. Shaykh Sulayman held an exalted position among the Alawis. He was the spiritual leader of the majority Qamari section of Alawis and bore the formal title of “servitor of the Prophet’s household” (khadim ahl al-bayt). A poet of reputation, he had been admitted to the Arab Academy in Damascus.10 Yet he bore the responsibility of a master entrusted with all of the powerful esoteric teachings of the Alawi faith, and these he was bound to preserve from the prying divine from Najaf. Their correspondence was apparently never published and yielded no public gesture of recognition. Perhaps even Shaykh Muhammad al-Husayn realized that he had reached the limits of expediency.11
Certainly not a word of public comment on the standing of the Alawis was heard from Najaf or Qom, the great seats of Twelver Shi’ite learning. An open endorsement of the Alawis by a leading Twelver Shi’ite divine would have carried much more weight than the Alawis’ own self-interested protestations, or the questionable fatwa from Jerusalem. But how could the leading lights in Najaf and Qom embrace the Alawis, when not one Alawi had attended their religious academies? When the works of the medieval Twelver theologians, still read and revered in these academies, described the Alawis as ghulat? When the news from Syria brought word that an epileptic, illiterate shepherd named Sulayman al-Murshid had unleashed a wave of messianic expectations among many Alawis, who acclaimed him a nabi, a prophet? On the one hand, much influence might be gained by laying claim to this community for Twelver Shi’ism; on the other, much authority might be lost by endorsing people of questionable belief. Recognition of the Alawi claim was obviously a matter that required exacting study in Najaf and Qom.
In 1947, Ayatollah Muhsin al-Hakim, the leading Twelver Shi’ite divine in Najaf, turned his attention to the Alawis. He wrote to Shaykh Habib Al Ibrahim, the Twelver mufti of the Lebanese Bekaa Valley, asking him to visit the Alawi region on his behalf, and to provide a first-hand report on their beliefs and ways. Shaykh Habib accepted the mission and traveled extensively among the Alawis, meeting with reformist shaykhs and offering religious guidance. The Lebanese emissary concluded that there was a clear need to send some intelligent young Alawis to Najaf, where they could engage in proper theological and legal studies under the masters. They would then return home radiant with knowledge to enlighten their brethren. Ayatollah Hakim agreed to bear the expense of this missionary effort, and twelve Alawi students left for Najaf in 1948.
In a short time, all but three of the students had dropped out. On their arrival in Najaf, they met with hostility from some of the Twelver Shi’ite men of religion, who set conditions upon their acceptance as Muslims and even demanded that they submit to purifying ablutions. In Najaf, the Alawi students found that they were still called ghulat, even to their faces. Years later, Ayatollah Hakim expressed his regret at this treatment, saying that “it seems this was the result of some ignorant behavior by the turbanned ones.” But no one intervened at the time. The young students, cast into strange surroundings, could not bear these humiliations for long, and most returned home.12
No one suggested for a moment that older Alawi religious shaykhs be sent to Najaf. Instead, Shaykh Habib proposed the establishment of a local society to promote the study of Twelver Shi’ite theology and jurisprudence. In this manner, Alawi shaykhs could receive proper guidance in an organized framework. The Ja’fari Society, established in response to Shaykh Habib’s proposal, had its headquarters in Latakia, and branches in Tartus, Jabla, and Banias. In addition to diffusing Twelver doctrine, the society undertook to construct mosques and lobbied for official recognition of the Twelver Shi’ite school by independent Syria. For with Syrian independence in 1946, the separate Alawi religious courts had been abolished, and Alawis were made to appear before Muslim religious courts that recognized only the Sunni schools.
The recognition sought by the Ja’fari Society was finally extended in 1952. Thereafter, the Twelver school was deemed equal to other recognized schools of law and its precepts could be applied by Muslim religious courts.13 The Alawis, then, had won some formal recognition from the Syrian government. But they still had not received the endorsement of the Twelver Shi’ite authorities of Najaf and Qom. In fact, all of the recommendations made by Ayatollah Hakim’s Lebanese emissary assumed that the Alawis were deficient in their understanding of true religion and still needing much knowing guidance.
In 1956, another Twelver Shi’ite emissary called upon the Alawis: Muhammad Rida Shams al-Din, a scholar at Najaf and a member of one of South Lebanon’s most respected clerical families. His trip was funded by Ayatollah Mohammad Husayn Borujerdi, the very highest Twelver Shi’ite authority of the day, who had his seat at Qom and a large academy at Najaf. Ayatollah Borujerdi was very keen on Islamic ecumenism and invested much effort in pursuing a Sunni-Shi’ite reconciliation. Leading the Alawis back to the fold seemed an obvious motif for still another kind of ecumenical initiative, and Borujerdi was willing to bear the expense of a second group of Alawi students, who would study at his academy in Najaf.
The Lebanese emissary won an enthusiastic reception, and he immediately published a sympathetic account of the Alawis.14 But nothing came of the plan to bring a second group of students to Najaf. Memory of the ill treatment meted out to the first group was still fresh, but there may have been a more compelling reason. For in 1956, one of the remaining Alawi students from the first mission wrote a book about the Alawis, which was published in Najaf. While generally apologetic in tone, the book leveled some pointed criticisms at Alawi doctrine and the structure of Alawi religious authority. It was ignorance to deny the ignorance of Alawis in matters of religion, the student wrote. He denounced the “bloated army” of unschooled Alawi religious shaykhs, who inherited their status and lived off tithes exacted from believers whom they kept in the dark.15 If these were the sorts of ideas that the brightest Alawi students were bound to bring back from Najaf, then an unwillingness among the Alawi shaykhs to organize a second student mission would be perfectly understandable. No more Alawi students reached Najaf until 1966, when three came to study under Ayatollah Hakim. One of them reported that his group did not encounter the same visceral hostility which enveloped their predecessors.16 But by the late 1960s, Syria’s ruling Ba’th party had entered upon a collision course with the rival Iraqi Ba’th party, and antagonism has generally plagued Syrian-Iraqi relations ever since. For Alawi students, Najaf was again beyond reach.
Several young Alawis preferred Cairo to Najaf anyway, and entered programs of religious studies at Al-Azhar. In 1956, an Azhar shaykh appeared in Qardaha with offers of scholarships for ten Alawi students.17 With the establishment of the Egyptian-Syrian union in 1958, Alawis came under even greater Sunni pressure, and were encouraged to get their religious training in Cairo. There is no way of knowing how many Alawi students passed through Al-Azhar during those years and later, but they could not have been fewer than those who reached Najaf. Al-Azhar provided an education with an obvious Sunni bias and offered only rudimentary instruction in Twelver Shi’ite jurisprudence. But, unlike the Najaf academies, Al-Azhar granted regular diplomas which were recognized in Syria, and this made it a very attractive alternative.18 So the handful of Alawi religious shaykhs with wider education were divided in their attachments between Najaf and Cairo, between Twelver Shi’ism and Sunnism. This was the ambiguous situation in 1966, when power in Syria was seized by Alawi hands.
To Legitimize Power
The rise of Alawi officers to positions of influence and power put a sharp edge on the religious question. The new regime’s radical economic and social policies stirred opposition, especially among urban Sunni artisans, petty traders, and religious functionaries. As the regime’s base became more narrowly Alawi over time, opponents found it convenient to transfer the political debate to the highly emotive plane of religion. Those who did so argued that the regime’s Arabism merely legitimized Alawi political hegemony; its socialism simply sanctioned the redistribution of Muslim wealth among the Alawis; and its secularism provided a pretext for stifling Muslim opposition. Fundamentalist opponents of the regime sought to draw the boundaries of political community in such a way as to exclude the Alawis and did so by relying upon their own exacting definition of Islamic orthodoxy.
This situation was rich in irony. The Alawis, having been denied their own state by the Sunni nationalists, had taken all of Syria instead. Arabism, once a convenient device to reconcile minorities to Sunni rule, now was used to reconcile Sunnis to the rule of minorities. The cause of Sunni primacy, once served by having the Alawis recognized as Muslims, now demanded that the Alawis be vilified as unbelievers.
In February 1971, Hafiz al-Asad became the first Alawi president of Syria. Rising from a poor Qardaha family, he played an important role in dismantling the old order and seized power by crushing an Alawi rival. His elevation to the presidency marked a turning point. The significance of this office in Syria had been symbolic rather than substantive, but the presidency had always been held by Sunnis, and its passage to an Alawi proclaimed the end of Sunni primacy. In January 1973, the government went still further and released the text of a new draft constitution. This document was also of symbolic significance, for it sought to legitimize the radical changes made by the regime. Its message was emphatic: Unlike pre-Ba’th constitutions, this one did not affirm that Islam was the religion of state. This grievous sin of omission precipitated a crisis, as Sunni demonstrators poured out of the mosques and into the streets. General strikes closed down Hamah, Homs, and Aleppo. Asad, who was taken aback, proposed the insertion of an amendment in the constitution, stipulating that the president of the state shall be Muslim. But the situation actually deteriorated after Asad’s offer. At issue was not the constitution, but Alawi hegemony. The violent unrest ended only with the entry of armored units into the cities.19
In 1973 the Alawi religious shaykhs stumbled over one another in their rush to affirm that the Alawis were Muslims, that they were Twelver Shi’ites through and through, and that other beliefs attributed to them were calumnies.20 But these Alawi claims were in dire need of some external validation. Much had changed since 1936, and Sunni recognition would not do. The higher Sunni religious authorities in Syria had already knelt before Asad, and no one regarded them as capable of thinking or speaking independently on any issue. What was needed was some form of recognition from a Twelver Shi’ite authority, who could buttress the Alawis’ own problematic claim that they were Twelver Shi’ites.
The solution appeared in the person of the Imam Musa al-Sadr.21 By 1973, this political divine had made much progress in his effort to stir Lebanon’s Twelver Shi’ites from their lethargy. His most impressive achievement had been the establishment of the Supreme Islamic Shi’ite Council (SISC), authorized by a 1967 law that declared the Twelver Shi’ites a legal Lebanese community in the fullest sense. With the establishment of the SISC, a question arose as to whether the small Alawi community in Tripoli and the Akkar district did or did not come under its jurisdiction. Numbering about 20,000, these Alawis in Lebanon were closely tied to those in Syria, and belonged to the same tribes. Although they were not recognized by Lebanese law as a distinct community, they generally tended their own affairs. The Alawis in the north of Lebanon had no historical ties to the Twelver Shi’ites in the south and east.
In 1969, Musa al-Sadr became chairman of the SISC and attempted to bring Lebanon’s Alawis under his jurisdiction. A strong streak of ecumenism ran through Musa al-Sadr’s highly politicized interpretation of Shi’ism. Even as he fought Sunni opinion over the recognition of Lebanon’s Twelver Shi’ites, he did not stop preaching the necessity for Muslim unity. The uncomplimentary references to the Alawis in the Twelver sources would not have deterred him. He may also have been eager to extend his reach into the north of Lebanon. Inclusion of the Alawis, however few in number, would give him a constituency in a region where he had none.
But to bring Lebanon’s Alawis under his wing, Musa al-Sadr first had to treat with the Alawi religious shaykhs in Syria. The dialogue began in 1969, and dragged on for four years. A statement by the SISC made only vague allusion to “difficult historical circumstances” and “internal disputes,”22 but it was not hard to imagine what blocked an agreement. The Alawi religious shaykhs in Syria feared that their coreligionists in Lebanon might slip from their grasp, and they were also mindful that some Lebanese Alawis still hoped to secure official recognition of the Alawi community as separate and distinct from all others. The religious shaykhs probably never imagined that they would face a serious challenge issued by a Twelver Shi’ite divine from Lebanon. They had chosen Twelver Shi’ite law to guarantee their religious independence, not to diminish it. So they drew out the dialogue with Musa al-Sadr, withholding their assent.
Then came the Sunni violence of 1973 and the reiterated charge that the Alawis were not Muslims. The disturbances shook the Syrian Alawi elite, who then pressed the Alawi religious shaykhs to look differently at Musa al-Sadr’s overtures. If Musa al-Sadr would throw his weight behind the argument that Alawis were Twelver Shi’ites, this would undermine at least one pillar of the Sunni indictment of the regime. Since the Alawis of Lebanon did not differ in belief from those of Syria, their formal inclusion in the Twelver Shi’ite community would constitute implicit recognition of all Alawis. For his part, Musa al-Sadr may have begun to realize that his recognition of the Alawis might bring political advantages which he had not previously imagined. The regime of Hafiz al-Asad needed quick religious legitimacy; the Shi’ites of Lebanon, Musa al-Sadr had decided, needed a powerful patron. Interests busily converged from every direction.
The covenant was sealed in a Tripoli hotel in July 1973. In a public ceremony, Musa al-Sadr, in his capacity as chairman of the SISC, appointed a local Alawi to the position of Twelver mufti of Tripoli and northern Lebanon. Henceforth, Lebanon’s Alawis were to come under the jurisdiction of an appointee of the SISC. A delegation of Alawi religious shaykhs from Syria witnessed the event, and Musa al-Sadr delivered a speech justifying the appointment. Lebanon’s Alawis and Twelver Shi’ites were partners since both had suffered from persecution and oppression. “Today, those Muslims called Alawis are brothers of those Shi’ites called Mutawallis by the malicious.” What of the internal unrest in Syria? “When we heard voices within and beyond Syria, seeking to monopolize Islam, we had to act, to defend, to confront.” Then Musa al-Sadr roamed still further afield: “We direct the appeal of this gathering to our brethren, the Alevis of Turkey. We recognize your Islam.” The new mufti, Shaykh Ali Mansur, joined in the ecumenical oratory: “We announce to those prejudiced against us that we belong to the Imami, Ja’fari [Twelver] Shi’a, that our school is Ja’fari, and our religion is Islam.” Nor did Musa al-Sadr lose the opportunity to call for an end to tension between Syria and Lebanon, which had resulted from a disagreement over the role of Palestinian organizations in Lebanon.23
The Alawi religious shaykhs in Syria had given the appointment their blessing. But this deal was done at the expense of another Alawi party: those Lebanese Alawis who wanted to preserve their separate identity, and perhaps win official recognition for their community. This opposition was championed by a group known as the Alawi Youth Movement. In a series of statements, the group maintained that the Alawis, while Twelver Shi’ites, were a separate community and deserved separate status under the law. The SISC was attempting to assimilate the Alawis against their will.24 Tension in the Alawi quarter of Tripoli grew as the day of the ceremony approached, and when it arrived, security forces set up roadblocks at entrances to the city and the affected quarter. Opponents of the mufti’s appointment held a rally that evening, featuring the inevitable demonstration of shooting into the air and a call to the community to boycott the new mufti.25 Tension ran high for weeks afterward, and, in one instance, partisans and opponents of the new mufti even exchanged gunfire.26 This internal dispute forced Musa al-Sadr to tread carefully, and the SISC issued a clarification, explaining that the purpose of the mufti’s appointment was not to subsume the Alawis, but to provide them with a service that they lacked.27
But regardless of what happened in Tripoli, Syria’s Alawis could claim to have Musa al-Sadr’s endorsement. Did it amount to much? Musa al-Sadr did have extensive ties in Qom, his place of birth, and Najaf, where he had studied. His father had been one of the great pillars of scholarship in Qom. So it is interesting to note by what higher authority Musa al-Sadr claimed to act in the matter of the Alawis. His initiative, he declared, was part of his ecumenical work on behalf of the Islamic Research Academy, a Nasserist appendage of Al-Azhar.28 This was one of those Sunni arenas in which Musa al-Sadr regularly appeared as part of his self-appointed ecumenical mission. Unlike other Lebanese Twelver emissaries to the Alawis, Musa al-Sadr did not represent a leading Twelver divine at Najaf or Qom. He acted solely in his official Lebanese capacity, with the sanction of an obscure academy in Cairo. For the embrace of 1973 was political, not theological. Syria’s Alawis certainly did not plan to submit to Twelver authority, and Musa al-Sadr’s move did not diminish their religious independence by a whit. They simply surrendered the small Alawi community of Lebanon, as one would force a marriage of convenience upon a reluctant daughter. Musa al-Sadr took the vow, and Hafiz al-Asad provided the dowry. Without that Syrian support, Musa al-Sadr’s movement might not have weathered the storm which soon descended upon Lebanon.29
Still, the influence of Musa al-Sadr did wane following the outbreak of civil war. The Syrian regime, then, did not rest content with his endorsement, but sought to cultivate still another Shi’ite divine with an ambition as vaunting as Sadr’s. This was Ayatollah Hasan al-Shirazi, a militant cleric from a leading Iranian-Iraqi family of religious scholars. In 1969, Shirazi’s incendiary preaching in Karbala had led Iraqi security authorities to arrest and torture him. He fled or was expelled from Iraq in 1970 and soon found his way to Lebanon, where he had spent an earlier period of exile. There he began to gather a following, and like Sadr he received Lebanese citizenship by special dispensation in 1977.30 A certain mystery enveloped Shirazi’s affiliations, for he, too, seems to have enjoyed a friendship of convenience with Hafiz al-Asad. Asad must have recognized Shirazi’s value as a possible card to play against both Iraq and Musa al-Sadr, should the need arise, while the exiled Shirazi desperately needed a patron.31 It is not surprising, then, that Shirazi should also have made himself a champion of the Alawis, placing his coveted stamp of approval upon their qualifications as Twelver Shi’ite Muslims. Shirazi argued, in a preface to an Alawi polemical tract, that the beliefs of the Alawis conformed in every respect to those of their Twelver Shi’ite brethren, a fact which he had ascertained through personal observation.32 Shirazi’s explicit endorsement, combined with Sadr’s, constituted a forceful argument for Alawi claims. But the obvious political expediency of this move rendered it as suspect as any previous endorsement. Shirazi, after all, was in exile, and in sore need of Syrian support. If he were to build his influence in Lebanon with Syrian backing, could he do less than Sadr had done? It is idle to speculate how this alliance might have unfolded: in May 1980, Shirazi was shot to death in a Beirut taxi.
As to the actual doctrines expounded by the Alawi religious shaykhs, it is impossible to know whether they underwent any change as a result of these embraces. Perhaps the younger, educated shaykhs formulated some sort of Alawi reformism and made a closer study of Twelver theology and philosophy. Perhaps their elders yielded on a few points of detail. But in an esoteric faith, doctrinal controversies are kept in a closed circle of the initiated, and these held their tongues, except to assure their critics that they were Twelver Shi’ites.
Yet the question of religious doctrine was inseparable from that of religious authority, and here there was no change. Syria’s Alawis did not recognize external authority, and they did not bind themselves as individuals to follow the rulings of the great living ayatollahs. On this crucial point, they differed from all other Twelver Shi’ites, and as long as they refused to recognize such authority, they could not expect reciprocal recognition by any divine of the stature of Ayatollah Abol Qasem Kho’i in Najaf, or Ayatollah Kazem Shariatmadari in Qom. It is worth noting that Ayatollah Shariatmadari, who had very broad ecumenical interests, did correspond with Shaykh Ahmad Kiftaru, Sunni grand mufti of Syria and faithful servant of the Syrian regime. Shaykh Ahmad even visited Qom during that tense summer of 1973, and one is tempted to speculate that he urged Shariatmadari to recognize the Alawis.33 But Shariatmadari kept his silence, and made no gesture to Syria’s Alawi religious shaykhs, who claimed so insistently to be his coreligionists.
The Impact of Iran’s Revolution
In June 1977, Ali Shariati was laid to rest in Damascus, near the mausoleum of Zaynab. Regarded as something of an Iranian Fanon, Shariati offered a radical reinterpretation of Shi’ism, winning a devoted following and the scrutiny of SAVAK. When he died suddenly in London, his admirers charged foul play and arranged to have him buried in Damascus. The choice of Damascus as a place where Shariati’s mourners might safely congregate was not accidental. After 1973, the Syrian authorities provided haven and support for numerous Iranians who were active in the religious opposition to the regime of the Shah. Musa al-Sadr, who officiated at Shariati’s funeral, had much to do with encouraging these ties, since he openly collaborated with the Iranian religious opposition.
The Syrians, for their part, could not have imagined that this motley assortment of Iranian émigrés and dissidents might ever come to power in Iran. But it was no trouble to keep them, and they did have links to some leading Twelver Shi’ite clerics. If the endorsement of Ayatollah Shariatmadari could not be had, then perhaps that of Ayatollah Khomeini in Najaf might be secured. After all, Khomeini subordinated religious tradition to the demands of revolutionary action, and, like Musa al-Sadr, he needed influential friends. It is obviously impossible to know whether pursuit of such recognition for the Alawis played any role in the support given by the Syrian regime to the Iranian religious opposition. The Syrians may simply have wished to indulge Musa al-Sadr and defy the Shah. But Syrian support was steady, and in 1978, when Khomeini was forced out of Iraq and denied entry to Kuwait, he considered seeking refuge in Damascus before settling upon Paris.
The close relationship between Syria and the Islamic Republic of Iran was rooted in this early collaboration of convenience. A full account of Syrian-Iranian cooperation since 1979 would catalogue the stream of Iranian visitors to Damascus, and would mention Syria’s tolerance of a contingent of Iranian Revolutionary Guards in Syrian-controlled Lebanon. It would explain Iran’s silence in the face of pleas by the Sunni Muslim Brotherhood for moral support in its struggle against the Syrian regime. And it would consider how Islamic Iran justified waging ideological warfare against a Ba’thist, Arab nationalist regime in Iraq, while aligning itself with a Ba’thist, Arab nationalist regime in Syria. Common hatreds and ambitions inspired this expedient alliance between two incongruous political orders. The Iraqi regime was hateful to both Iran and Syria. In Lebanon, Iran realized that it could not extend support to its clients there without Syrian cooperation; Syria knew that without Iran it could not control those Lebanese Shi’ites who believed that they were waging sacred war against the West. A sense of shared fate, not shared faith, bound these two regimes together.
The Syrian relationship with Islamic Iran did enhance the religious legitimacy of Syria’s rulers, but in a very subtle and indirect way. When these Twelver clerics—Khomeini’s closest students and disciples—visited Damascus, they spoke only the language of politics. They did not utter any opinion on the beliefs, doctrines, or rituals of the Alawis, about which they knew no more than any other outsider. Instead, they spoke of political solidarity, appealing to all Muslims to set aside their religious differences, to unite to meet the threats of imperialism, colonialism, and Zionism. The Syrians, they argued, had made great sacrifices in the war against these evils. This particular commitment is the very essence of Islam in the minds of Iran’s radical clerics, and they have not inquired further. To do so would only open a chasm between them and their self-proclaimed coreligionists.
But the Iranian revolution has increased the pressure for religious reform within the Alawi community. In August 1980, Asad reportedly met with Alawi communal leaders and religious shaykhs at Qardaha. Asad called upon the religious shaykhs to modernize and make reforms and to strengthen the tenuous links of the community with the main centers of Twelver Shi’ism. To this end, two hundred Alawi students were to be sent to Qom, to specialize in Twelver Shi’ite jurisprudence.34 These Qardaha gatherings are not open affairs, and it is impossible to determine the accuracy of this account. But once the star of Twelver Shi’ism had risen in Iran and Lebanon, the regime had every reason to press the religious shaykhs to compromise and to do their share to deflate the Sunni argument against Alawi primacy.
The departure of hundreds of Alawi graduates for the Qom academies would completely undermine the traditional structure of religious authority in the Alawi community. The old beliefs would wither; the new creed might not take root. Whether so many students have been sent out on their irrevocable course is impossible to say, for the consent of the religious shaykhs would not be given without long, procrastinating thought. But Hafiz al-Asad is waiting, and the guardians of Alawi faith may yet be made to sacrifice eternal truth to ephemeral power.
1 On the general issue of sectarianism in modern Syria, see Nikolaos van Dam, The Struggle for Power in Syria: Sectarianism, Regionalism and Tribalism in Politics, 1961-1978 (London: Croom Helm, 1979); Itamar Rabinovich, “Problems of Confessionalism in Syria,” in The Contemporary Middle East Scene, eds. Gustav Stein and Udo Steinbach (Opladen: Leske Verlag, 1979), 128-32; Elizabeth Picard, “Y a-t-il un problème communautaire en Syrie?” Maghreb-Machrek, no. 87 (January-February-March 1980): 7-21; and Michel Seurat, L’État de barbarie (Paris: Seuil, 1989), 84-99. On the Alawis in society and politics, see R. Strothmann, “Die Nusairi im heutigen Syrien,” Nachrichten der Akademie der Wissenschaften in Göttingen, phil.-hist. Kl. Nr. 4 (1950): 29-64; Moshe Ma’oz, “Alawi Officers in Syrian Politics, 1966-1974,” in The Military and State in Modern Asia, ed. H.Z. Schriffrin (Jerusalem: Academic Press, 1976), 277-97; Peter Gubser, “Minorities in Power: The Alawites of Syria,” in The Political Role of Minority Groups in the Middle East, ed. R.D. McLaurin (New York: Praeger, 1979), 17-48; Hanna Batatu, “Some Observations on the Social Roots of Syria’s Ruling Military Group and the Causes for Its Dominance,” Middle East Journal 35 (1980): 331-44; Mahmud A. Faksh, “The Alawi Community of Syria: A New Dominant Political Force,” Middle Eastern Studies 20 (1984): 133-53; and Daniel Pipes, Greater Syria: The History of an Ambition (New York: Oxford University Press, 1990), 166-88.
2 On Sunni opposition to Alawi primacy, see Hanna Batatu, “Syria’s Muslim Brethren,” MERIP Reports 9, no. 12 (November-December 1982): 12-20; Umar F. Abd-Allah, The Islamic Struggle in Syria (Berkeley: Mizan Press, 1983); Thomas Mayer, “The Islamic Opposition in Syria, 1961-1982,” Orient 24 (1983): 589-609; and Raymond A. Hinnebusch, Authoritarian Power and State Formation in Ba’thist Syria (Boulder, Colo.: Westview Press, 1990), 276-300.
3 For the main features of Alawi religious doctrine and organization, see René Dussaud, Histoire et religion des Nosairis (Paris: Bouillon, 1900); Encyclopaedia of Islam, 1st ed., s.v. “Nusairi” (Louis Massignon); Heinz Halm, Die islamische Gnosis: die extreme Schia und die Alawiten (Zurich: Artemis Verlag, 1982), 284-355; Matti Moosa, Extremist Shiites: The Ghulat Sects (Syracuse: Syracuse University Press, 1988), 255-418; and Fuad I. Khuri, Imams and Emirs: State, Religion and Sects in Islam (London: Saqi Books, 1990), 136-41, 198-202. For a compendium of hostile Twelver references to the Alawis, see the clandestine publication of the Syrian Muslim Brotherhood, Al-Nadhir, 22 October 1980.
4 On Ottoman-sponsored mosque construction for the Alawis, see Mahmud al-Salih, Al-Naba al-yaqin an al-alawiyyin (Damascus, 1961), 134-37; and Strothmann, 51.
5 Oriente Moderno 1 (1922): 732; 4 (1924): 258-59.
6 On the appearance of Lebanese judges in the Alawi region, see Encyclopaedia of Islam, “Nusairi,” and Jacques Weulersse, Les pays des Alaouites, vol. 1 (Tours: Arrault, 1940), 261.
7 Ali Abd al-Aziz al-Alawi, Al-Alawiyyun (Tripoli [Lebanon]: n.p., 1972), 43.
8 Texts in Munir al-Sharif, Al-Muslimun al-alawiyyun, 2d ed. (Damascus: Dar al-umumiyya, 1960), 106-8.
9 Full texts with translations in Paulo Boneschi, “Une fatwà du Grand Mufti de Jérusalem Muhammad Amin al-Husayni sur les Alawites,” Revue de l’histoire des religions 122, no. 1 (July-August 1940): 42-54; nos. 2-3 (September-December 1940): 134-52.
10 On Shaykh Sulayman, see Al-Irfan (Sidon) 28 (1938): 520-21, 648.
11 Although he may have yielded to temptation after all. According to the same Alawi source, Shaykh Sulayman managed to secure from Najaf a license (ijaza) as an interpreter of law (mujtahid) although he never set foot in the Shi’ite shrine city; see Salih, Al-Naba al-yaqin, 138. This could only have been at the instance of Shaykh Muhammad al-Husayn. But there is no corroboration for this report in other Alawi published sources.
12 On the first student mission, see Alawi, Al-Alawiyyun, 38-41; Muhammad Rida Shams al-Din, Ma’a al-alawiyyin fi Suriya (Beirut: Matba’at al-insaf, 1956), 48-50; and Al-Irfan (Sidon) 37 (1950): 337-38.
13 On the Ja’fari Society, see Shams al-Din, Ma’a al-alawiyyin, 50-52; Alawi, Al-Alawiyyun, 41-42; text of the official decrees recognizing the school, ibid., 47-49.
14 On Borujerdi’s role, see Shams al-Din, Ma’a al-alawiyyin, 19, 43.
15 Ahmad Zaki Tuffahah, Asl al-alawiyyin wa-aqidatuhum (Najaf, 1957), 5, 52-53.
16 Alawi, Al-Alawiyyun, 41.
17 Shams al-Din, Ma’a al-alawiyyin, 36-37.
18 Among the Alawi Azhar graduates was Shaykh Yusuf al-Sarim, who became one of Latakia’s leading religious shaykhs. Although his orientation was said to be strongly Sunni, he was assassinated by the Muslim Brotherhood in August 1979.
19 On the crisis of 1973, see John J. Donohue, “La nouvelle constitution syrienne et ses détracteurs,” Travaux et jours, no. 47 (April-June 1973): 93-111; and Abbas Kelidar, “Religion and State in Syria,” Asian Affairs, n.s., 5, no. 1 (February 1974): 16-22.
20 See the resolutions of the Alawi religious shaykhs, and other statements in the pamphlet Al-Alawiyyun, shi’at ahl al-bayt (Beirut: n.p., 1972); also Al-Hayat, 4 April 1973.
21 See Fouad Ajami, The Vanished Imam: Musa al Sadr and the Shia of Lebanon (Ithaca, N.Y.: Cornell University Press, 1986).
22 Statement by Supreme Islamic Shi’ite Council, Al-Hayat, 6 July 1973.
23 Al-Hayat, Al-Nahar, 7 July 1973; see also Middle-East Intelligence Survey 1, no. 10 (15 August 1973): 77-78.
24 Statements by Alawi Youth Movement, Al-Nahar, 7 July 1973; Al-Hayat, 20 July 1973.
25 Al-Nahar, 7 July 1973.
26 Al-Nahar, 18 July 1973.
27 Al-Nahar, 6 July 1973.
28 Statement by Supreme Islamic Shi’ite Council, Al-Nahar, 6 July 1973. On the Academy, see Jacques Jomier, “Les congrès de l’Académie des Recherches Islamiques dépendant de l’Azhar,” Mélanges de l’Institut Dominicain d’Études Orientales du Caire 14 (1980): 95-148.
29 It is interesting to note that the endorsement of the SISC was reaffirmed by Shaykh Muhammad Mahdi Shams al-Din after Musa al-Sadr’s disappearance. According to Shams al-Din, “there are no religious sects within the Shi’ite community. When we speak of Alawis or Isma’ilis, this signifies regional, historical denominations based on political allegiances and not religious differences. The Ja’faris or Shi’ites are absolutely indivisible, and they all share the same belief in the Twelve Imams.” Magazine (Beirut), 15 December 1979.
30 On Shirazi, see Tariq al-thawra (Tehran), no. 25 (Rajab 1402): 10-11; Rah-e enqelab (Tehran), no. 29 (Jumada I-II 1403): 25-29, where mention is made of his view of the Alawis as brethren of the Shi’a. I owe these references to Prof. Amatzia Baram.
31 On Shirazi’s role in Lebanon and his Syrian ties, see Arabia and the Gulf, 16 May 1977.
32 Al-Alawiyyun, Shi’at ahl al-bayt, preface.
33 On the visit, see Al-Hadi (Qom) 2, no. 4 (August 1973): 182-83.
34 According to Seurat, L’État de barbarie, 89. |
In this first post, I will consider Everitt's claims concerning the centrality of God to religious belief; and his highlighting of the clash between Faith and Reason.
1. The God Hypothesis in Philosophy of Religion
Philosophy can garb itself in sophisticated verbiage, but it all boils down to asking three basic types of question:
- Ontological Questions: These are questions about existence. What exists? How did it get that way? How does it work? What are things made of?
- Ethical Questions: These are questions about appropriate behaviour. What should I do? How should I treat others?
- Epistemological Questions: These are questions about knowledge. How do I know what exists or what I should do? What tools (logic, evidence, experiment etc.) can I use to gain knowledge about the world?
Religion tries to answer these basic philosophical questions. Examine any major religion, and you will find a set of ontological claims about the origins of the universe, the mechanics of the universe and the events in human history.
Examine any major religion and you will also find a set of moral prescriptions about how one should live one's life and how one should treat other people.
Everitt argues that for most religions the God hypothesis is at the root of its ontological and ethical claims: God is the ultimate ontological entity which explains all other aspects of reality, including moral values. Thus, although religions are not just about God, it is appropriate to dedicate significant attention to the God question.
2. Where do we Begin? Faith vs Reason
So we are dealing primarily with an ontological claim (although note: the moral argument). In order to establish the truth or falsity of this claim, we need to begin with an agreed-upon epistemology. In other words, with an acceptable method of inquiry.
Well, what is the most acceptable method of inquiry? Most people might think we should use the tried-and-tested methods of logic and experiment (call this "Reason"). This is the foundation of the scientific rationalism that has revolutionised the modern world. And Everitt would certainly like it if we could agree on this method in advance.
But there is a problem. Religious philosophers and intellectuals have often objected to the use of Reason when it comes to God. They would like it if Reason could be supplemented, or perhaps even replaced, by "Faith" (which can be defined in various ways). Faith would allow us to believe in God without needing to engage with Reason.
So we cannot even begin to approach the standard philosophical arguments for the existence of God until we have dismissed Faith.
Everitt does not shirk this task.
In Chapter 1, he looks at three traditional invocations of Faith and asks what we should expect Reason to tell us about God. In Chapter 2 he examines Plantinga's argument for a "reformed epistemology".
That's it for now. In the next post, I will cover the three traditional objections to Reason. |
Hydrops & Suggested Meniere’s Diet Patient Information
Read more information about Meniere Disease
Why do hydrops symptoms occur?
The fluid-filled hearing and balance structures of the inner ear normally function independently of the body’s overall fluid balance. In the normal inner ear, the fluid is maintained at a constant volume and contains specific concentrations of sodium, potassium, chloride and other electrolytes. This fluid bathes the sensory cells of the inner ear and allows them to function normally.
With injury or degeneration of the inner ear structures, independent control may be lost. The volume and concentration of the inner ear fluid may fluctuate with changes in the body’s fluid balance. This fluctuation can cause symptoms of hydrops. Hydrops symptoms may include pressure or fullness in the ears, tinnitus (ringing in the ears), hearing loss, dizziness and imbalance.
Read more information about ringing in the ears Tinnitus
How do I control hydrops symptoms by diet?
Your inner ear fluid is influenced by certain substances in your blood and other body fluids. For example, when you eat foods that are high in salt or sugar, your blood concentrations of salt or sugar increase. This can then affect the concentration of substances in your inner ear and make your symptoms worse.
People with certain balance disorders must control the amount of salt and sugar that is added to food, and must also be aware of the hidden salts and sugars in many prepared foods. Limiting or reducing your use of caffeine and alcohol will also help reduce the symptoms of dizziness and ringing in the ears.
What are the dietary goals to manage hydrops?
The overall goal is to ensure stable fluid balance so that secondary fluctuations in the inner ear fluid can be avoided. In order to achieve this goal, the following steps are recommended:
- Distribute your food and fluid intake evenly throughout the day. Eat the same amount for each meal and do not skip meals. If you eat snacks, have them at regular times.
- Avoid foods which have a high sugar or salt content. Aim for a diet which is high in fresh fruits, vegetables and whole grains and low in canned, frozen and processed foods.
- Drink adequate amounts of fluid daily. Such fluid should include water, milk and low sugar fruit juices. Coffee, tea and carbonated soft drinks should not be counted as part of this intake. You should try and anticipate fluid loss that will occur with exercise or heat and replace these fluids before they are lost.
- You should avoid caffeine-containing fluids and foods such as coffee, tea and chocolate. Caffeine is a diuretic that causes excessive urinary loss of fluids. Caffeine also has stimulant properties that may make your symptoms worse.
- You should reduce or eliminate your alcohol intake. Alcohol can affect the inner ear directly and change the volume and concentration of the inner ear fluid. This can result in an increase in symptoms.
- You should avoid foods that contain MSG (monosodium glutamate). This is often present in pre-packaged food products and in some Asian food. It may increase symptoms in some patients.
What drugs need to be considered in regard to hydrops?
Certain drugs can make hydrops symptoms worse. In order to minimise symptoms, the following steps are recommended:
- Unless required for a medical condition, avoid aspirin and medications that contain aspirin. These medications can increase dizziness and tinnitus. Discuss with your health practitioner if necessary.
- Avoid caffeine-containing medications
- Pay attention to over the counter medications as well as some drugs prescribed by health practitioners or alternative medicine therapists for other problems. Such medications may increase your symptoms.
- Avoid cigarettes. The nicotine present in cigarettes constricts blood vessels and will decrease the blood supply to the ear. This may make symptoms worse.
Concerns or questions?
You can contact your ENT Specialist at the Melbourne ENT Group (MEG):
Your GP is also the best contact for ongoing care and concerns.
The Meniere’s Association of Australia has information resources available. These resources can be accessed as per the details below: |
The ‘Red Book’ of Spanish Birds (‘Libro Rojo de las Aves de España’) highlights illegal use of poisons as a threat to endangered species protected by the Birds and Habitats Directives. Action is required to eliminate illegal use of poisons for predator control in Spain and so minimise such threats to the country’s Red List species.
The main aim of the LIFE Nature VENENO NO project is to achieve a significant reduction in illegal poisoning incidents affecting protected species in Spain. Priority species targeted by the project include the Spanish Imperial Eagle, the Lammergeier Vulture, the Red Kite and the Egyptian Vulture (including the Canary Islands subspecies). All of these raptors are included in the annexes of the Birds and the Habitats Directives. Overall, the project aims to make important contributions to Spain’s national strategy against the use of poisoned bait in the natural environment, approved by the National Commission for Nature Protection. Its expected results are:
• Reduction in the illegal use of poisons for predator control, particularly in sites covered by Spain’s SPA network;
• Approval of regional action plans and protocols to help authorities tackle illegal use of poisons for predator control;
• Introduction and maintenance of new specialised control patrols that will serve as a model for similar species protection initiatives;
• Increased public support for the prevention of illegal poison use in predator control; and
• Greater controls on the sale of licensed toxic products. |
Welcome to the Industing Crushing Equipment Factory! [email protected]
The critical speed is calculated according to the standard equation. The Bond work index of the ball mill is given below. Read more about how the grinder and ball mill work (PDF).
We are committed to crushing, industrial grinding, ore processing and green building materials, and we provide intelligent solutions and mature supporting products.
A 911 Metallurgist article of June 19, 2015 defines the critical speed of a ball, rod, AG or SAG mill and derives the corresponding formula; in what follows, Nc denotes this critical speed.
In mineral processing and metallurgy, the critical speed of a ball mill is the speed at which centrifugal force holds the grinding charge against the shell.
The critical speed is a key parameter in ball mill design. The critical-speed formula also enters the choice of optimum ball diameter, and the optimum ball charge is typically determined with the mill running at about 85% of critical speed.
Mill power models likewise depend on mill speed. Bond's formula remains widely used; as the speed approaches 100% of critical, the charge begins to centrifuge, a condition that appears as the last term in equation 1. Bond's tests used 18 mm and 25 mm balls.
Online calculators for the critical speed are available: enter any two of the quantities and click "Calculate".
Slurry flow through the mill can be estimated from a mass balance of a salt tracer, relating solids concentration, mean residence time and mill speed.
The same calculation applies to ball, rod, AG and SAG mills. As described in a later 911 Metallurgist article (March 17, 2017), the critical speed of a rotary mill is the speed at which the grinding charge begins to centrifuge, and it is calculated from the formula above; even for a mill run at a fixed speed, the relationship remains useful for judging the operating point.
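None of the passages above actually states the equation they refer to. The usual textbook form is obtained by balancing centrifugal force against gravity at the mill shell, giving Nc = 42.3 / sqrt(D - d) revolutions per minute, with mill diameter D and ball diameter d in metres. The short sketch below applies that formula; the 3 m mill, 25 mm balls and 75% operating fraction are illustrative values only, not figures taken from this page.

```python
import math

def critical_speed_rpm(mill_diameter_m: float, ball_diameter_m: float = 0.0) -> float:
    """Speed at which a ball at the shell just centrifuges: m*w^2*R = m*g,
    with effective radius R = (D - d) / 2. Equivalent to Nc = 42.3 / sqrt(D - d)."""
    radius = (mill_diameter_m - ball_diameter_m) / 2.0
    omega = math.sqrt(9.81 / radius)            # angular speed in rad/s
    return omega * 60.0 / (2.0 * math.pi)       # convert to rev/min

nc = critical_speed_rpm(3.0, 0.025)             # a 3 m mill with 25 mm balls
print(f"critical speed = {nc:.1f} rpm")         # about 24.5 rpm
print(f"typical operating speed (75% of Nc) = {0.75 * nc:.1f} rpm")
```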
Finding Free Literature Online
If you are homeschooling or teaching and need to find literature for your students to read inexpensively, or if you are a parent who wants more material to read aloud or recommend to your children, this article will provide free online sources for literature.
Finding Free Literature to Print or Read on Computers or eReaders
Websites that offer books in PDF format are useful when you want to print books or read them using Adobe Reader. Books offered in plain text (.txt) can be opened in Microsoft Word. Books formatted as EPUB can be read on many eReaders. Some sites that offer these choices include:
Located at http://archive.org, the Internet Archive is a non-profit digital library with about 2.5 million items. Besides its literature collection, the Internet Archive offers access to NASA images, a media collection, and the Wayback Machine for viewing archived versions of web pages of the past. The main search categories are moving images, texts, audio, and software. The audio collection includes some audio books and poetry recordings.
ManyBooks.net
With over 29,000 titles, ManyBooks.net is right where its name says it is. You can search by author, title, new titles, recommended, genre, and languages, of which there are 36, including Icelandic, Chinese, Sanskrit, and Middle English.
Open Library
The Open Library at http://openlibrary.org has over a million titles that are searchable by subject and author, as well as a 10,000-book lending library of eBooks that may be borrowed for 2 weeks (single copy).
Project Gutenberg
A volunteer effort to create and distribute eBooks of various types, Project Gutenberg, founded by Michael S. Hart in 1971 is considered the oldest digital library. It had more than 39,000 items in November 2011.
The contents of the collection--to which about 50 books are added each week--are primarily works of Western literature. Much of it is novels, short stories, poetry, and drama. There are other genres and some audio and music notation files.
From the home page at http://www.gutenberg.org you can see a shortlist of the latest editions, search, or browse books by categories. You can search the Education Bookshelf or the Children's Bookshelf, for example. Books are also available in German, French, and Portuguese.
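Because Project Gutenberg serves its files over ordinary HTTP, a book can also be fetched programmatically with a few lines of standard-library Python. The ebook number and file path below are illustrative only; copy the actual plain-text download link from the book's page on gutenberg.org, since file naming varies from title to title.

```python
import urllib.request

# Illustrative example: ebook number 1342 and the /cache/epub/ path are assumptions.
# Use the plain-text link shown on the book's own gutenberg.org page.
url = "https://www.gutenberg.org/cache/epub/1342/pg1342.txt"

with urllib.request.urlopen(url) as response:
    text = response.read().decode("utf-8")

with open("book.txt", "w", encoding="utf-8") as out:
    out.write(text)

print(f"Saved {len(text.splitlines()):,} lines of text to book.txt")
```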
Books can be found in a wide variety of formats, including HTML, EPUB, Kindle, PDF, Plucker, QiOO Mobile, and Plain Text UTF-8. Some are also offered as audio files.
Free books from Makers of eReaders
- Amazon makes free Kindle books available here, and since you can put a Kindle reader on your iPad, Android, Blackberry, iPhone, Windows Phone 7, or computer, you don't have to have a Kindle to read them.
- Apple makes free iBooks available through iBooks > Store on an iDevice or through iTunes on your computer.
- Like Amazon, Kobo makes free reader apps for a large number of devices, besides having its own readers, and you can find free Kobo books available here.
What Types of Books are Available for Free
Many books that are available are in the public domain. A large number are out of copyright, having been written prior to 1923. Authors who have to sell their books to earn a living are unlikely to make their books available for free online. However, you may be able to borrow print editions or eBook versions from your public library or school library, if you are affiliated with a school.
Transferring Digital Books
If you intend to transfer books to a device such as an eReader, you may need a cable to connect it to your computer, depending on where you get your material from and how you download it (and to which device).
Printing Digital Books
In some cases, you may wish to print one or more pages of a free book you find online. For example, perhaps you want to place a recipe from a digital eBook into your recipe file in the kitchen. If you get a PDF or .txt version, you can open the first in Adobe Reader (which is free) and the second in any word processing program and print the required pages from there.
Although this article focuses on finding free digital books online, you and your students can also make digital books for teaching, learning, and sharing experiences. You can do this in a variety of ways. You can create a word processing file (yes, that's already a digital book). You can save that file to PDF. If you have a Mac and use Pages, you can export to EPUB, as you can with the free programs Calibre and Sigil.
Via Alex Tabarrok, Quanta magazine reports on a decades-long effort that has recently produced a radically simplified way of calculating quantum interactions. Instead of adding up millions or billions of terms, you simply sum the volumes of the pieces of a multi-dimensional jewel-like object called a “positive Grassmannian.” Its inventors call this object an amplituhedron:
The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry….Encoded in its volume are the most basic features of reality that can be calculated, “scattering amplitudes,” which represent the likelihood that a certain set of particles will turn into certain other particles upon colliding.
….The 60-year-old method for calculating scattering amplitudes — a major innovation at the time — was pioneered by the Nobel Prize-winning physicist Richard Feynman….“The number of Feynman diagrams is so explosively large that even computations of really simple processes weren’t done until the age of computers,” Bourjaily said….In 1986, it became apparent that Feynman’s apparatus was a Rube Goldberg machine.
….Arkani-Hamed and Trnka discovered that the scattering amplitude equals the volume of a brand-new mathematical object — the amplituhedron. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron. The pieces of the positive Grassmannian that were being calculated with twistor diagrams and then added together by hand were building blocks that fit together inside this jewel, just as triangles fit together to form a polygon.
….“They are very powerful calculational techniques, but they are also incredibly suggestive,” Skinner said. “They suggest that thinking in terms of space-time was not the right way of going about this.”
Hey, who needs space and time anyway? Jewels are the heart of the universe, just like the new-agers have been telling us.
Ahem. Anyway, read the whole piece if you enjoy this kind of thing. Aside from making calculations easier, it’s possible that the nature of the amplituhedron will provide new insights into what the fundamental laws of the universe really are. Or it might turn out to be a red herring. Who knows? But the details are a helluva lot more interesting than whatever childish machinations are the flavor of the day in the House Republican caucus. |
Mexico and the United States share a border and a history. Celebrate Cinco de Mayo by introducing your students to their neighbors to the south!
Cinco de Mayo commemorates the 1862 Battle of Puebla, in which a few thousand ill-equipped Mexican citizens defeated a much larger army of highly trained French soldiers. Although the victory did not result in the immediate end of French occupation, many historians believe it indirectly affected the outcome of the American Civil War and led to Mexico's eventual independence. Today, people in both the United States and Mexico celebrate Cinco de Mayo -- the Fifth of May -- as a day of freedom and goodwill.
A brief description of each activity appears below. Click any headline for a complete teaching resource!
Where in the World Is Mexico?
Students create puzzles of a map of Mexico. (Grades Pre-K-2, 3-5)
Discover Mexico's Secrets
Students follow CIA procedures to discover Mexico's secrets, and then report on their findings. (Grades 3-5, 6-8)
The Beat of Mexico
Students learn about mariachi music and create their own rhythm instruments. (Grades Pre-K-2, 3-5)
A Gallery of Mexican Art
Students tour a virtual gallery of Mexican Art and create a glossary of art terms. (Grades 6-8, 9-12)
Mexico and Its Neighbor to the North
Students stage a debate about immigration between the United States and Mexico. (Grades 9-12)
Copyright © 2014 Education World |
Pride and Prejudice
If you think your family is embarrassing, try having a satirical father, an idiot mother, two hopeless flirts for youngest sisters, and a nerd for a middle sister (and not the cool kind of nerd). Yeah. Lizzy's motto is basically "mo' sisters, mo' problems." And in the world of Pride and Prejudice, your family's behavior reflects on you. If your sister runs off with the high school dropout who owes money to some really unsavory characters, it reflects badly on you. But if she dates the all-star quarterback, you're in for some reflected glory—and you might even end up dating on the b-string.
Questions About Family
- Is Mrs. Bennet a good mother? Yes? No? Sort of? Why? What are her responsibilities toward her children?
- Is Mr. Bennet a good father? What does "good father" mean in this context? What are his responsibilities toward his children?
- Compare the Bennets to other parenting models in the novel. How do they stack up against the Lucases? Against Lady de Bourgh? Against what we know of Darcy's parents?
- How much do the actions of parents ripple through the lives of their children? How much do the characters expect them to? Do young characters think about the effects their parents are having on them? Why or why not?
Chew on This
Austen doesn't care how much Mr. and Mrs. Bennet love their kids; they're still bad parents.
In the novel, young people are influenced by their friends much more than by their parents. |
Where we live, learn, work and play matters to our health. And Collin County is the healthiest in Texas, according to a survey that examined nearly every county in the United States.
The Robert Wood Johnson Foundation and the University of Wisconsin’s Population Health Institute recently released the third annual County Health Rankings.
Collin County ranked No. 1 overall out of 221 counties in Texas – meaning that Collin County is the healthiest in Texas.
To learn more – or see the full rankings for Texas and across the country – visit www.countyhealthrankings.org.
Health outcomes in County Health Rankings represent how healthy a county is. The survey measured two types of health outcomes: how long people live (mortality) and how healthy people feel while alive (morbidity).
Health factors in County Health Rankings represent what influences the health of a county. Researchers measured four types of health factors: health behaviors, clinical care, social and economic, and physical environment factors. In turn, each of these factors is based on several measures.
Among the specific measures used to calculate the rankings were: adult smoking, obesity, binge drinking, access to primary care physicians, rates of high school graduation, rate of violent crime, air pollution levels, unemployment rates and number of children living in poverty.
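As a rough illustration of how a composite ranking like this can be assembled, the sketch below standardises a few measures across counties, weights them, and sorts. The counties other than Collin, the figures, the choice of measures and the weights are all invented for illustration; they are not the actual County Health Rankings data or methodology.

```python
from statistics import mean, stdev

# Hypothetical measure values; only "Collin" is a real county name here.
counties = {
    "Collin":  {"adult_smoking": 0.12, "unemployment": 0.06, "hs_graduation": 0.95},
    "CountyB": {"adult_smoking": 0.22, "unemployment": 0.09, "hs_graduation": 0.80},
    "CountyC": {"adult_smoking": 0.18, "unemployment": 0.07, "hs_graduation": 0.88},
}
# Illustrative weights: positive means "higher is worse", negative means "higher is better".
weights = {"adult_smoking": 0.4, "unemployment": 0.3, "hs_graduation": -0.3}

def zscores(measure):
    values = [c[measure] for c in counties.values()]
    mu, sd = mean(values), stdev(values)
    return {name: (c[measure] - mu) / sd for name, c in counties.items()}

z = {m: zscores(m) for m in weights}
composite = {name: sum(w * z[m][name] for m, w in weights.items()) for name in counties}

# Lower composite score = healthier county.
for rank, name in enumerate(sorted(composite, key=composite.get), start=1):
    print(rank, name, round(composite[name], 2))
```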
The goal of the rankings is to educate counties about where they are doing well and where improvement is needed to ensure that every community is as healthy as possible.
The only demerit Collin County received was a poor showing in physical environment. That wasn’t tied to air pollution, the number of ozone days or a lack of access to recreational facilities. It was tied to Collin County residents’ access to – not indulgence in – fast food restaurants.
Here are the top 10 Texas counties, as ranked by County Health Rankings for health outcomes and health factors.
Health Outcomes By County:
- Fort Bend
Health Factors By County:
- Fort Bend
- Jeff Davis |
Magnetic Nanoparticles Show Promise for Combating Human Cancer
Posted February 1, 2010 | Atlanta, GA
Scientists at Georgia Tech and the Ovarian Cancer Institute have further developed a potential new treatment against cancer that uses magnetic nanoparticles to attach to cancer cells, removing them from the body. The treatment, tested in mice in 2008, has now been tested using samples from human cancer patients. The results appear online in the journal Nanomedicine.
Nanoparticles, in brown, attach themselves to cancer cells, in violet, from the human abdominal cavity. Credit: Ken Scarberry/Georgia Tech
“We are primarily interested in developing an effective method to reduce the spread of ovarian cancer cells to other organs,” said John McDonald, professor at the School of Biology at the Georgia Institute of Technology and chief research scientist at the Ovarian Cancer Institute.
The idea came to the research team from the work of Ken Scarberry, then a Ph.D. student at Tech. Scarberry originally conceived of the idea as a means of extracting viruses and virally infected cells. At his advisor’s suggestion Scarberry began looking at how the system could work with cancer cells.
He published his first paper on the subject in the Journal of the American Chemical Society in July 2008. In that paper he and McDonald showed that by giving the cancer cells of the mice a fluorescent green tag and staining the magnetic nanoparticles red, they were able to apply a magnet and move the green cancer cells to the abdominal region.
Now McDonald and Scarberry, currently a post-doc in McDonald’s lab, have shown that the magnetic technique works with human cancer cells.
“Often, the lethality of cancers is not attributed to the original tumor but to the establishment of distant tumors by cancer cells that exfoliate from the primary tumor,” said Scarberry. “Circulating tumor cells can implant at distant sites and give rise to secondary tumors. Our technique is designed to filter the peritoneal fluid or blood and remove these free floating cancer cells, which should increase longevity by preventing the continued metastatic spread of the cancer.”
In tests, they showed that their technique worked as well at capturing cancer cells from human patient samples as it did previously in mice. The next step is to test how well the technique can increase survivorship in live animal models. If that goes well, they will then test it with humans.
NLP and the T.A.T.E Model
The T.A.T.E. Model is great for identifying what accounts for success or failure in how we do things. We use it in our NLP Practitioner courses and it’s simply an adaptation of the famous T.O.T.E. model which was published over fifty years ago by Miller, Galanter and Pribram.
Simply put, the TATE enables you to identify 4 elements in how someone does something, i.e. in their strategy or programme for doing something.
- Trigger: How do you know when to begin doing something – and what is your goal or objective?
- Action: Once you’ve begun, what exactly do you do – and how do you assess your progress towards your objective?
- Target: How clearly have you defined this objective – so that you’ll know when you’ve reached it?
- Exit: What do you do when you’re finished?
A walk in the park
Let’s apply the TATE to examine your strategy for going for a refreshing walk in the park.
Trigger: what lets you know this is a good idea – and when, where, and for how long will you walk?
I’m feeling sluggish. I think I’ll go for a walk in the park. I’ll make it a refreshing rather than a very leisurely walk.
Action: How do you do it? When you’re walking in the park, how do you do it? Fast? Slow? How are you assessing progress against your Target?
I’m in the park and I’m walking and it is refreshing. So I’m doing fine.
Target: When? (I’ll leave now.) Where? (Richmond Park in Greater London.) For how long? (About 30 minutes.) How? (Brisk rather than leisurely pace.)
Exit: return home and do something.
This outline is pretty straight-forward and obvious – and not particularly enlightening. So let’s apply it to an important everyday event…
Eating and the TATE
Let’s imagine our old friends Jack and Jill have different strategies for eating.
Trigger: how do you know when to eat?
Jack: Whenever I feel hungry, or bored, or watch TV, or think of a favourite food, or pass a shop selling food which I like.
Jill: When I feel hungry.
Action: What do you do when you begin the activity? How do you assess progress?
Jack: It tastes so good so I just wolf it down and search for more – and virtually any food will do at this stage. I know I’m done when I cannot eat any more – because I’m so full.
Jill: I like to savour my food. So I eat fairly slowly and with attention to the taste and to how I feel.
Target: What’s your objective?
Jack: Once I begin there’s no stopping me – I have to clean my plate, for a start. Then I’ll often see if there’s anything nice in the fridge or the cupboard.
Jill: Once I recognise that I am no longer hungry I’ll stop eating – even if there is still food remaining on the plate.
Exit: What do you do when you’re done?
Jack: Usually I’m so full the best I can manage is to get to the settee and relax with a few cans of lager.
Jill: I like to move around doing things when I’ve eaten – helps the food go down – or I’ll sometimes go for a walk.
The results of these two strategies
The TATE model enables us to unpack or model someone’s strategy for doing something. In the case of Jack and Jill the results of their two different strategies, over time, will be dramatic. At each of the four stages in the TATE we can identify critical differences but let’s consider just two of these:
(1) The difference in their originating Triggers:
- Jack’s eating is triggered by anything or nothing
- Jill’s trigger for eating is feeling hungry.
(2) The difference in their Targets:
- Jack’s Target, or signal to stop eating, is when he is full – and certainly no food is left on his plate.
- Jill’s Target is quite simple (and is not shared by many of us), i.e. to no longer feel hungry.
The application of the TATE
One benefit of the TATE is that it enables us to identify areas for improvement in our strategy for doing just about anything, at each of the four stages in the sequence. And we can use it to model people who do something particularly well so that we can take bits of their successful strategy and use these to improve how we do something.
For example, if Jack were to adopt even some of the elements of Jill’s strategy, the results over some months, in terms of health, weight, body size, and self-esteem, could be quite significant…
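For readers who like to see a strategy written out explicitly, the TATE sequence maps naturally onto a small data structure: a trigger test, a repeated action, a target test, and an exit step. The sketch below is only an illustration of that idea; the hunger scale, thresholds and step sizes are invented and are not part of the model.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, object]

@dataclass
class TateStrategy:
    trigger: Callable[[State], bool]   # Trigger: when do I begin?
    action: Callable[[State], None]    # Action: what do I repeatedly do?
    target: Callable[[State], bool]    # Target: how do I know I'm done?
    exit: Callable[[State], None]      # Exit: what do I do afterwards?

    def run(self, state: State, max_steps: int = 100) -> State:
        if self.trigger(state):
            steps = 0
            while not self.target(state) and steps < max_steps:
                self.action(state)
                steps += 1
            self.exit(state)
        return state

# Jill's eating strategy: begin only when genuinely hungry, eat slowly,
# stop when no longer hungry, then move around.
jill = TateStrategy(
    trigger=lambda s: s["hunger"] >= 5,
    action=lambda s: s.update(hunger=s["hunger"] - 2),
    target=lambda s: s["hunger"] <= 1,
    exit=lambda s: s.update(activity="a walk"),
)

print(jill.run({"hunger": 8, "activity": "sitting"}))
# -> {'hunger': 0, 'activity': 'a walk'}
```

Jack's strategy would differ at every slot: a trigger that fires on almost anything, an action with no progress check, and a target of a clean plate and a full stomach.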
List all the equipment and materials you required to carry out your experiment.
List the steps you planned to follow when you designed your experiment. Then note any steps you had to modify or rearrange to achieve your goal when you actually performed the experiment.
Your FIRST STEP: Be sure to check with the adult who is supervising your lab work. Go over your proposed procedure for safety (reread the safety procedures, if necessary!), and get permission to use any of the materials you will need.
Present your data in an organized way and show the relationships you discovered graphically, if possible. You may use a spreadsheet to draw any charts or graphs you think are useful.
Write up your report and post it to our Moodle Forum by noon the day before our last chat session so that your fellow classmates can review your work and prepare for our discussion.
© 2005 - 2021 This course is offered through Scholars Online, a non-profit organization supporting classical Christian education through online courses. Permission to copy course content (lessons and labs) for personal study is granted to students currently or formerly enrolled in the course through Scholars Online. Reproduction for any other purpose, without the express written consent of the author, is prohibited. |
Published in 2002, Deforesting the Earth was a landmark study of the history and geography of deforestation. Now available as an abridgment, this edition retains the breadth of the original while rendering its arguments accessible to a general readership.
Deforestation – the thinning, changing, and wholesale clearing of forests for fuel, shelter, and agriculture – is among the most important ways humans have transformed the environment. Surveying ten thousand years to trace human-induced deforestation's effect on economies, societies, and landscapes around the world, Deforesting the Earth is the preeminent history of this process and its consequences.
Beginning with the return of the forests after the ice age to Europe, North America, and the tropics, Michael Williams traces the impact of human-set fires for gathering and hunting, land clearing for agriculture, and other activities from the Paleolithic age through the classical world and the medieval period. He then focuses on forest clearing both within Europe and by European imperialists and industrialists abroad, from the 1500s to the early 1900s, in such places as the New World, India, and Latin America, and considers indigenous clearing in India, China, and Japan. Finally, he covers the current alarming escalation of deforestation, with our ever-increasing human population placing a potentially unsupportable burden on the world's forests.
List of Illustrations
List of Tables
PART I - CLEARING IN THE DEEP PAST
1. The Return of the Forest
2. Fire and Foragers
3. The First Farmers
4. The Classical World
5. The Medieval World
PART II - REACHING OUT: EUROPE AND THE WIDER WORLD
6. Driving Forces and Cultural Climates, 1500–1750
7. Clearing in Europe, 1500–1750
8. The Wider World, 1500–1750
9. Driving Forces and Cultural Climates, 1750–1900
10. Clearing in the Temperate World, 1750–1920
11. Clearing in the Tropical World, 1750–1920
PART III - THE GLOBAL FOREST
12. Scares and Solutions, 1900–1944
13. The Great Onslaught, 1945–95: Dimensions of Change
14. The Great Onslaught, 1945–95: Patterns of Change
Epilogue: Backward and Forward Glances
List of Measures, Abbreviations, and Acronyms
Michael Williams is professor of geography and the environment at the University of Oxford and a Fellow of Oriel College. He is the author, most recently, of Americans and Their Forests: A Historical Geography as well as the editor of Wetlands: A Threatened Landscape and coeditor of A Century of British Geography.
"Williams's tome is a planetary overview of the impact of humanity on trees since the neolithic era, which has varied from disastrous to frighteningly catastrophic, and is worsening by the second. (If humans are intelligent, why didn't they understand sooner that a strict limit on their reproduction was the fulcrum of all balances?) Williams does the big view with statistics (how much timber was required to heat Rome's baths and effects thereof on the Mediterranean ecostructure) and the smaller, precisely placed detail – the dissatisfaction with the Chinese mandarinate felt by those whose fir trees were burnt in quantities to produce the soot for the bureaucrats' considerable ink requirements."
– Vera Rule, Guardian
"My first shallow reaction was that at 543 pages this is a big abridgement. But I soon realised the astounding breadth and depth of contained information and understood why it could not possibly be smaller. The book relays the complete story of forests the world over from the Holocene to the present, and somehow Michael Williams has managed to keep the story cogent and concise [...] The sheer breadth of information in this book makes it a valuable addition to any bookshelf belonging to people that want to understand why the world's forests look the way they do."
– Robert M. Ewers, Environmental Conservation
"Anyone who doubts the power of history to inform the present should read this closely argued and sweeping survey. This is rich, timely, and sobering historical fare written in a measured, non-sensationalist style by a master of his craft. One only hopes (almost certainly vainly) that today's policymakers take its lessons to heart."
– Brian Fagan, Los Angeles Times |
They're not underwater craters, and they're definitely not from aliens. That's according to Danish scientists who say they've finally figured out what's causing ocean-floor crop circles known as fairy rings.
According to LiveScience, these mysterious circles were noticed off the coast of Denmark. The unusual dark rings are basically circles of grass around a patch of dirt. (Via LiveScience)
Tourists in Denmark first took pictures of the rings back in 2008. The International Business Times points out many on the Internet started trying to come up with explanations for why they were there after pictures surfaced.
Scientists previously found the material forming a circle was eelgrass, but two scientists from the University of Southern Denmark and the University of Copenhagen took it upon themselves to get an answer to the mystery of why the grass made circle shapes.
Turns out the answer isn't nearly as exciting as the alien landing sites and World War II bomb craters some had reportedly speculated about. The scientists found the phenomenon is caused by something else that's already within the water.
In a statement, the scientists explained: "We have studied the mud that accumulates among the eelgrass plants and we can see that the mud contains a substance that is toxic to eelgrass."
They add that when the eelgrass dies, it begins withering away from the center of a grass patch, and that ends up creating those circles.
The scientists say the seagrass is also found in other parts of the world, but add that researchers are working to make sure the grasses don't disappear in the future from possible threats to marine life.
The Maya king and his nobles lived an easy life. They had their every need provided for by the commoners. They were even carried from place to place in litters by slaves.
Life as a Maya Commoner
Life as a Maya commoner was full of hard work. The typical peasant worked as a farmer. At the start of the day, the wife would get up early and start a fire for cooking. Then the husband would leave to go work at the fields. After a hard day working at the fields, the farmer would come home and bathe. Bathing was an important part of the day for all the Maya people. The men spent evenings working on crafts such as tools, while the women wove cloth to make clothing.
What were their clothes like?
The clothing worn by the Maya depended on the region they lived in and their social status. The wealthy wore colorful clothing made from animal skins. They also wore feather headdresses and fancy jewelry.
Commoners wore simpler clothing. The men often wore loincloths while the women wore long skirts. Both men and women would use a blanket called a manta to wrap around their shoulders when it was cold.
Clothing for a Maya woman by Daderot
Men and women both wore their hair long. Once they were married, both men and women often got tattoos.
What did the Maya eat?
The most important food that the Maya ate was maize, which is better known as corn. They made all types of food from maize including tortillas, porridge, and even drinks. Other staple crops included beans, squash, and chilies. For meat the Maya ate fish, deer, ducks, and turkey.
The Maya introduced the world to a number of new foods. Probably the most interesting was chocolate from the cacao tree. The Maya considered chocolate to be a gift from the gods and used cacao seeds as money. Other new foods included tomatoes, sweet potatoes, black beans, and papaya.
What were their homes like?
The nobles and kings lived inside the city in large palaces made from stone. The commoners lived in huts outside the city near their farms. The huts were usually made from mud, but were sometimes made from stone. They were single room homes with thatched roofs. In many areas the Maya built their huts on top of platforms made from dirt or stone in order to protect them from floods.
Although much of the Maya life was spent doing hard work, they did enjoy entertainment as well. A lot of their entertainment was centered around religious ceremonies. They played music, danced, and played games such as the Maya ball game.
Maya ball court by Ken Thomas
Interesting Facts about Maya Daily Life
The Maya considered crossed eyes, flat foreheads, and big noses to be beautiful features. In some areas they would use makeup to try and make their noses appear large.
The Maya loved to wear large hats and headdresses. The more important the person, the taller the hat they wore.
The farmers of the Maya did not have metal tools or beasts of burden to help them farm. They used simple stone tools and did the work by hand.
Sometimes the ball games that the Maya played were part of a religious ceremony. The losers were sacrificed to the gods.
The Maya had hundreds of different dances. Many of these dances are still practiced today. Some examples of the dances include the Snake dance, the Monkey dance, and the Dance of the Stag. |
According to a 2006 article in USA Today, 64% of Americans admit to using the f-word at least occasionally. (Many of the other 36% were probably lying.) The taboo around the f-word probably exceeds how offended people are by it, but our self-censorship persists — you can’t say it on broadcast television and those of us who write family-friendly email newsletters rarely use it (except for when they’re also the names of towns in Austria). The same is probably true for other similar words — our reluctance to using them in polite company is nice, as there’s no reason to make someone unnecessarily uncomfortable. But it is probably a bit on the extreme side.
The title above, on the other hand, doesn’t contain any swears or expletives in it. But it may make you a bit squeamish. And if it does, don’t worry — you’re not alone. Except when also discussing cake, the word “moist” can be off-putting — according to a 2016 study by Paul H. Thibodeau, a professor of psychology at Oberlin College, 10% to 20% of the population often dislikes the word. That’s far and away worse than any non-expletive. It probably isn’t an exaggeration to say that “moist” has unfavorables which compare to other words which are typically adorned with asterisks or thinly-veiled euphemisms — even though we’d never think of calling it the “m-word.”
The phenomenon is called “word aversion” per the New York Times. At first, many experts thought that the aversion was to what the word sounds or looks like, or something triggered by our mouth muscles as we form the word, but that turns out not to be the case. According to Nautilus, “similar-sounding words, such as ‘foist,’ did not generate the same reaction,” and “because those words also put your facial muscles in similar positions, we can also discount the disgust-facial expression theory.” Rather, as Mental Floss explains, “people found the word moist most disgusting when it was accompanied by unrelated, positive words like ‘paradise’, or when it was accompanied by [words that would give it a PG-13 meaning]. By contrast, when it accompanied food words (like ‘cake’), people weren’t as bothered by it.” And, quite tellingly, “the more disgust [respondents] associated with bodily functions, the less they liked [the word] ‘moist’.”
Further, word aversion doesn’t end with the m-word. Slate spoke with Natasha Fedotova, a PhD candidate at the University of Pennsylvania, discussing her research into the power that words have. Specifically, Fedotova was investigating “the extent to which individuals connect the properties of an especially repellent thing to the word that represents it,” something which understandably leads to word aversion. Per Slate: “If you serve people who are grossed out by rats Big Macs on plates that have the word ‘rat’ written on them, some people will be less likely to want to eat the portion of the burger that touched the word.” The words take on magical properties; per Fedotova, it is “as though they can transfer negative properties through physical contact.”
And to add even more magic, it turns out that such negative properties are contagious. Mic spoke with University of Chicago linguistics professor Jason Riggle, who noted that disgust can spread from person to person: “if you drink too much tequila, the thought of tequila will make you sick. But you can also observe someone else getting sick from something and develop a similar disgust response. Disgust is contagious in that way.” Word-induced disgust, it turns out, is no different. The more one hears that “moist” is a dirty, vile word, Riggle explained, the more likely that person is to conclude that the masses are correct.
So if, after reading this, you are now part of the ten to twenty percent who really don’t like the word “moist,” well, sorry about that.
From the Archives: Turning Off Niagara Falls: The word “moist” makes a surprising (and not at all disgusting) appearance.
Definition of cartoon
1 : a preparatory design, drawing, or painting (as for a fresco)
2 a : a drawing intended as satire, caricature, or humor • a political cartoon
  b : comic strip
3 : animated cartoon
4 : a ludicrously simplistic, unrealistic, or one-dimensional portrayal or version • the film's villain is an entertaining cartoon
cartoonish \-ˈtü-nish\ adjective
cartoonist \-ˈtü-nist\ noun
cartoonlike \-ˈtün-ˌlīk\ adjective
cartoony \-ˈtü-nē\ adjective
Examples of cartoon in a Sentence
She enjoys reading the cartoons in the Sunday paper.
The kids are watching cartoons.
Recent Examples of cartoon from the Web
Meanwhile, features such as The Home Forum, Points of Progress, photo spreads, People Making a Difference, editorial cartoons, and more will appear almost exclusively in the Weekly.
Sometime in the middle of the night, the 13-year-old had climbed out of her bedroom window taking along her phone and her favorite blue cartoon blanket.
The instrument bears a devilish looking, cartoon-like image of a wolf’s face — eyes menacingly narrowed, ears pricked up, red tongue hanging out, fangs at the ready.
Damon Albarn has put together his virtual band with human stand-ins for cartoon characters Murdoc, 2D, Russel and Noodle.
His Royals mural features one of his big cartoon bunnies carrying a baseball bat and a Sluggerrr doll.
There were black-and-white photos of towering waves with surfers zipping across their glassy faces, cartoons, sketches, a cluster of ads from surfboard makers and shapers, and even a short piece of fiction.
Goodwin is scheduled to testify in the case next month, in what would mark a rare appearance for the Scottish banker who’s become something of a cartoon villain for many of the U.K. investors who sued.
No successful television show, cartoon, or film franchise is safe from getting a plasticky new spin for the twenty-first century.
Origin and Etymology of cartoon
Italian cartone pasteboard, cartoon, augmentative of carta leaf of paper — more at card
First Known Use: 1671
CARTOON Defined for English Language Learners
Definition of cartoon for English Language Learners
: a drawing in a newspaper or magazine intended as a humorous comment on something
: a series of drawings that tell a story
: a film or television show made by photographing a series of drawings : an animated film or television show
There is an ongoing drama in the Saturnian ring system that causes small moons to be born and then destroyed on time scales that are but an eyeblink in the history of the solar system. SETI Institute scientists Robert French and Mark Showalter have examined photos made by NASA's Cassini spacecraft and compared them to 30 year-old pictures made by the Voyager mission. They find that there is a marked difference in the appearance of one of the rings, even over this cosmologically short interval, a difference that can be explained by the brief strut and fret of small moons.
"The F ring is a narrow, lumpy feature made entirely of water ice that lies just outside the broad, luminous rings A, B, and C," notes French. "It has bright spots. But it has fundamentally changed its appearance since the time of Voyager. Today, there are fewer of the very bright lumps."
The bright spots come and go over the course of hours or days, a mystery that the two SETI Institute astronomers think they have solved.
"We believe the most luminous knots occur when tiny moons, no bigger than a large mountain, collide with the densest part of the ring," says French. "These moons are small enough to coalesce and then break apart in short order."
The F ring is at a special place in the ring system, at a distance known as the Roche limit, named for French astronomer Edouard Roche who first pointed out that if a moon orbits too close to a planet, the difference in gravitational tug on its near and far side can tear it apart. This happens at a distance dependent on the mass of the planet, and in the case of Saturn, happens to be at the location of the F ring. Consequently, material here is caught between the yin and yang of forming small moons, and having them pulled apart. The moons in question are typically no more than 3 miles (5 km) in size, and consequently can come together quickly.
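For readers who want the quantitative version: the article does not give the formula, but the Roche limit it describes is usually written, in the standard textbook rigid-body approximation, as

\[
d \;\approx\; R_p \left( \frac{2\,\rho_p}{\rho_m} \right)^{1/3} \;\approx\; 1.26\, R_p \left( \frac{\rho_p}{\rho_m} \right)^{1/3},
\]

where \(R_p\) and \(\rho_p\) are the planet's radius and mean density and \(\rho_m\) is the density of the orbiting body; for a fluid, strengthless body the coefficient rises to roughly 2.44. Inside this distance, the difference in the planet's gravitational pull across the body exceeds the body's own self-gravity, which is why material near the F ring can form only loosely bound moonlets that are easily torn apart again.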
This chaotic region is given additional stir by Prometheus, a moon that's roughly 60 miles (100 km) in size that orbits just inside the F ring. Every 17 years, Prometheus aligns with the F ring in a way that emphasizes its gravitational influence on the ring's particles, precipitating the formation of the mini-moons, or moonlets.
"These newborn moonlets will repeatedly crash through the F ring, like bumper cars, producing bright clumps as they careen through lanes of material," says Showalter. "But this is self-destructive behavior, and the moons – being just at the Roche limit – are barely stable and quickly fragmented."
This scenario can explain the rapid variation in the number of bright clumps in the F ring, but is it true? If the periodic influence of Prometheus is causing the waxing and waning of the clumps, then there should be an increase in their prevalence over the next few years, a prediction that the astronomers will be checking with Cassini data.
In addition to the drama of moons that come and go over less than a human lifetime, studies of the ring system give insight into how solar systems in general are built.
"The sort of processes going on around Saturn are very similar to those that took place here 4.6 billion years ago, when the Earth and the other large planets were formed," notes French. "It's an important process to understand."
This research was published in the online edition of the journal Icarus on July 15, 2014.
Robert S. French, Shannon K. Hicks, Mark R. Showalter, Adrienne K. Antonsen, Douglas R. Packard, "Analysis of clumps in Saturn's F ring from Voyager and Cassini," Icarus, Volume 241, October 2014, Pages 200-220, ISSN 0019-1035, dx.doi.org/10.1016/j.icarus.2014.06.035 . Preprint: arxiv.org/abs/1408.2548 |
In black and white photography, the gradient scale happens to be one of the most important parts of the developing process. The goal of the photographer is not just to take a picture, but to balance the brightness and darkness of the photograph to make a variety of shades of gray. In society, white people work hard to find the grays, or the “acceptable blacks,” (not “too” black, but instead gray enough to be recognized, not necessarily seen) to make their world more diverse.
In Ralph Ellison’s Invisible Man, the narrator takes the readers on a journey of his experiences as a black man in the early 1900s. The places he visits and people he meets influence his realization of his invisibility. A photo gradient scale becomes a representation of how white and Black people play a key role in the process of assimilation in society, with whites at the beginning of the scale and Blacks at the end.
At the top of the gradient scale is white. Bright. Pure. A standard. In society, white people are seen; they stand out in ways others fail to do so. During slavery, white people brought the Africans to America, got rid of the Native Americans and made America a Christian country. White males were then deemed the “Founding Fathers” of this country. Therefore, everything done by others positively uplifts the reputation of white Americans in society. Everything they do, have done, and will do is considered to be the “right” way. What those of other races contribute to the country is ignored, making all, especially those who are Black, invisible.
The series of clips above gives the viewer a history lesson about the prominent African-Americans who were erased from history. Famous pictures, video clips, and art featuring mainly whites are examined by television host Glenn Beck, who challenges one to take a deeper look into how society erases Blacks from history and replaces them with only the supreme whites. Beck interacts with the audience to see if they can understand the problem with trying to erase critical parts of history. He also interviews experts to understand why people tend to do such, and why others, especially African-Americans, allow it to happen.
In Invisible Man, the narrator’s interaction with Mr. Norton was a clear representation of such. He tells the narrator, ‘”Your great founder…had tens of thousands of lives dependent upon his ideas and upon his actions. What he did affected your whole race. In a way, he had the power of a king, or in a sense, of a god” (Ellison, 45). Superiority is key in the white community. Everything that one, from any race, does impacts their lives and their destiny. White Americans remain at the beginning of the gradient scale and as a result, blacks are considered inferior.
The color black is placed at the end of the gradient scale because of the reputation of the color: less than, dirty, invisible. Their failure to realize they are invisible and adjust their identity keeps them at the end of the scale, creating a name for themselves that most look down upon. In the book, the narrator states that all of the students at the college “hated the black-belt people, the “peasants,” during those days!…They did everything it seemed to pull us down” (Ellison, 47). Black people are considered to be under-performing and poor and in return are overlooked and ignored. They fail to realize that their many contributions to this country and value are not seen by others. Without awareness of their invisibility to others, the mass of Black people have continued to keep their race from progressing and have led others to learn to assimilate to the white world.
The concept of the gradient scale can further be related to Paul Beatty’s novel The White Boy Shuffle. Similarly to the protagonist in Invisible Man, the main character, Gunnar Kaufman, struggles to find his Black identity during his experiences in both Black and white communities. What is most interesting about the novel’s relation to the gradient scale is that the two colors (black and white) are directly elaborated upon in the novel.
The way white was described was similar to the way it was portrayed in Invisible Man, but more subtle. It was directly stated that “White was the expulsion of colors encumbered by self-awareness and pigment,” highlighting that when white is involved, there are no other colors that are a part of it (Beatty, 35). That directly relates to how the “Founding Fathers” are the only people who receive credit for the development of this nation, and nobody else.
If one were to read through the rest of the white color description, they would notice that “White Gunnar” and his “white ways” are described.
White Gunnar ran teasingly tight circles around the recovering hollowed-out Narc Anon addicts till they spun like dreidels and dropped dizzily to the ground. White Gunnar was a broken-stringed kite leaning into the sea breeze, expertly maneuvering in the gusty gales. White Gunnar stabbed beached jellyfish with driftwood spears and let sand crabs send him into a disco frenzy by doing the hustle on his forehead (Beatty,35).
The protagonist in Invisible Man struggled with being judged because of “acting white” or doing “white things,” and constantly questioned his ways. However, Gunnar disregarded what others thought about him; he continued to praise the white color and the culture.
The color black continues to take on a negative connotation in The White Boy Shuffle. In relation to Gunnar, black and the Black culture are described as
an unwanted dog abandoned in the forest who finds its way home by fording flooded rivers and hitchhiking in the beds of pickup trucks and arrives at its destination only to be taken for a car ride to the desert…Black was being a nigger who didn’t know any other niggers (Beatty, 35).
Because Gunnar was disconnected from the Black culture, primarily because he grew up in a white neighborhood, and because of his experiences with other Black people in his life, mainly his father, he looked down upon the color and the overall essence of being a Black person.
The color gray is considered to take on a more neutral shade. This part of the gradient scale was saved for last because it is the part where the process of assimilation takes place. It is occupied by Black people who have recognized that, in order to succeed, they must rid themselves of some blackness and add whiteness, making them gray. These people have plied white Americans with the yeses and grins the Invisible Man’s grandfather spoke about. They play the part, separating themselves from the Black community to move up in white society. The clip below highlights the negative and somewhat positive effects cultural assimilation has had on African-Americans.
In Invisible Man, the protagonist’s grandfather gives him advice on how to move up in society as a Black man saying, “Live with your head in the lion’s mouth. I want you to overcome ‘em with yeses, undermine ‘em with grins, agree ‘em to death and destruction, let ‘em swoller you till they vomit or bust wide open” (Ellison, 16). Assimilation immediately becomes the narrator’s way of survival in the world, and has become the answer to freedom from the reputation of the Black community for many Blacks today.
The concept of assimilation is directly mentioned when Gunnar tells a story about his drunken father, saying,
He came naked, his entire body spray-painted white, his face drool-glued against the trunk of the swing-low tree. He ran home under the sinking Mississippi moon, his white skin tingling with assimilation.
His father pretending to be white by spray-painting his body is just a small allusion to what Gunnar really struggles with while living in the white neighborhood.
The symbol of a black and white gradient scale shows how in society, white is considered superior while black is considered inferior. While some manage to eliminate parts of their blackness and add whiteness to become gray, it may not always work to their advantage. Some have been exposed to the white culture and deluded by its failure to see them. Others have realized their invisibility and have abided by the white standard. However, the problem remains that people will never understand that assimilation is not the way to go. Until those who assimilate to the white culture understand that they need to focus more on their Black identity and on uplifting the Black community, the gradient scale will continue to be relevant and accurate. Whites at the top and Blacks at the bottom.
Beatty, Paul. The White Boy Shuffle. New York: Picador, 1996. Print.
cure2arthritis. “Uplift, Accommodation, And Assimilation African American History.” Online video clip. YouTube. YouTube, 26 Jun. 2012. Web. 12 May 2013.
Ellison, Ralph. Invisible Man. New York: Random House, Inc., 1952. Print.
TedVoron. “Pt 1 Glenn Beck AMERICA’S BLACK FOUNDING FATHERS Founders’ Day.” Online video clip. YouTube. YouTube, 28 May 2010. Web. 12 May 2013.
TedVoron. “Pt 2 Glenn Beck AMERICA’S BLACK FOUNDING FATHERS Founders’ Day.” Online video clip. YouTube. YouTube, 28 May 2010. Web. 12 May 2013.
TedVoron. “Pt 3 Glenn Beck AMERICA’S BLACK FOUNDING FATHERS Founders’ Day.” Online video clip. YouTube. YouTube, 28 May 2010. Web. 12 May 2013.
This work is licensed under a Creative Commons Attribution 3.0 Unported License. |
Received: 08/08/2016; Revised: 10/08/2016; Accepted: 17/08/2016
This article surveys research that has been carried out on haematology and related blood cancers. It summarizes studies of blood cancers (the leukaemias) conducted around the world and the results reported by their authors, which should be of interest to readers.
Leukemia is a cancer of the blood cells. Leukemia starts in a cell in the bone marrow. The cell undergoes a change and turns into a type of leukemia cell. Once the marrow cell undergoes this leukemic change, the leukemia cells may grow and survive better than normal cells. Over time, the leukemia cells crowd out or suppress the development of normal cells. The rate at which leukemia progresses, and how the leukemia cells replace the normal blood and marrow cells, differ with each type of leukemia.
It is the most common type of blood cancer and affects 10 times as many adults as children. Most people diagnosed with leukemia are more than 50 years old.
Leukemia is a group of cancers that usually begin in the bone marrow and result in high numbers of abnormal white blood cells. These white blood cells are not fully developed and are called blast cells or leukemia cells. Symptoms may include bleeding and bruising problems, fatigue, fever, and an increased risk of infections. These symptoms occur because of a lack of normal blood cells, with the diagnosis typically made by blood tests or bone marrow biopsy. The exact cause of leukemia is unknown. Different types of leukemia are believed to have different causes, and both inherited and environmental (non-inherited) factors are believed to be involved.
Chronic myeloid leukemia (CML) is a myeloproliferative clonal neoplasm originating in a pluripotent hematopoietic stem cell. The hallmark finding in CML is the BCR-ABL fusion gene, which results from a balanced reciprocal translocation between the BCR (breakpoint cluster region) and ABL (Abelson) genes. Translocation of the ABL proto-oncogene from chromosome 9 to BCR on chromosome 22 occurs either at the chromosome level [the Philadelphia (Ph) chromosome, t(9;22)(q34;q11)] or cryptically at the gene level. BCR-ABL encodes an unregulated, cytoplasm-targeted tyrosine kinase, leading to uninhibited cell proliferation. CML is a triphasic disease, with a chronic phase (CP), an accelerated phase (AP), and a blast phase (BP). Most patients are asymptomatic and diagnosed in CP; if untreated, most patients will progress to the rapidly fatal BP within 3–5 years.
Second-generation TKIs have demonstrated efficacy as first-line treatment of chronic-phase chronic myeloid leukemia, with superiority over imatinib in achieving CCyR and MMR and with lower rates of progression to the accelerated and blast phases. Dasatinib is the only TKI reported to cross the blood–brain barrier. We report a case of isolated CNS blast crisis in a chronic-phase CML patient who achieved CHR and MMR while on primary dasatinib therapy: a young male with chronic-phase CML who, despite achieving an excellent response to dasatinib treatment, developed isolated CNS blast crisis even though this tyrosine kinase inhibitor is the only one reported to cross the blood–brain barrier [4,5].
Primary bone lymphoma (PBL) is an uncommon bone malignancy, first described in 1928 by Oberling as a reticulum cell sarcoma and followed by a series of 17 cases reported by Parker and Jackson. It accounts for 7% of all bone tumors. It is characterized by the proliferation of malignant lymphoid cells within bone. Patients can present with single or multiple bony lesions, with or without regional lymph node involvement; but to be classed as a primary bone tumor, there cannot be any extranodal lesions or supraregional lymph node involvement.
PBL is most commonly identified within long bones, with the femur being the most affected bone overall [3-5]. It can present in any age group, with most cases presenting in older adults. There is a male predominance, with some reports noting a ratio of up to 1.5:1.
The most common symptom is pain without injury, which can be associated with swelling and a palpable mass in some patients. The presence of B symptoms, a finding usually seen in systemic lymphomas, is not typical in PBL. Pathologic fractures and spinal cord compression are rare in PBL and are more associated with systemic lymphoma with secondary bone involvement. Histologic findings show various types of lymphoma, the most common being diffuse large B-cell lymphoma.
Recipients of hematopoietic stem cell transplantation (HSCT) have a high risk of developing viral respiratory tract infections (RTI). The delay in the recovery of lymphocytes, in particular T-lymphocytes [1,2], and the need for immunosuppressive medications to attenuate acute graft-versus-host reactions raise the risk of developing RTI during the first 100 days after HSCT. Persistent declines in airflow in patients after HSCT have been shown for common respiratory viruses (CRV). In addition, RTI involving the lower respiratory tract are associated with substantial mortality.
The rate of viral pneumonia in patients with confirmed viral RTI ranges between 7 and 44% [4–6]. A multi-center European study reported an influenza-related mortality of 6.3% in HSCT patients during the influenza A pandemic in 2009. A seasonal peak can be seen in winter and spring. Other CRV infections, e.g. parainfluenza and respiratory syncytial virus (RSV), also peak seasonally. Preventive strategies and rapid diagnostics are therefore important, especially during these seasonal peaks.
Secondary myelodysplastic syndrome (MDS) is known to be associated with exposure to various harmful factors, including ionizing radiation from different sources (occupational, medical, accidental, and so on). The risk of MDS depends on the magnitude of the absorbed radiation dose. A retrospective cohort analysis of atomic bomb survivors revealed 151 patients with MDS in the Nagasaki University Atomic-Bomb Disease Institute cohort and 47 patients with MDS in the Radiation Effects Research Foundation Life Span Study cohort. The MDS risk persisted in atomic bomb survivors from 40 to 60 years after the radiation exposure and showed a significant linear response to exposure dose level (p<0.001), with an ERR of 4.3 per Gy (95% CI: 1.6 to 9.5; p<0.001). The incidence of MDS among the Chernobyl NPP accident clean-up workers tended to exceed the corresponding value in the population of Ukraine examined over the same time frame (4.58 versus 3.70%). Monitoring of the cohort of acute radiation syndrome (ARS) survivors in the post-accident period of the Chernobyl accident has been performed at the National Research Center for Radiation Medicine (NRCRM) since 1986. Three cases of MDS were diagnosed among the ARS patients. This case report therefore suggests a possible link between irradiation and the development of MDS in ARS patients after Chernobyl and allows these cases to be considered secondary MDS [11-13].
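To make the quoted risk figure concrete (the linear model below is the conventional form used in radiation epidemiology, not an equation given in the article), an excess relative risk (ERR) of 4.3 per Gy means the relative risk of MDS is modeled as

\[
\mathrm{RR}(D) \;=\; 1 + \mathrm{ERR}(D) \;=\; 1 + \beta D, \qquad \beta \approx 4.3\ \mathrm{Gy}^{-1},
\]

so, for example, a survivor with an absorbed dose of 0.5 Gy would have an estimated relative risk of about \(1 + 4.3 \times 0.5 \approx 3.2\) compared with an unexposed person; the wide confidence interval (1.6 to 9.5 per Gy) indicates considerable uncertainty in that slope.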
Chronic lymphocytic leukemia (CLL) is a lymphoproliferative disease characterized by a progressive accumulation of CD19+/CD5+/CD23+ B cells in the blood, bone marrow and lymphatic tissues. The levels of surface immunoglobulins (Ig) and the expression of CD20 and CD79b are distinctively low when compared with normal B cells. Leukemic cells are restricted to the expression of either kappa or lambda immunoglobulin light chains. CLL is the most common leukemia in western countries, with an estimated incidence of 3-5 cases/100,000/year. The median age at diagnosis is 72 years; however, nearly 10% of subjects are less than 55 years old at disease onset. The diagnosis of CLL is established according to the IWCLL-2008 criteria: i) the presence in the peripheral blood of ≥ 5,000 monoclonal B lymphocytes/μl for at least 3 months, with less than 55% prolymphocytes; ii) the clonality of circulating B lymphocytes as assessed by flow cytometry; iii) the typical immunophenotype; and iv) the features of leukemia cells found on the blood smear, which are small, mature lymphocytes with a narrow border of cytoplasm and a dense nucleus lacking nucleoli and with partly aggregated chromatin. The clinical heterogeneity characterizing CLL, with survival times ranging from months to decades, mirrors the biological diversity of the disease. Research on the molecular pathogenesis of CLL has allowed the identification of differences in morphology, immunophenotype, specific chromosomal abnormalities, aberrations in B-cell receptor (BCR) signaling and mutations of cancer-related genes [17-20]. This biological heterogeneity mirrors the wide spectrum of clinical behaviors of the disease, ranging from patients with a slow accumulation of leukemic cells to subjects with rapidly enlarging lymph nodes. Clinical markers including the clinical staging systems (Rai and Binet), lymphocyte doubling time (LDT) and abnormal levels of serum markers such as LDH, beta-2 microglobulin and thymidine kinase have been used to predict tumor burden and progression [21-23]. However, the limitation of these markers is their inability to predict survival and treatment responses.
Acute lymphoblastic leukemia (ALL) is an uncommon disease, with an overall incidence of 1.4/100,000 persons per year in the United States. Roughly 85-90% of adult patients with ALL achieve a complete remission (CR) with current induction chemotherapy regimens. With improved management strategies, including better risk stratification and modern therapeutic tools such as pediatric-based chemotherapy regimens, targeted therapies such as tyrosine kinase inhibitors (TKIs), and allogeneic hematopoietic stem cell transplantation (allo-HSCT), overall survival rates of 40-50% in adult ALL patients are achievable.
Despite these improvements, at least 33% of patients with standard-risk ALL and 66% with high-risk ALL experience a relapse. In patients experiencing relapse, overall survival is much poorer, with only 7% surviving 5 years. Survival was shown to be significantly better when allo-HSCT was performed after first relapse in CR compared with a later CR or with detectable leukemia (56 ± 7% versus 39 ± 11% versus 20 ± 5%, respectively, for three-year survival) [29,30]. Some of the prognostic factors for improved outcomes after allo-HSCT are achieving CR, a shorter time to achieving CR, a lower number of prior treatments, and less comorbidity at the time of allo-HSCT. The most important objective of an effective salvage regimen is inducing CR with minimal toxicity, to allow patients to proceed to allo-HSCT.
Acute lymphoblastic leukemia (ALL) is a hematological malignancy characterized by an uncontrolled proliferation of lymphoblasts. Although it affects all age groups, it is the most frequent form of childhood cancer. The estimated number of new cases of ALL in the United States in 2016 is 6,590, of which the anticipated deaths are 1,430. In India, the lymphoid leukemias are expected to reach 18,449 by the year 2020. Although the causes of ALL are unknown, adverse gene–environment interactions are likely to be involved in the risk of developing ALL. Leukemia usually arises as a result of DNA translocations, different types of mutations in genes regulating blood cell development or homeostasis [36,37], and folate deficiency [38,39].
The folate pathway has two components: methylation reactions and nucleotide synthesis. Polymorphisms in genes involved in the methylation pathway were not found to influence the risk of ALL in the Indian population. Therefore, investigating variants in DNA synthesis pathway proteins, such as dihydrofolate reductase (DHFR) and thymidylate synthase (TYMS), may provide insights into susceptibility to ALL.
Dihydrofolate reductase (DHFR) is a pivotal enzyme in the folate pathway which converts dihydrofolic acid (DHF) to tetrahydrofolic acid (THF). THF is essential for the synthesis of the amino acids and nucleic acids required for cell growth and proliferation. Impairment of the folate pathway results in an uncontrolled proliferation of cells leading to various cancers. The DHFR gene is located on chromosome 5 and has seven transcripts. The major transcript of the DHFR gene was found to carry multiple variants: three stop-gained variants, twenty missense variants, seven splice-region variants, eleven synonymous variants, two coding-sequence variants, ninety-one 5' UTR variants, seventy-three 3' UTR variants, 1211 intron variants, 232 upstream gene variants and 210 downstream gene variants. Among these, non-synonymous SNPs and SNPs in the regulatory region play a major role in gene function and are often reported to play a role in the development of disease in humans [42-45].
Immunosuppression is enormously required in tumor escape from resistant reconnaissance in intense myeloid leukemia (AML) patients. Being an immunosuppressive atom, CD200 is upregulated in some hematological malignancies. CD200 likewise speaks to a free prognostic variable in AML. In the present study, we surveyed the impact of CD200 expression level in AML cases by stream cytometry on common executioner (NK) cell action and assess its prognostic ramifications. In this study it was accounted for that CD200high patients demonstrated a diminishment in the recurrence of initiated NK cells (CD56dim) contrasted and CD200low patients. Survival investigation demonstrated that the patients with CD200High expression had altogether shorter OS (middle, year and a half) than the patients with CD200Low expression (middle, 25 months) (P= 0.0188) with risk proportion of 0.4860 (95%CI: 0.2261–1.0447). Interferon-γ level was profoundly communicated in AML cases with CD200low when contrasted with CD200high (P>0.0001*). For the most part, our discoveries recommend that CD200 overexpression smothers NK cell antitumor reaction in AML patients and consequently expanded danger backslide in AML patients [46-48].
CD200 is a trans-film cell surface glycoprotein having a place with the sort I immunoglobulin superfamily . It is identified with the B7 group of co-stimulatory receptors, with two extracellular spaces, a solitary transmembrane area and cytoplasmic tail without sign theme [50-52]. Articulation of CD200 is ordinarily found in some populace of T and B lymphocytes, neurons and endothelial cells. The outflow of CD200R1 which is the receptor for CD200 is much of the time confined to monocyte/macrophage genealogy and certain populace of T cells prompted cytokine profiles from Th1 to T-administrative cells . Immunosuppression through engagement with CD200R, a cell surface receptor is communicated on leukocyte of myeloid ancestry involving macrophages, pole cells, dendritic cells, basophils and T-cell populace .
In a few human diseases, CD200 expression and capacity has been accounted for before and its appearance in intense myeloid leukemia (AML) was accounted for by Tonk et al as there is overexpression in CD200 in hematological malignancies incorporating AML and in strong tumors. What's more overexpression of CD200 in AML is a poor prognostic pointer, since the outflow of this protein is a typical character of malignancy immature microorganisms and it is firmly identified with the advancement of the tumors . In any case, the outflow of CD200 and immunosuppression has a critical part in the movement of the malady. Undifferentiated organisms and other basic tissues are shielded from insusceptible harm by CD200 that has a focal part in resistant resilience .
Perpetual myeloid leukemia (CML) is portrayed by the Philadelphia chromosome (Ph) coming about because of an adjusted translocation somewhere around 9 and 22 t(9;22)(q34;q11.2). Because of this adjustment, the break-point group area (BCR) quality at position 22q11.2 is compared to the C-Abelson (ABL1) quality at 9q34 bringing about the BCR-ABL1 combination quality, encoding dynamic tyrosine kinase. The ID of Ph chromosome is imperative for finding and treatment reason .
There are 5-10% of CML cases noted to have variation Ph translocations and these discoveries have been accounted for since past 20-25 years [60-65]. Basic variations are cases that included chromosome 22 with a chromosome other than 9, and a Complex Variant Translocations (CVTS) chromosome other than 22 or 9 have been accounted for to go about as third chromosome .
The components of the era of the variation translocations are not completely saw; a few creators have proposed 2 distinct systems: a 1-stage instrument in which chromosome breakage happens all the while on 3 or 4 unique chromosomes in 3 way or 4-way translocation, separately and a 2-stage system including 2 successive translocation in which a standard t (9;22) translocation is trailed by a second translocation including expansion chromosomes .
Histiocytic necrotizing lymphadenitis, the so-called Kikuchi-Fujimoto disease, was first described in 1972 by two independent Japanese pathologists, Kikuchi and Fujimoto [68-70]; it is a rare disease affecting primarily young women. The presenting symptoms are high fever and painful cervical adenopathy, with pathologic findings of histiocytic necrotizing lymphadenitis. Several authors have reported cases with lymphadenopathy in an atypical location, and such cases are difficult to distinguish from malignant lymphoma [72-76]. A biopsy is necessary to arrive at a final histological diagnosis. In patients presenting with cervical adenopathy, the differential diagnosis can be broad. Here, we present the case of a young woman with KFD and review the notable features of this syndrome.
There are different neurologic indications of Multiple Myeloma (MM) seen either at presentation or as an inconvenience of different against myeloma operators regulated throughout the malady. These neurologic entanglements may once in a while be trying to analyze and treat. Fringe sensory system is all the more regularly influenced and fringe neuropathy is the most well-known type of neurologic intricacies found in MM. Here we report a man of his word with MM on customary renal substitution treatment created serious myoclonus 3 days status post autologous undeveloped cell (ASCT) . Other than MM, he had Hypertension and Diabetes Mellitus as to his renal disappointment. He was likewise experiencing neuropathy for which gabapentin was initiated.
Neurologic appearances of plasma cell issue essentially include fringe sensory system, with fringe neuropathy being the dominating structure. Spinal string pressure, leptomemingeal contribution, intracranial plasmacytomas, and cranial paralyses incited by electrolyte and metabolic confusions are among other neurological signs of MM . Also hostile to myeloma treatment should prompt development and/or compounding of the current neuropathy. The subtype, rate and reversibility of medication related over the various operator used to treat MM. We report here a patient with MM who created myoclonus after high measurements melphalan and autologous immature microorganism transplantation.
Liu et al. assessed vitamin D levels and fatigue in acute leukemia patients undergoing chemotherapy. Forty-one patients with acute leukemia (AL) undergoing chemotherapy were enrolled, and 30 patients were prospectively analyzed for the relationship between 25(OH) vitamin D and fatigue. Vitamin D levels were measured, and patients with subnormal levels (<32 ng/ml) were supplemented with 25(OH) vitamin D. Spearman correlation coefficients and the Wilcoxon rank sum test were used for the analysis. Vitamin D deficiency and insufficiency in AL patients are similar to those in the general population. There was no significant relationship (P>0.05) between vitamin D level and fatigue in the study. Consequently, vitamin D supplementation may not improve fatigue in acute leukemia patients with vitamin D deficiency undergoing chemotherapy. However, larger samples should be examined to further assess the effect of vitamin D supplementation on fatigue in cancer patients with vitamin D deficiency.
Systemic fungal infections are increasing globally in patients with immunosuppressive disorders, such as cancer and human immunodeficiency virus (HIV/AIDS), and even in those receiving various antiviral treatments and chemotherapies. Although the disease burdens of fungal infections are potentially high, they are rarely treated as major public health problems, either clinically, in the literature, or on the global health stage, compared with malaria, tuberculosis and some neglected tropical diseases [3,4]. In Cameroon and most Sub-Saharan African countries, fungal infections are increasingly prominent among the opportunistic infections associated with poor immunity, such as with HIV/AIDS [2,5,6]. Limited or no studies in Africa have examined co-infection of systemic fungi and cancer, especially in patients with leukemia. The potential onset of opportunistic systemic infection in leukemia patients may hinder effective treatment of leukemia with chemotherapy.
Monoclonal gammopathies (MG) are a heterogeneous group of diseases ranging from asymptomatic patients to those with severe clinical deterioration.
Wellbeing related personal satisfaction (HRQoL) is progressively utilized as an optional end-point in clinical trials, specifically, in numerous myeloma (MM) - related studies. In any case, a few issues block a summed up use. Initially, the confirmation accessible is still rare; besides, a few shortcomings and irregularities in examination and presentation are watched . Second, institutionalization for information gathering, investigation and reporting is inadequate. Third, a globally accepted survey ought to be utilized.
The European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQC30) is a 30-thing self-administrated survey, with one-week review period, including five useful scales (physical, part, passionate, social, and intellectual working), three manifestation scales (weakness, sickness/retching and torment) and a worldwide wellbeing status/personal satisfaction scale. This is a standout amongst the most generally utilized patient-reported result measures as a part of oncology clinical trials and practice. As of late, the QLQ-C30 has exhibited unwavering quality and legitimacy in MM patients. Its inward dependability has been as of late called attention to for most areas except for psychological working. The QLQ-C30 is viewed as a dependable instrument and may consequently be utilized to help basic leadership forms in clinical trials and in clinical practice.
Essential cutaneous lymphomas (PCL) are a heterogeneous gathering of additional nodal non-Hodgkin lymphomas characterized as threatening tumor got from B, T or characteristic executioner cells. Essential cutaneous follicle focus lymphoma (PCFCL), speaks to the most well-known sort of essential cutaneous B-Cell Lymphomas. Essential cutaneous lymphomas (PCL) are a heterogeneous gathering of additional nodal non-Hodgkin lymphomas characterized as threatening tumor got from B, T or common executioner cells. PLC present signs just in the skin without including other areas right now of determination . PCL speak to the second most regular extranodal lymphoma area after essential gastrointestinal lymphoma . Around 25% of PCL are sorting B-Cell Lymphomas. As indicated by the most recent order these are separated into 3 bunches (WHO - EORTC): Primary cutaneous follicle focus lymphoma (PCFCL), essential cutaneous minimal zone lymphoma (PCMZL) and essential cutaneous diffuse huge B-cell lymphoma, leg sort (PCDLBCL, LT) [3,4]. We show the instance of three female patients going to our administration.
Hemophagocytic lymphohistiocytosis (HLH) diagnosed in the course of acute myeloid leukemia (AML) is generally triggered by treatment-induced infections. AML-induced HLH is a very rare situation for which no diagnostic or therapeutic guidelines are available. We report the occurrence of HLH in an AML5 post-transplant relapse. In our case, the absence of a detectable pathogen and the parallel evolution of HLH and leukemia burden suggested a direct link between AML and HLH. We suggest that the diagnosis of AML-related HLH should be promptly considered in the face of unexplained fever, cytopenia, liver dysfunction or neurological symptoms, as therapeutic intervention is urgent in this life-threatening situation.
Hemophagocytic lymphohistiocytosis (HLH) is a rare and frequently fatal disease. Primary HLH comprises familial disorders due to a range of genetic changes affecting perforin genes. Secondary HLH occurs in the course of infections or malignancies, particularly in lymphoma patients, including T-cell, NK-cell, diffuse large B-cell lymphoma and Hodgkin lymphoma. However, leukemia accounts for only 6% of cancer-related HLH. In acute leukemia patients, HLH is most frequently triggered by infection due to bacterial, viral or fungal pathogens. We report here a case of HLH due to acute myeloid leukemia (AML) relapse.
The Human T-cell Leukemia Virus sort 1 (HTLV-1) is the etiological operator of Adult T-cell Leukemia Lymphoma (ATLL), an uncommon and forceful T-cell harm. The transmission of the infection happens sexually or by IV drug misuse, however the most effective method for viral transmission is through bosom nourishing from a tainted mother to her child [1,2]. This is on the grounds that the bosom epithelial cells control a physiological enlistment of lymphoid and myeloid cells from the dissemination into the milk, while discharging nutritive atoms, anti-microbial substances, development variables, provocative cytokines, and chemokines . Subsequently, bosom milk permits contact between lymphoid cells which elevates cell to cell transmission of the infection, a more effective way of infection spread when contrasted with free molecule disease [4,5]. However, for obscure reasons, just a couple percent of contaminated people create ATLL after a long stretch of dormancy . As of now, there is no real way to foresee which contaminated patients will create ATLL, and there is no viable treatment for those entering the intense period of the infection. Of note, it is still not known whether the joining of the proviral DNA into particular loci in the human genome has a part in ATLL advancement . Also, the idea of the monoclonal malady improvement has as of late been bantered as an aftereffect of profound sequencing results, which demonstrated that various clones can develop amid movement of the infection . It is likewise not comprehended why ATLL grows just in CD4+ T-cells, while the infection is available in all lymphoid and myeloid forebears, including hematopoietic foundational microorganisms (HSC) [9,10]. Nonetheless, information acquired from HTLV-1 contaminated acculturated mouse (HIS) exhibited that high recurrence of HTLV-1 disease was found in the twofold positive T-cells amid lymphogenesis recommending that lymphoid forebears constitute the corner of HTLV-1 contamination. The other contaminated cells either speak to the dormant supplies of the infection or need properties to bolster the procedure of change [11-14]. Since HTLV-1 contamination has developed components that initiate CD4+ T-cells and disable the safe CTL reaction, the result of the sickness to a great extent relies on upon two enemy figures, the proviral load and the proficiency of the insusceptible reaction against the tainted cells [6,15]. Initiation of multiplication and hindrance of tumor silencers are likewise two noteworthy signs of oncogenic occasions happening amid the long stretch of idle disease. In any case, the gathering of hereditary imperfections is accepted to be a main impetus for change [77-80]. How and when these hereditary imperfections aggregate is still under extraordinary examination.
Hodgkin lymphoma (HL) is a B-cell lymphoma that occurs in the lymph nodes (predominantly those in the cervical region) and is characterized by the presence of few cancer cells, usually representing 0.1 to 10% of the total number of cells in the tissue. HL is divided into classical Hodgkin lymphoma (cHL), which is further subdivided by histology into the nodular sclerosis, mixed cellularity, lymphocyte-rich and lymphocyte-depleted subtypes, and cases of nodular lymphocyte-predominant HL [81-86].
HL is one of the most common types of lymphoma, with an annual incidence of 5:100,000 persons worldwide and 3:100,000 persons in the western world [87-90]. Despite its incidence, HL mortality is low, with a cure rate of approximately 80% [2,4]. Currently, the standard treatment for HL is a chemotherapy regimen consisting of Adriblastin, Bleomycin, Vinblastine and Dacarbazine (ABVD), with or without radiotherapy. This combination has been used for more than 20 years and has high efficacy and a low toxicity profile.
Kidney harm in non-Hodgkin lymphoma/leukemia (NHL/CLL) and lymphoplasmacytic lymphomas (LPCL) are created by a few systems: tumor mass restriction; clonal cell extension; hormones, cytokines and development components emission; metabolic, electrolyte and coagulation unsettling influences; statement of paraproteins and treatment difficulties. Side effects of kidney harm may command and even block plain NHL/CLL or LPCL and just renal pathology discoveries provide the insight into the finding. We expected to assess clinical presentation and pathology of kidney harm in patients with NHL/CLL or LPCL. Utilizing electronic database and intentionally outlined graph, we scanned information for 158 patients with lymphoproliferative issue (LPD) and pathology demonstrated kidney sores. Patients with various myeloma, Hodgkin's lymphoma, Castleman infection, "essential" AL amyloidosis and "essential" light chain statement sickness were rejected from further investigation. Study bunch comprised of 24 patients, 14 (58.3%) male and 10 (41.7%) female, middle age 67 (17;76) years. 16 patients (66.6%) were determined to have NHL/CLL, 7 patients (29.1%) with Waldenström's Macroglobulinemia (WM) and 1 (4.1%) with Franklin's ailment (FD). 10 (41.7%) of patients gave nephrotic disorder (NS), 17 (70.8%)–with hindered kidney capacity and 6 (25.2%) with both NS and renal brokenness. By pathology glomerulonephritis (GN) was found in 11 (45.8%) of patients, in 4 cases GN example was connected with monoclonal paraproteins, and in 7 cases GN was thought to be paraneoplastic. Interstitial nephritis was found in 10 (41.6%) patients, in 8 of them because of particular lymphoid penetration; and amyloidosis convoluted just 3 (12.5%) cases. Patients with NHL/CLL or LPCL, giving renal irregularities, show assortment of pathology examples barely unsurprising on clinical premise. Frequently in our patient arrangement was particular lymphoid interstitial invasion and paraneoplastic glomerulonephritis with MN and MPGN designs. As a rule of NS and/or intense kidney damage (AKI) renal biopsy was urgent for the finding of NHL/CLL and LPCL [92-96].
A worldwide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. Despite the large amount of evidence that contradicts the cancer predictions, this fear persists. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors in the combined exposed populations of Hiroshima and Nagasaki invalidates use of the LNT model for estimating the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on around 96,800 individuals, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis [96-100].
Relevant BCR-ABL tyrosine kinase over-movement decides in detailed style the development of multiplication and hostile to apoptosis that emerge to a great extent as determined marvels of generally homeostatic components of the c-ABL quality inside hematopoietic undeveloped cells and hemangioblasts in the bone marrow. The capacity to stifle totally, both as far as phenotype and cytogenetically, the myeloid cell line extension by imatinib mesylate is characteristic of a wonder that depends entirely on the changed status of the phone of birthplace in the endless myeloid leukemia process. It is with pertinence to complex interest of the elements of the melded BCR-ABL protein item that relevant molding of the cells of starting point of the quality translocation further rouses the dimensional extension of the changed myeloid cell clones to expanding proliferative rates, in this way prompting impact emergency as possible loss of separating potential [97,98]. |
When Peter Annin, director of the Burke Center for Freshwater Innovation at Northland College, was completing research for an updated version of his book The Great Lakes Water Wars, he discovered a detail about Great Lakes water diversions that had gone unnoticed for eight years.
According to his findings, the state of Wisconsin never announced it had approved the village of Pleasant Prairie’s request to extract seven million gallons of water per day from Lake Michigan, the largest water diversion in the state.
Annin joined Stateside to talk about whether the diversion might have violated the Great Lakes Compact, a regional agreement between the eight Great Lakes states and two Canadian provinces signed in 2008. It bans the “diversion of Great Lakes water outside the basin, with limited exceptions.” Annin refers to the compact as a “legal water fence” that “is designed to keep Great Lakes water inside the Great Lakes basin.”
Communities that are partially located in the Great Lakes basin, as well as those in counties that straddle the basin, are the only exceptions to that rule. Those communities are allowed to apply for water diversions.
Annin explained that following the signing of the compact, Great Lakes governors were required to report the levels of all existing diversions to be grandfathered into future extraction plans. Wisconsin’s Pleasant Prairie had been approved to use Lake Michigan water in the late 1980s following public health concerns. At that time, Pleasant Prairie was approved for a diversion of 3.2 million gallons of water per day.
According to Annin, when Wisconsin’s state officials reported their diversion levels to the Great Lakes Governors Association, they boosted Pleasant Prairie’s diversion level to 10.69 million gallons of water per day. That made it the largest water diversion in all of Wisconsin, and its approval came with no public notification.
In 2016, when the Wisconsin city of Waukesha wanted to draw increased amounts of water from Lake Michigan, it had to get the approval of the governors of all eight Great Lakes states. This April, before the Foxconn plant in Mount Pleasant, Wisconsin, received approval from the Wisconsin Department of Natural Resources, drafters of the Great Lakes Compact discussed whether or not the corporate use of the water violated the agreement.
“The Great Lakes Compact was adopted in an incredibly transparent way. The Waukesha diversion was also processed in a very transparent way, and what is shocking a lot of people is that you could have this massive seven million gallon water diversion for a relatively modestly sized community—that became the largest water diversion in Wisconsin—and nobody knew,” said Annin.
“Wisconsin now has more water diversions than all other Great Lakes states combined. And it has become sort of what I call the new frontline in the Great Lakes water war,” he added.
Stateside contacted the office of Gov. Rick Snyder for a comment. Press Secretary Tanya Baker sent us this statement:
“All jurisdictions take compliance with the Great Lakes Compact very seriously. The state is looking back through the record to try to understand the history, as well as reviewing the case to see what the impact of the withdrawal is. If an issue is identified, it will be taken up with the Compact Council.”
Annin said that the Wisconsin DNR is adamant they have followed the letter of the law and that the Pleasant Prairie water diversion is legal. But he said not everyone agrees with them, and the Chicago-based environmental group the Alliance for the Great Lakes is currently consulting attorneys over the issue. |
The United States Mint's usage of coining collars has been fairly conservative, in that few attempts have been made to go beyond the plain or reeded collars typical of our circulating coinage and most commemoratives. An early exception was the use of a lettered collar in the creation of pattern silver dollars in 1885. These coins featured the regular obverse and reverse dies of George T. Morgan's standard silver dollar, but each coin displayed the legend E PLURIBUS UNUM in raised characters. This was achieved through the use of a segmented collar consisting of three pieces, each comprising arcs of 120 degrees. These were in a retracted position as the planchet was fed into the press, coming together to form a complete circle of 360 degrees at the moment of striking. It was necessary that they be able to retract outward after striking, because the raised metal of the edge device otherwise would have caught on the stars and lettering and fouled the ejection process.
Superintendent of the Philadelphia Mint, Colonel A. Louden Snowden, believed this segmented collar to have been his own invention, but he was proved wrong in an amusing incident related by famed coin dealer Henry Chapman. In his auction catalog for the Clarence Bement Collection in 1916, Chapman described how Snowden displayed his device to him with pride, noting that Snowden "was going to have it patented and revolutionize the World's Coinage." Chapman added that he "exhibited to him [Snowden] a crown of Oliver Cromwell, and showed him where Thos. Simon had made a better job of it 237 years before," concluding his tale with a declaration that, "The Col. collapsed forthwith."
This same type of segmented collar was again used by the U.S. Mint for circulating coinage beginning in 1907. An essentially identical edge device was imparted to all of the Saint-Gaudens double eagles, while a raised edge of 46 (later 48) stars was produced for the eagle. While these collars no doubt proved more challenging to maintain than the conventional reeded edges used previously, it was clearly demonstrated that raised lettering and other characters were practical for mass production.
After gold coinage ceased in 1933, the Mint reverted to using solid, ring-shaped collars exclusively, with their ordinary plain or reeded edge devices. Not until 1992 was anything out of the ordinary tried again. That year's commemorative silver dollar for the Olympic Games featured an edge that combined reeding with incused lettering that read XXV OLYMPIAD four times around the circumference. This was achieved in two steps: The reeding was applied by a conventional collar in the normal course of striking the coin, after which the coin was rotated within a machine that used a squeezing action to impress the lettering into the reeding. To facilitate this impression and to provide enough contrast to make the incused lettering readable, the coin's edge reeding was very narrow and closely spaced. The resulting effect was one of white lettering within a shaded background. While innovative for United States coinage, this was a technique that had been used by other nations for some years, most notably by Britain in its one-pound coins of 1983 to date.
Before leaving the subject of collars, I'd like to describe some of the more interesting mint error coins that can result when all does not go according to plan. Perhaps the most commonly seen collar-related errors are broadstruck coins. A broadstruck coin is simply one in which the collar failed to move into its proper position around the lower, or anvil, die. This permitted the planchet to expand beyond the coin's normal diameter at the moment of striking. Such coins have no edge device and may have a slightly irregular shape, though this typically is quite subtle.
Another, less common collar error is the partial collar strike, in which the collar is almost in place but does not rise fully to its normal position. This produces a coin that is normal on the enclosed part of its edge and broadstruck on the remainder. The peculiar result is sometimes called a "railroad rim," as the broadstruck portion extends beyond the normally struck area in an effect suggestive of a railroad wheel's extended flange. This type of error is most visually appealing with reeded edge coins, though all variants are highly prized by mint error collectors.
Collars may be thought of as the third die in a complete set and, just like obverse and reverse dies, they sometimes display cracks and other failures. While less noticeable than on the faces of a coin, such breaks will produce areas of raised metal on a coin's edge. While obverse and reverse dies experience compression stress and have a large mass behind their surfaces to reinforce them, collars experience expansion stress and have nothing to support them. In theory, they are thus more likely to fail before producing many error coins. This could account for the rarity of mint errors confined to a coin's edge.
David W. Lange's column, USA Coin Album, appears monthly in Numismatist, the official publication of the American Numismatic Association. |
Disease surveillance is more important now than ever before, and the Ontario Animal Health Network (OAHN) is dedicated to early detection and response to emerging infectious disease in our province’s animal populations.
What is OAHN?
The Ontario Animal Health Network (OAHN) is a collaborative way of looking at animal health and disease in Ontario.
Vision: Public trust and confidence from a collaborative animal health network in Ontario.
Mission: Coordinated preparedness, early detection, and response to animal disease, through sustainable cross-sector networks. OAHN seeks to form a regular line of communication with as many veterinarians across Ontario as possible, both to collect information about disease seen in practice (in a manner that is easy for practitioners), and to share pertinent health and disease information regularly. The desired outcome is to enable veterinarians to make more informed diagnostic and treatment decisions based on current disease information in Ontario.
How does it work?
- Each species sector has an expert network, consisting of veterinarians from: private practice, OVC, the AHL, and OMAFRA. Some networks also include members from producer groups or other government groups. Veterinary representatives are elected by their peers.
- For most networks, a quick, species-specific survey is distributed quarterly to veterinarians to identify syndrome prevalence (e.g., increased cases of neurologic signs).
- Lab data is also compiled quarterly noting top pathogens/diseases affecting each species from AHL (all sectors), Gallant Labs (swine) and IDEXX (equine).
- Species Expert Networks meet quarterly to discuss and interpret the lab data and to review clinical observations from the veterinary surveys and from the network member veterinarians. They discuss implications for animal health, for veterinarians and for industry.
- Network discussions focus on trends, risks and actionable items for each industry (e.g., continuing education needs, research needs, emergency preparations).
- Reports and/or other communications (infographics, podcasts, fact-sheets) summarizing current disease risks and network observations are created and distributed to veterinarians. Some networks also create a report specific to owners or producer/industry groups.
- Research projects are initiated to allow quick investigation into disease trends, based on the networks’ analysis of available data and veterinary observations. These projects provide insight into animal health in Ontario, and help vets make decisions based on current risks and geographically relevant information.
- The Expert Network may be involved with outbreak risk assessments with OMAFRA or urgent matters where consultation with veterinarians is required. Pertinent disease alerts and crucial information are distributed to veterinarians when outbreaks occur.
Species-sector networks are comprised of:
- Two co-leads (one OMAFRA lead veterinarian and one private practice veterinarian or industry representative) jointly facilitate network activities.
- Three to four private practice veterinarians (species experts may be elected in place of a veterinarian for industries, such as bees, where one is not available)
- AHL specialist
- OVC species specialist
- OMAFRA epidemiologist
- Network coordinator
Some networks also include members from other government or industry organizations. |
The Return of Chorb
"The Return of Chorb" explores the aftermath of death and how we deal with the loss of a loved one. What are the consequences of death, and is there some way to overcome it? Spinning off from the myth of Orpheus, "Chorb" follows the story of a man trying to "re-create" the image of his dead wife. A distinction is drawn between the spectral, horrifying appearance of the dead woman and the peaceful, immortal memory of her. The story’s main character tries to escape the former by taking refuge in the latter, but much like Orpheus’s trip to the underworld, this is hardly a walk in the park.
Questions About Mortality
- Is Chorb, like Orpheus, trying to "cheat" death? Does he succeed?
- How is Chorb’s love for his wife altered by her death? How does his fear of her ghost affect the way he remembers her?
- Think about the way Chorb’s wife is killed. How is this particular occurrence integral to "Chorb"?
Chew on This
The absence of sex in "The Return of Chorb" works with the theme of death to create a barren and doomed atmosphere. |
ARE YOU STILL TRYING TO QUIT SMOKING?
The effects of smoking on human health are serious and, in many cases, deadly. Tobacco smoke contains more than 7,000 chemicals, including over 60 carcinogens that are known to cause cancer, says the American Cancer Society (ACS)¹.
How does nicotine affect your body?
- Nicotine reaches the brain within 10 seconds after smoke is inhaled leading to degeneration in a region of the brain that affects emotional control, sexual arousal, REM sleep and seizures ².
- Carbon monoxide binds to haemoglobin in red blood cells, preventing affected cells from carrying a full load of oxygen ³.
- Cancer-causing agents (carcinogens) in tobacco smoke damage important genes that control the growth of cells, causing them to grow abnormally or to reproduce too rapidly.
- Smoking affects the function of the immune system and increases the risk of respiratory and other infections.
- Smoking affects the heart, liver and kidneys, causing weakened arteries and heart attack, as well as cancers of the liver, kidney and bladder.
- The effects of smoking hold additional risks for women: period pains, earlier menopause, cancer of the cervix, infertility and delay in conception.
Among the few natural, scientifically proven approaches – hypnotherapy, meditation, massage and talking therapy – Chinese acupuncture comes first in helping you kick your cigarette habit in the butt.
How does acupuncture help break the smoking habit?
Acupuncture can provide relief for the symptoms associated with nicotine withdrawal, such as the jitters, restlessness and irritability. As the National Cancer Institute (NCI) says, withdrawal symptoms are worst within the first week of quitting and the intensity of the symptoms drops over the first month4.
Acupuncture can be used to regulate the nervous system in order to break nicotine addiction. It aims to balance energy within the body to optimise health. Treatments focus on reducing nicotine craving, cravings for food that usually increase after giving up smoking and lead to weight gain, also anger, depression, anxiety, irritability, restlessness - generally all common symptoms that people suffer when they try to stop smoking. Acupuncture treatments can also aid in relaxation and detoxification.
"One of the most effective natural and drug-free ways to quit smoking is through acupuncture" said Allison Bailey, Harvard trained MD acupuncturist to Medical Daily. "This form of therapy is very effective for treating addictions of all kinds, including for smoke cessation"5.
The American Journal of Medicine (AJM) reports the findings of a review that observed 6 previous clinical trials (823 people) which used acupuncture as an alternative therapy for smokers to kick the habit. They suggest that acupuncture may help smokers quit 6.
How to start with acupuncture treatment
For an acupuncture smoking cessation program to be successful, you will need patience, commitment and preparation. Besides the acupuncture treatments, factors linked with success also include a strong desire to quit, for the patient to always be reminding themselves of the reasons for not smoking, and the support of friends and family.
Prior to receiving acupuncture, the acupuncturist will evaluate several of your smoking habits and make a full diagnosis that will determine a unique method of treatment.
Usually a combination of body acupuncture points and points on the ear is used to influence the organs and energy pathways associated with smoking. Smoking begins to decrease gradually because acupuncture enhances the levels of serotonin in the plasma and brain tissue7.
Acupuncture is not a panacea or magic cure in the treatment of addictions such as smoking, but it is effective in making it easier to quit and to remain smoke-free in the long term.
Book an appointment with Giedre and make this day the day you start on a fresh path of well-being.
A Tale of Two Brigids: A Celtic Goddess and a Christian Saint
St Brigid is one of the patron saints of Ireland. But the virgin nun has roots that go back to the days when the land’s pagan deities received prayers instead. It seems the Celtic goddess Brigid shares more than just a name with the saint.
There are churches dedicated to Saint Brigid in many parts of the world. With time, she became an important icon for the Catholic Church. However, it is still uncertain if she was a real person. An analysis of various resources suggests that her legend actually grew from a myth about a Celtic goddess.
During the first centuries of its existence, the Christian religion adopted and modified many pagan sites and stories. Several churches replaced ancient altars and sacred pagan locations. Moreover, stories about the great people of the past and myths about their deities became the foundation for legends which describe the lives of Christian saints. When the early Christians discovered a powerful story in the land of a recently converted community, they tried to replace it with one of their own.
- Cursing Stones of Ireland: When Christianity and Pagans Pooled Their Sacred Water
- Female Druids, the Forgotten Priestesses of the Celts
- Mysterious Underground Labyrinth in Scotland May Have Originally Been a Druid Temple
Brigid, the Celtic Goddess of Spring
Her name is often said to be Brigid, but she has also been called Brigit, Brig, Brighid, Bride, etc. She was an ancient Irish goddess who was associated with spring, poetry, medicine, cattle, and arts and crafts. Brigid’s feast day was celebrated around February 1 and was called Oimelc (Imbolc). The original Irish text says the following about her:
''Feast of the Bride, feast of the maiden.
Melodious Bride of the fair palms.
Thou Bride fair charming,
Pleasant to me the breath of thy mouth,
When I would go among strangers
'Thou thyself wert the hearer of my tale.''
The name Brigid may come from the word ''Brigani'' meaning ''sublime one''. It was Romanized as Brigantia when that empire was powerful. This form of the name was used to name the river Braint (Anglesey), the Brent (Middlesex), and also Brechin in Scotland. Brigid appears to be related to the Roman goddess Victoria, but sometimes she was presented as similar to Caelestis or Minerva instead.
According to Cormac's Glossary (written by 10th century monks) she was a daughter of the god Dagda, a protector of a tribe. She was worshiped as a goddess of poetry, fertility, and smiths. Her identification with Minerva comes from the interest of both goddesses in bards and artists.
In ancient times, smiths were not only recognized for their craft, but their work was also connected with magic. Brigid was strongly associated with the symbol of fire as well. She was a part of the Tuatha Dé Danann, an Irish supernatural race known from mythology. She may have also been one of the triple deities of the Celts.
Plate of god Dagda of the Gundestrup cauldron. ( Public Domain )
St Brigid of Ireland appears
When Ireland was Christianized, the monks and priests needed good examples to inspire people to follow the new faith. They used the same method as in the other parts of the world and started to create stories which sounded familiar to the inhabitants of the converted areas. In one of these stories they described a woman who connected the two cultures.
According to Catholic resources, St Brigid was born in 451 or 452 AD in Faughart, near Dundalk, County Louth. She was said to be a daughter of a druid man and a slave woman. Brigid reportedly refused many marriage offers and decided to become a nun. She settled for some time near the foot of Croghan Hill with seven other virgin nuns. They are said to have changed their home a few times, but finally the nuns lived in Kildare, where Brigid died as an old woman on February 1, 525 AD. The Catholic Church argues that the date of her death and the pagan goddess’ day is a coincidence; however, it also provides a meaningful link between the Celtic goddess and the Christian Saint.
Saint Brigit as depicted in Saint Non's chapel, St Davids, Wales. ( CC BY SA 3.0 )
In legends, St Brigid was a daughter of Dubtach. She was perhaps prepared to be a druid, though in the end she became a nun. This was quite a popular solution for wise people of pre-Christian religions: To avoid problems, many of them preferred to become a part of monasteries and continue their practice connected with the ancient ways while under the guise of “Christians.”
Like the goddess, St Brigid is associated with fire too. The first biography written about her was made in 650 AD by St Broccan Cloen. However, in the 20th century many researchers began to doubt the historical evidence for her life. Of her, St Broccan wrote:
''Saint Brigid was not given to sleep,
Nor was she intermittent about God's love;
Not merely that she did not buy, she did not seek for
The wealth of this world below, the holy one.''
- The Celtic Goddess Epona that Rode Swiftly Across the Ancient Roman Empire
- Boudicca, the Celtic Queen that unleashed fury on the Romans
- Did Irish Medieval Saints Perform Abortions? Controversy Ahead of 8th Amendment Referendum
The stories of St Brigid have some unusual details that differ from typical early medieval legends of Christian saints. One of the strangest examples is a story of her life with a woman named Dar Lugdach. According to the descriptions, these two women used to sleep together, but not for a lack of the space or beds. The name of the potential lover of St Brigid means "daughter of the god Lugh.” Moreover, St Brigid’s miracles are often strongly related to druid knowledge about alchemy, magic, and other disciplines.
Saint Brigid of Kildare. ( Public Domain )
A Double Symbol in Ireland
The history of both of the women is connected with the Brigantes tribe. They were both associated with Leinster, which was the tribe’s center. The monks who described the legend of the goddess in the 10th century would have already known the story of St Brigid as well. Thus, both of the women are icons supported by different groups. Many people agree that there is no reason to separate the two stories, and today the followers of pagan religions worship both of them as one – the goddess Brigid.
The goddess of spring. ( SPIRITBLOGGER'S BLOG )
St Brigid is still one of the most important Irish saints and for the pagans she's seen as a continuation of old Irish traditions. The stories of both Brigids have inspired many writers, artists, etc. Both of the legendary females have become important symbols in Ireland and nowadays it is hard to decide which one means more. While the researchers argue about the evidence of their existence and connections, many people enjoy the celebration of both female icons on February 1 when they hold traditional feasts in their name.
By: Natalia Klimczak
Ernest Abel, Przewodnik po świecie duchów i demonów, 2011-2013.
Brighid, available from:
Carmina Gadelica, Volume 1, by Alexander Carmicheal, available from:
St. Brigid of Ireland, available from:
LECTURE 16: MEASURING GENETIC DIVERSITY
Fisher & Haldane: quantitative aspects of evolutionary change; population genetics into evolution
Wright: Mendelian law, natural selection, and continuous variation – mathematical
Synthesis of ecology (Darwinian natural selection and genetics)
Theory of empiricism.
Not all genes are variable
What forces influence diversity?
• Random genetic drift (small population size effects)
  o Can lead to reduction in diversity
• Selection
  o Purifying (negative) – mutations that reduce fitness are removed by selection
  o Positive selection (adaptation) – mutation arises and is favoured.
  o Balancing selection – heterozygotes are favoured over homozygotes
Fisher felt that all important evolution occurred in large populations by natural selection; genetic drift = negligible
Wright – genetic drift played important role in evolution
^CONTROVERSY IN EVOLUTIONARY BELIEFS
Importance of mechanism:
• Mutation and strength of selection
• Natural selection taking mutations out
• Different selective forces
Until recently, the dream of 3-D printing was some far-off science-fiction story for late-night TV; however, recent innovations have brought many parts of 3-D printing into everyday reality. Making food products like they do on Star Trek with their “replicators” is probably still quite a few years away, if it ever happens, but manufacturing products out of plastic is already a reality and the future looks bright.
I managed to call upon printing expert Tony Hunter from Leggero-Forte.Com, who helped put this article together; I’m sure you will find it really useful.
New innovations in 3-D printing
New innovations are happening on a daily and weekly basis, involving new materials, better programs, new designs, and higher-quality production. Right now, many manufacturers of automobiles, boats, motorcycles, sporting goods, and other research and development reliant industries are using 3-D printers to speed up the process between the computer design and a finished prototype.
Years ago, artists would draw the prototypes on a piece of paper, then an engineer would decide whether the designs were actually physically possible; after that, a skilled craftsman would begin building the prototype by hand, while staying in constant contact with the engineer and the original designer. This process, including changes and updates, could take several years before the finished product finally hit the showroom floor.
3-D printing on the other hand, has changed all that in a dramatic way. Now it is possible for a designer to sit at a computer and design a product that can be sent, at a touch of a button, directly to the 3-D printer, which can begin printing out a prototype in a matter of hours. The computer programs already know what is and isn’t physically possible in the prototyping and producing stages thus eliminating several steps.
Then, the 3-D printer can begin immediately building a prototype layer by layer, slowly, but perfectly, as the designer intended. Now, this whole process from designer to prototype can be done in just weeks or months for a mere fraction of the previous cost.
Very soon a 3-D printer that can work with carbon fiber will be available for purchase.
Carbon fiber is an amazing material that is both incredibly lightweight, and one of the strongest materials you can imagine. A boat built from carbon fiber material can strike a rock and never even crack the hull, or a car made from carbon fiber could possibly withstand a blow from a 2 pound hammer without even a noticeable dent. When we start making items out of carbon fiber, many of these products will have a lifetime of nearly 100 years with no rust, decay, or UV light damage to speak of.
Durability of purchased goods will never be a problem in the future, with possible lifetime guarantees.
Another fantastic feature that is already a reality with 3-D printing is the ability to use recycled products to make new ones. There is already a 3-D printer available that can grind up plastic soda bottles and re-melt them into the raw material needed to create new products from plastic.
Not only will this reuse waste that would otherwise end up in our landfills to make new products, but it will also decrease our use of the nonrenewable resources that cause a large percentage of our air, water, and land pollution. Even the products that this revolutionary 3-D printer makes will be recyclable as well, imagine that.
In just a very short while, kids will be designing their own custom-made shoes, sandals, iPhone cases, and sunglasses on their own home computers, then printing them up and wearing them for all to see. A complete industry will emerge for millions and millions of designers who have the ability to design products that are able to be made on 3-D printers.
These designs will be bought, sold and delivered over the Internet, much the same way that e-books and music are distributed today. It’s an exciting world that is on its way to literally every part of the planet very soon.
Amy Rice enjoys writing about 3d printing, when not writing she goes horse riding and swimming. |
Like in other countries home to diverse populations, residential segregation is an increasing challenge in the Scandinavian countries. Policymakers in Norway, Sweden and Denmark are adopting different approaches to combating it.
Three countries, three strategies
In the report Scandinavia’s segregated cities – policies, strategies and ideals (fagarkivet.oslomet.no), researchers from the Norwegian Institute for Urban and Regional Research (NIBR) at OsloMet together with colleagues from the Institute for Social Research take a close look at the policies the three Scandinavian countries use to fight residential segregation.
One way of summarising the differences is to conclude that the Swedish and Norwegian strategies have a greater focus on structures and limitations, while the Danish strategy focuses more on individuals.
Denmark: a focus on the individual
"The Danish approach is largely based on sanctions against individuals, while the other two countries focus more on strengthening individuals who meet structural obstacles such as discrimination and inequality," explains project head Anne Balke Staver, a researcher at NIBR.
"The difference in approach mirrors a difference in how research is applied in policy development. While expert knowledge is frequently used as a means of understanding the segregation issue and developing measures in Norway and Sweden, there are few such references to research in Denmark," she continues.
Staver points out that Denmark, which has more social housing and where a lower percentage of people own their own home, also employs some political instruments that are not available in Norway and Sweden.
The three countries all aim to devise measures to reduce segregation in cities, but they define the problem in different ways and have different theories about the causes.
While the Danish strategy focuses on areas with large immigrant populations, Swedish policymakers instead focus on neighbourhoods with high concentrations of low-income families.
Sweden and Norway: structural causes
In Sweden, ethnic segregation is typically treated as a symptom of underlying socioeconomic segregation, and not a problem in and of itself.
Norwegian researchers, for their part, tend to look at a broad range of factors that contribute to poverty and exclusion. The Norwegian and Swedish approaches, in other words, are broadly similar.
Sanctions and control
Denmark distinguishes itself from the other Scandinavian countries in how policymakers there focus attention on the kinds of urban neighbourhoods people live in.
As Staver explains: "People who receive social assistance in Denmark can have their benefits reduced if they move to what are known as hard ghettos."
Denmark also focuses on crime, particularly by creating specific areas where breaking the law can lead to harsher punishment.
A number of measures also target children and schooling, including compulsory kindergarten and language testing. Sanctions have been introduced to enforce these measures—the child benefit families receive can be reduced if children do not attend kindergarten.
Addressing the root causes in Sweden
While the Danish strategy proclaims that the "ghettos" are to be eliminated by 2030, the other two countries have more modest ambitions to reduce segregation in the labour market, education and participation in order to improve living conditions in these areas.
The Swedish strategy has five areas of intervention:
- housing
- education
- the labour market
- democratic participation
- crime
Simplifying planning processes to allow more homes to be built is a policy proposal that has significant support in Sweden. The self-settlement policy for asylum seekers is also slated for reform to encourage asylum-seekers to settle outside of areas with large numbers of immigrants.
Various labour market measures in Sweden target newcomers, young people, women and the long-term unemployed. The new coalition government that took office in January 2019 agreed to introduce a new "start job" scheme and to reform the Employment Service.
Combating segregation in Norway
The Norwegian strategy is organised around neighbourhood-based policies. As in Sweden, much of the focus of policymakers is on work and education initiatives, and measures targeting children.
"Like in Denmark, the objective is for every child to go to kindergarten, but rather than imposing sanctions, Norwegian policymakers want to achieve this through positive measures such as the provision of free kindergarten," Staver explains.
The housing-related measures in Norway mainly involve subsidies to help low-income families gain access to and remain in the housing market.
Disagreement on what causes segregation
The approaches the three Scandinavian countries adopt through their policies betray different views on the causes of segregation.
"The Danish strategy focuses on immigration from non-Western countries and the idea that insufficient demands have been made of immigrants," the researcher explains. "The result ,in their view, is that immigrants have decided to cluster together." This understanding of the cause of segregation explains why Denmark has introduced policies that amount to sanctions against individuals.
The Swedish strategy takes into consideration a broader range of factors, in particular growing socioeconomic inequality over a period of many years. It seeks to address different areas of life—housing, education, the labour market, participation in society and crime—and attempts to draw links between these different areas.
At the same time as a new integration strategy was launched in Norway, a commission was appointed to investigate the causes of segregation, with the mandate to look at challenges in living conditions and housing market factors in particular.
Immigration, integration and segregation
The Danish strategy rests on the assumption that the country will admit relatively few immigrants in the years to come. This assumption is not as explicit in the other countries, but both Sweden and Norway are now increasingly settling newly arrived refugees away from immigrant-dense areas in order to create better conditions for successful integration. |
Medical Definition of acid fuchsin
: an acid dye used chiefly in histology as a general cytoplasmic stain and for demonstration of special elements (as mitochondria)—called also acid magenta
Historia vero testis temporum, lux veritatis, vita memoriae, magistra vitae, nuntia vetustatis .....
"For history is the witness of the past, the light of truth, the survival of memory, the teacher of life, the message of antiquity" (M. Tullius Cicero, De Oratore ("On Oratory") 2.36).
What did Cicero mean when he so defined history in his monumental work on oratory and rhetoric? Does history really testify to past events? Does it actually reveal certain truths? How accurately does history preserve the memory of the past? Does it teach us anything, and are we amenable to learn from it? If history is the nuntia vetustatis, the "message of antiquity," which messages does history convey?

Cicero's interpretation of the value and meaning of history will guide us as we explore the social, political, economic and artistic contributions of the Romans to western civilization. Using literary, historical and archaeological methodologies, we will examine the thousand years of Rome's history - from its foundation by the mythical Romulus, to its domination over the Mediterranean world and central Europe, to its slow and gradual decline. As we study Rome's storied past, you will develop proficiencies in the details that comprise Roman history and an understanding of such broad topics as the elegance of Etruscan civilization, Roman relations with foreign nations, social and political institutions, imperialism, the golden age of Latin literature, and the spread of Christianity. In the latter part of the semester we shall give special attention to daily life in ancient Italy and the provinces.
Ecoregions of the Natchez Trace Parkway
- Grade Level: Seventh Grade-Eighth Grade
- Subjects: Ecology, Environment, Science and Technology
- Duration: 2 class periods
- Group Size: Up to 36
- National/State Standards: 7th Grade Life Science: 3, 3a; 8th Grade Life Science: 3e
- Vocabulary: biome, abiotic, biotic, ecoregion, ecosystem, landform, ecology, environment, Science and Technology
Overview
The students will investigate an ecoregion of the NATR and fill out a worksheet with ecoregion characteristics. They will then collect the information into a class scrapbook. If necessary, the teacher will review with the class the words biotic, abiotic, biome, and landform. The teacher will assign pairs or small groups of students to research the various ecoregions found along the Natchez Trace Parkway. The teacher will assist the students in compiling a scrapbook containing their research.
Enduring Understanding: An ecoregion is part of a biome that has a particular soil type and landform.
Essential Question: What are some different examples of ecoregions?
The students will:
1) describe the biotic and abiotic characteristics of at least one ecoregion found along the Natchez Trace Parkway
2) understand how a disaster might affect the ecosystem
An ecoregion (or bioregion) is a part of a biome. A biome may contain many different types of soils and landforms. An ecoregion is part of a biome that has a particular soil type and landform. There can be many different types of ecoregions in a biome. Even though two of the same kind of ecoregion may not be near each other, they will usually have many of the same types of biotic factors (plants and animals). An easy example to imagine is our country's Atlantic shoreline. The shore is within a deciduous forest biome. The shoreline in northern Florida has a similar soil and landform to the shoreline in Maryland even though they are far apart. Both are different from the Appalachian Mountains that are in the same biome. They may be considered the same ecoregion. They would have the same or similar plants and animals. Different scientists may have different definitions for various ecoregions. Ecosystems are smaller, localized areas within ecoregions.
The Natchez Trace Parkway has seven different ecoregions (see the diagram). See teacher answer sheet for summarized properties of the seven ecoregions.
General Characteristics of Natchez Trace Ecoregions
Maple-Oak-Hickory-Ash Forests
- Gravelly streams underlain with limestone
- Transitional to Mixed Forest of the Appalachians
- Home of rare cedar glade ecosystems

Fall Line Hills
- Mixed Oak-Pine Forest
- Ecoregion with highest number of currently threatened or endangered species (5)

Blackbelt Forest and Bluestem Prairie
- Highly diverse: 60+ bird species and 400+ plant species

Northern Hilly Gulf Coastal Plain
- Mixed Pine-Oak Forest
- Home to more than 30 species of reptiles and amphibians

Southern Rolling Hills
- Oak-Hickory-Pine and Southern Floodplain Forests
- Naturally fertile soils have largely been converted to agricultural uses.

MS Valley Bluff Hills and Loess Plains
- Oak-Hickory-Pine Forests
- Rare loess soil found in only one other North American location.
Materials
1.) Student Instructions and worksheets (3 pages)
2.) Access to internet
Student Task: The students will fill out the worksheets by researching their assigned ecoregion on the internet. They are encouraged to find pictures to illustrate their research. When all research is done, the students will share what they learned about their ecoregion and compile a scrapbook.
Student Instruction: See Student Instruction Sheet in Materials
Teacher Closure: The teacher will display the scrapbook in the classroom or library
Assessment
Quality of research and completeness of worksheets
Park Connections
Discusses the ecoregions of the Natchez Trace Parkway.
1.) Relate Natchez Trace ecoregions to other areas
2.) Visit the Natchez Trace Parkway and discern various ecoregions and/or ecosystems. Compare the Natchez Trace ecoregions with those of other National Parks. |
Arthroscopy is the practice of looking inside a joint with a small camera. It allows a thorough assessment of the bearing surface of the joint (articular cartilage), the lining of the joint (capsule), as well as the surrounding muscles, ligaments and tendons.
A number of problems around the elbow can be addressed through keyhole surgery. Arthritic changes to the joint surface can be assessed in detail and damaged areas of cartilage can be removed. It is also possible to remove small areas of abnormal bone (osteophytes) which can restrict movement of the joint. In cases where arthritic change has affected the radial head, which may give rise to pain on gripping and rotational movements of the forearm, this can be removed.
Arthroscopy can also be utilized to release internal scarring within the joint in cases of elbow stiffness, either following a fracture or as part of the symptoms of arthritis. |
Characteristics of Wetlands
From just these definitions, it is clear that a wetland has soil that becomes saturated from precipitation, bodies of water such as rivers and oceans, or from ground water. The saturation must be predictable to some extent. The saturation may be relatively constant at the edge of a river or other permanent body of water like a lake. It may happen daily, where tides flood the area and recede. It may become saturated seasonally for extended periods by rain or snow raising the water table.
This saturation impacts the soil and what lives in it. Dry soil has pockets of air in it, providing oxygen to plants, bacteria, and animals for respiration. When air in the soil is replaced by water, as in gleyed soil, it changes the types of bacteria that live in the soil. The type of bacteria will impact the acidity of the soil and decomposition. Hydric soil can be anaerobic, with obligate or facultative anaerobic bacteria living in it. These anaerobic bacteria give wetlands the methane and sulfur smell often associated with them. Some are responsible for maintaining the nitrogen cycle. Others maintain the sulfur cycle. The reduction of inorganic molecules by bacteria in wetlands gives rise to the hydric soil.
Any mix of interdependent plants and animals is shaped by its physical environment of air, land, and water. In wetlands, setting latitude aside, water - how much there is, and how often and how long it saturates the soil - together with salinity and pH shapes everything else. Abiotic conditions shape the mix of plant species. Plants, as the basis for the food web, together with hydrology and latitude, shape what animals live in a wetland.
Coronary artery bypass graft (CABG) surgery is indicated for patients with coronary artery disease to relieve symptoms, improve quality of life, and/or prolong life. More than 300,000 patients undergo CABG surgery annually in the United States with an initial hospital cost of approximately $30,000 per patient. As operative techniques continue to improve and perioperative care is enhanced, patients who were once denied surgery may now be surgical candidates. With this increase in the complexity of surgical cases, it becomes even more crucial that there be an effective collaboration among the surgeon, the anesthesiologist, the perfusionist, and the perioperative nursing staff.1
The patient undergoing CABG surgery deserves to have confidence that the professional nurse is knowledgeable, caring, efficient, and effective in providing necessary perioperative care. Proper preparation of the patient and significant others, expertise during the intraoperative phase, and a thorough knowledge base combined with skill and compassion of the nursing staff during the postoperative phase increase the likelihood of a positive outcome for the patient.
The Preoperative Phase
Preoperative preparation of patients and significant others is a well-established protocol in most institutions. Research has shown that education of the patient prior to surgery assists with recovery, increases patient contentment, and decreases postoperative complications.2 Appropriate timing of preoperative preparation is helpful for the patient's information retention. Because impending open heart surgery is anxiety provoking to most patients, it is imperative for the nurse to assess the patient for individual learning needs and provide the information in a timely manner to minimize as much anxiety as possible. It has been suggested that state anxiety levels are lower 5 to 14 days prior to CABG surgery, which makes this an ideal time for teaching.3 A high anxiety level is not conducive to retention of information. Benefits of preoperative teaching may be maximized when information is presented during the period when the patient has the lowest anxiety. Many patients are admitted on the day of surgery. Bringing them into the hospital for preadmission testing several days before surgery and completing the preoperative teaching during this time may be effective. Some patients want specific details about the perioperative experience, whereas others seem to need only the reassurance that a knowledgeable and compassionate caregiver will provide the needed perioperative care. The skilled professional nurse individualizes preoperative instruction to meet the specific needs of that patient.
Information when conducting preoperative teaching with a patient scheduled for CABG surgery may include sights and sounds that will be experienced, invasive lines that will be inserted, anticipated sensations from preoperative medications, and anticipated length of the operation. During the preoperative teaching session, the nurse should also provide information related to postoperative expectations.
Reassurance that pain will be managed during the postoperative period is important to communicate to the patient and significant other. Teaching about incision splinting and availability of effective pain medications should be emphasized.
Patients should be informed that an endotracheal tube will probably be in place postoperatively, resulting in a temporary inability to speak. Assure the patient that a competent caregiver will be in close proximity during the immediate postoperative recovery period and will be able to anticipate and provide for needs. The patient should be assured that the endotracheal tube will be removed as soon as it is no longer needed.
Pulmonary care is an important part of the postoperative care of the patient after CABG surgery. Preoperative practice with the equipment (such as an incentive spirometer) that will be used postoperatively is helpful. Teaching in the preoperative period assists the patient to comprehend the necessity of coughing effectively in spite of incisional pain to achieve positive outcomes postoperatively. Early mobilization is effective in improving postoperative pulmonary outcomes.4 Preoperative teaching might include information related to the potential for mobilization to a chair during the first evening postoperatively.
The significant other may be anxious and this may intensify as his/her loved one is taken to surgery. Separation is inevitable, but communication with the significant other during the intraoperative period is helpful to minimize anxiety. There are often questions about the length of the operation, the condition of the patient, and when the anticipated reunion will be possible.
Nursing interventions important for significant others include teaching them about the expected patient appearance. The patient may appear pale, cool, and edematous. The nurse should also discuss equipment that will be connected to the patient. This equipment will include the ventilator, chest tubes, nasogastric tube, invasive lines, and urinary catheter (see Table 1).
TABLE 1. Important Preoperative Teaching Points
The Intraoperative Phase
The intraoperative events during cardiac surgery influence nursing care postoperatively. A typical scenario will be discussed to assist the nurse in understanding rationale for postoperative care.
Prior to initiation of anesthesia, most cardiac surgery patients undergo the insertion of a large-bore peripheral intravenous catheter, an arterial line, and a pulmonary artery catheter. These are needed so intravenous fluids can be administered and hemodynamics monitored during the operation and in the postoperative period.
After the insertion of the invasive lines, anesthesia will be administered. It is important to provide anesthesia, analgesia, and amnesia with agents utilized during the operation. These effects may be accomplished with inhalation and intravenous agents. After anesthesia is induced the patient will be given a neuromuscular blocking agent, such as pancuronium or rocuronium, to facilitate endotracheal intubation and relax the skeletal muscles. Inhalation agents and intravenous narcotics are given to induce anesthesia. Examples of inhalation agents are desflurane and sevoflurane. Inhalation agents can be cardiodepressive, so providing the minimum dose for the therapeutic effect is desired. Narcotic agents such as fentanyl will assist with anesthesia and will also promote analgesia.5 Amnesia can be accomplished with the inhalation agents as well as with a benzodiazepine such as midazolam. After the patient is anesthetized, there will be a head-to-toe surgical preparation and insertion of a urinary catheter.
The standard surgical approach is via a median sternotomy. Sources of grafts can be the internal mammary artery, the radial artery, the gastroepiploic artery, and/or the saphenous vein. The internal mammary and the saphenous vein continue to be most commonly used for grafts. At 5 years postoperatively, 70% to 80% of saphenous vein grafts are patent compared with a 40% to 60% patency rate at 10 years. In comparison, there is a 90% patency rate of internal mammary artery grafts at 10 years.1 Heparin is administered to promote anticoagulation. The activated clotting time is measured during surgery to determine the effectiveness of the anticoagulation and therefore guide the amount of heparin that is administered.
The cardiopulmonary bypass (CPB) machine can be used during the operation to maintain cardiopulmonary function and tissue perfusion. Sites of cannulation for CPB are usually the aorta and the right atrium. After the aorta is cross-clamped, cardioplegia is administered to stop the heart. Cardioplegia can be a cold solution that is high in potassium. In certain patient populations, warm blood cardioplegia may be indicated.1 The surgeon performs the anastomoses while the heart is stopped. The shorter the time on the bypass machine, the less likely there will be complications related to extracorporeal circulation. The inflammatory response is activated secondary to cardiac surgery. This may be related to the manipulation of the heart and/or the effects of the CPB machine.6
During extracorporeal circulation, anesthesia may be maintained with propofol, an intravenous medication that provides anesthesia as well as amnesia. Propofol can cause myocardial depression and hypotension so the hemodynamic status of the patient should be closely monitored. Propofol is contraindicated in patients with allergies to soybean oil or eggs.7
Rewarming the body must occur prior to the completion of the operation to begin to offset the surgically induced hypothermia. Rewarming is initiated with the heat exchanger on the bypass machine while the surgeon finishes the anastomoses. The cross clamp is then removed from the aorta. The intrinsic cardiac rhythm is often spontaneously reestablished as blood begins to flow through the heart. Sometimes defibrillation is necessary if the heart does not automatically resume sinus rhythm. After the adequacy of the heart rate and blood pressure (BP) is certain, the patient is separated from the CPB machine and protamine sulfate is administered to reverse the effects of the heparin. Inotropic agents may be required to wean the patient from the bypass machine if cardiac index is diminished. Epicardial atrial and ventricular pacemaker wires may be inserted at this time. Mediastinal and/or pleural chest tubes will be inserted. The sternum is wired, the tissues are sutured, surgical dressings are placed, and the patient is transported to the recovery room.
Some surgeons elect off-pump coronary artery bypass (OPCAB). The potential complications of extracorporeal circulation are minimized with this surgical option.8 Research has been conducted related to the benefit of the OPCAB procedure. Potential benefits include a decreased need for blood transfusions, decreased time in the intensive care unit, and reduced hospital time with a potential decrease in hospital cost.1
With the OPCAB procedure, a [beta]-adrenergic blocking medication such as esmolol may be used to slow the heart for the anastomoses to be completed. Surgical stabilizers may be used to decrease the motion of the heart so that the surgeon can complete the anastomoses.8 Heparin is administered with the OPCAB to prevent potential clotting. The patient may receive protamine to reverse the heparin at the end of the operation. A smaller dose of heparin may be used with the OPCAB than if extracorporeal circulation is used. Fluid shifts and hematuria related to long pump times would be minimized and hemodilution from priming the CPB machine is not an issue with the OPCAB. Also, there may be fewer complications from the inflammatory response that appears to be related to blood contact with the bypass machine.9 The patient's postoperative body temperature may be lower than a patient who was on bypass because the heat exchanger on the pump cannot be utilized for warming. Because of the reduced body temperature, bleeding may be exacerbated. Because there is no need for cannulation of the aorta and the right atrium, there are fewer puncture sites for potential postoperative bleeding.
The Postoperative Phase
Postoperative care of the cardiac surgery patient is challenging in that changes can occur rapidly. The preoperative condition of the patient as well as intraoperative events should be considered in postoperative care. It is essential for the nurse to anticipate the possible complications so that appropriate interventions are initiated in a timely manner in order to ensure a positive outcome for the patient.
There is a flurry of activity as the patient enters the recovery room/ICU and the admitting nurse connects the patient and the invasive lines to the monitoring equipment while another staff member connects drainage devices appropriately and draws admission blood work. The operating room nurse and the anesthesiologist report the patient's condition to the receiving nurse.
Postoperative Pulmonary Management
Pulmonary dysfunction and hypoxemia may occur in 30% to 60% of patients after CABG.10 Patient history and intraoperative factors must be considered in the postoperative pulmonary management. A history of smoking, obstructive pulmonary disease, steroid use, gastroesophageal reflux disease, heart failure, and poor nutrition may increase postoperative pulmonary complications.11
Although there are some variations to this protocol, most patients will be intubated and mechanically ventilated upon arrival in the recovery room. Desired outcomes include adequate oxygenation and ventilation while the patient is intubated. Early extubation is also a desired outcome as long as the patient is hemodynamically and neurologically stable. There is potential for an increase in postoperative complications when patients are intubated longer than 24 hours. The length of hospital stay may also increase with longer intubation times.12 The current trend is to extubate patients within the first 12 hours after surgery. On occasion, patients may be extubated in the operating room. Routine postoperative care to promote oxygenation and ventilation involves prevention and treatment of atelectasis and pulmonary infection as well as maintenance of effective gas exchange and breathing patterns.
There are several factors during heart surgery that increase the potential for pulmonary complications postoperatively. The length of the surgery and resultant increase in the amount of needed anesthetic agents, the amount of fluids administered during the intraoperative period, and prolonged time in the supine position increase the potential for pulmonary complications. Atelectasis can be related to cardiopulmonary bypass, surfactant inhibition, and stimulation of the inflammatory response.9 Atelectasis, as well as the inflammatory mediators, inhibits diffusion of oxygen and carbon dioxide across the alveolar capillary membrane and impairs effective gas exchange. Prolonged pump time causes fluid shifts, potentially increasing the amount of fluid in the pulmonary tissue, thus increasing the possibility of pulmonary complications. Pain caused by the sternotomy can impair breathing patterns. Some patients shiver after heart surgery and this response may lead to an increase in the carbon dioxide level or lead to lactic acidosis. Shivering may increase the body's oxygen consumption; therefore, oxygen levels should be monitored and adjusted accordingly. Shivering may be the result of the body compensating for the surgically induced hypothermia or a reaction to anesthetic agents. Shivering is usually managed by administration of sedation and neuromuscular blocking agents while the patient is being mechanically ventilated.
Postoperative management includes accurate and frequent physical assessment, arterial blood gas analysis, continuous pulse oximetry, pulmonary care (including suctioning while the patient is intubated and coughing and incentive spirometry after extubation), early mobilization, and control of pain and shivering. Most protocols require a chest x-ray after heart surgery to determine placement of the endotracheal tube, thermodilution catheter, and nasogastric tube as well as information about the width of the mediastinum, amount of atelectasis, presence of hemothorax or pneumothorax, and size of the heart.
Pain control is usually achieved with intravenous narcotics while the patient is intubated. Oral and/or intravenous narcotics may be used after extubation. The nurse must balance the need for pain control without respiratory depression with the patient's need to have his/her pain minimized to allow an effective cough.
The nurse must assess the patient for readiness for early extubation. Extubation should be considered when the patient is arousable, able to follow commands, hemodynamically stable, and initiating spontaneous ventilations without excessive respiratory effort. Typical intensive care protocols for the cardiac surgery patient include preprinted orders that facilitate the weaning process. As the patient is being weaned from the ventilator, ventilatory support is gradually withdrawn and the patient must sustain spontaneous ventilations. Physical assessment of effective ventilation and laboratory analysis of arterial blood gases and specific ventilatory parameters must be completed prior to extubation. Protocols may vary, but some standards require a PO2 > 80 mm Hg on a FIO2 of 0.40 or less, a PCO2 less than 45 mm Hg, a pH between 7.35 and 7.45, and an oxygen saturation (SaO2) >92%. Ventilatory parameters include a maximum inspiratory pressure of at least -20 cm H2O, a tidal volume of at least 5 mL/kg body weight, and a minute volume of at least 5 liters per minute (see Table 2). During the weaning process, the nurse should assess the patient for an increase in respiratory and/or heart rates, use of accessory muscles, fatigue, and color changes because these findings may indicate the patient is not ready for extubation. An increase in pulmonary artery pressures can indicate an increase in PCO2 and give the nurse an early indication prior to arterial blood gas analysis that the patient is not ready for extubation. Early extubation is desirable but if parameters are not met and/or the patient is hemodynamically unstable, there may be detrimental effects of early extubation.
TABLE 2. Extubation Weaning Parameters
Postoperative Management of Hemodynamics
Movement of the patient from the operating room to the recovery room/ICU can create hemodynamic instability, and thus, reconnection to the monitoring equipment in a timely manner is of the essence. A cuff BP is usually taken to provide correlation of the BP obtained from the arterial line.
Intraoperative myocardial ischemia is a potential cause of low cardiac output (CO) during the immediate postoperative period. The nurse must continually assess the patient for cardiac dysfunction and hemodynamic instability. The receiving nurse must intensively monitor the interrelationship between heart rhythm and rate, preload, afterload, contractility, and myocardial compliance to achieve this outcome. Preload is determined by the volume of blood returning to the right atrium as well as by myocardial compliance. Preload is a measurement of end diastolic pressure. Afterload is the force the left ventricle must overcome to eject blood during systole. It is determined, in part, by myocardial contractility and systemic vascular resistance. Myocardial contractility refers to the force generated by the heart during systole.13 Myocardial compliance is the ease with which the heart distends during diastole.14
Blood pressure must be maintained within ordered parameters to provide tissue perfusion and prevent disruption of the surgical anastomoses. BP is CO multiplied by systemic vascular resistance (SVR). The nurse must monitor the volume in the system, which is reflected by the right atrial pressure (RAP) and pulmonary capillary wedge pressure (PCWP).
If the BP is too low, either there is too little volume (preload), contractility is decreased, or the SVR is too low (the patient's blood vessels are dilated). If the BP, CO, and RAP/PCWP are all low, the patient probably needs volume (see Table 3). Volume is generally replaced as needed with a colloid such as hetastarch; if the hematocrit is low, volume may be replaced with packed red blood cells. If the BP and CO are low but the PCWP is high, the patient may be experiencing decreased contractility, and inotropic support may be instituted with an agent such as dopamine or dobutamine. If the BP is low and the CO is adequate or elevated, the systemic vascular resistance may be low and the patient may need a constrictive agent such as phenylephrine (see Table 3). Low BP can be temporarily increased by turning off positive end expiratory pressure (to decrease intrathoracic pressure and augment preload) and by position changes. The patient should be placed in the supine position with legs elevated to allow the BP to increase until the cause of the low BP can be determined and corrective measures are taken. Although not universally utilized, some institutions continue to place patients in the Trendelenburg position. The Trendelenburg position can offer symptomatic relief from low BP, especially in the early postoperative phase, by shifting volume from the legs to the chest and increasing preload; however, the improvement it provides appears to be only temporary.15
|TABLE 3 Potential Treatment for Hemodynamic Changes After CABG|
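The branching logic described above (and summarized in Table 3) can also be sketched as a small decision function. This is a simplified illustration only: the "low/normal/high" categories and suggested responses paraphrase the text, are not institution-specific orders, and clinical judgment always governs the actual intervention.

def suggest_intervention(bp, co, wedge_pressure):
    """bp, co, and wedge_pressure are each 'low', 'normal', or 'high' relative to ordered parameters."""
    if bp == "low" and co == "low" and wedge_pressure == "low":
        return "Patient probably needs volume (e.g., a colloid, or packed red blood cells if the hematocrit is low)."
    if bp == "low" and co == "low" and wedge_pressure == "high":
        return "Contractility may be decreased; consider inotropic support such as dopamine or dobutamine."
    if bp == "low" and co in ("normal", "high"):
        return "SVR is likely low; consider a constrictive agent such as phenylephrine."
    if bp == "high":
        return "Consider a vasodilator such as nitroprusside or nitroglycerine, started slowly."
    return "Continue monitoring and reassess hemodynamic parameters."

print(suggest_intervention("low", "low", "low"))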
If the BP becomes too high, especially in the early postoperative period, the surgical anastomoses may become disrupted, which could cause significant intrathoracic bleeding, hemodynamic instability, poor tissue perfusion, and necessitate a return to the operating room. It is important for the nurse to carefully monitor the patient for high BP and quickly intervene per institution protocol. Nitroprusside, a vasodilator, is often administered to lower the BP to the ordered parameter. Nitroglycerine, a nitrate, may also be used to cause vasodilation and lower the BP (see Table 3). These medications should be started slowly so patient response can be evaluated. The patient must be monitored closely as the BP may drop as the patient's body temperature increases.
The nurse must rewarm the patient after surgery if hypothermia persists. The negative effects of hypothermia include depression of the myocardium, ventricular dysrhythmias, vasoconstriction, and depression of clotting factors (increasing the risk of bleeding postoperatively).13 Many surgeons attempt to achieve normothermia because of the deleterious effects of hypothermia. If the patient is hypothermic, rewarming may be accomplished by the use of warm blankets, warm humidified oxygen, convective air mattresses, and other individual institutional approaches.13 Vasoconstriction induced by hypothermia may increase BP. Because of the potential for issues with graft anastomoses and the importance of maintaining BP within the reference range, a vasodilator may be needed while the patient is rewarming. As normothermia is achieved, if the patient's systemic vascular resistance decreases significantly, additional intravenous fluids may need to be administered.
The nurse should carefully monitor the pulmonary artery pressures and the CO as well as the BP when interventions are instituted to assess the effect. Some references suggest that hemodynamic parameters be rechecked every 30 to 60 minutes after each intervention during the early postoperative period.14
It is important to maintain effective CO after open heart surgery to provide adequate tissue perfusion. Cardiac index (CI) can be decreased if the heart rate increases to the point of compromised ventricular filling, with a resultant decrease in stroke volume. Cardiac index can also be decreased with bradycardia. Cardiac index can be decreased if the SVR (afterload) is elevated, making it more difficult for the ventricles to eject the end diastolic volume of blood. One factor that can cause an elevation in afterload is surgically induced hypothermia leading to vasoconstriction. A decrease in myocardial contractility or circulating volume can further compromise CI. If the patient is hypothermic, this may result in myocardial depression, thus compromising contractility.13 After the cause of the decrease in the CO/CI is determined, management can be initiated. If the CO/CI is low and the PCWP is high, inotropic support is probably needed. If the CO/CI is low and the PCWP is low, volume is likely needed (see Table 3). If the SVR is elevated in the early postoperative period, it may be due to hypothermia or the patient may need volume.
It is easy to rely only on the values obtained with hemodynamic monitoring when assessing a patient. The nurse must also use effective clinical assessment skills. Peripheral perfusion assessment data are vitally important in the evaluation of effective CO.16 The nurse should regularly perform neurovascular assessments of the lower extremities to provide information about the effectiveness of CO.14
Dysrhythmias are common after CABG surgery. Constant assessment of the patient, as well as continuous monitoring of the cardiac rate and rhythm, is imperative. Ventricular dysrhythmias are more common in the early postoperative period, and supraventricular dysrhythmias are more likely 24 hours to 5 days postoperatively.17 The incidence of atrial fibrillation ranges from 10% to 65% depending on many factors, including patient history, preoperative medications, and type of surgery.18 Hypothermia, inhaled anesthetics, electrolyte disturbances (ie, hypocalcemia, hypercalcemia, hypomagnesemia, and hypokalemia), metabolic disturbances (such as acidosis), manual manipulation of the heart, and myocardial ischemia may be factors in postoperative dysrhythmias. Dysrhythmias can also be the result of an increase in catecholamine levels secondary to pain, anxiety, and inadequate sedation.17 Management depends on the type of dysrhythmia and the patient's clinical response. The nurse must treat the patient and not only the monitor. Effectiveness of BP and CO should be considered when evaluating dysrhythmias. Often, cardiac surgeons place epicardial wires on the atrium and/or the ventricle during the operation. Temporary pacing can be instituted to override a slow intrinsic rhythm so CI and BP can be maintained. Atropine may be given to increase the heart rate in the absence of epicardial pacing wires. Tachydysrhythmias are usually controlled pharmacologically. The specific medication utilized will depend on hospital protocols and physician preference. The critical care nurse should utilize standing orders in the institution as well as current advanced cardiac life support protocols.
Postoperative Management of Bleeding
The postoperative period may be complicated by excessive bleeding. Many factors should be considered when assessing the patient's potential for bleeding. Patients who were on anticoagulants and antiplatelet agents (including glycoprotein IIb/IIIa receptor antagonists such as abciximab) prior to surgery are at an increased risk of postoperative bleeding.19 The aorta and the atrium are cannulated during surgery. The grafts have proximal and distal anastomosis sites. Other potential sites for bleeding include the internal mammary site, the chest wall, and chest tube sites. Induced hypothermia, the use of the CPB machine, and the administration of heparin for anticoagulation can all contribute to postoperative bleeding. The nurse should be aware that heparin can be stored in adipose tissue and some patients may have an increase in bleeding 4 hours postoperatively depending on the body's adipose composition. Some surgeons utilize an intravenous infusion of aprotinin intraoperatively to minimize the risk of postoperative bleeding. This drug is a protease inhibitor that inhibits fibrinolysis.20 Aprotinin may also have some anti-inflammatory effects and therefore be beneficial to the patient after CABG.21
The nurse should monitor the patient for signs of bleeding from the chest tubes and the surgical sites as well as clinical signs of hypovolemia related to blood loss. Hemoglobin and hematocrit should be monitored at regular intervals during the postoperative period according to institution protocol. Sometimes the surgeon orders serial coagulation profiles for a patient at risk for bleeding. If bleeding is an issue, drugs such as protamine sulfate (to reverse the effects of heparin), the antifibrinolytic agent aminocaproic acid, or desmopressin (DDAVP) may be ordered.22 Blood products such as fresh frozen plasma and platelets may also be ordered.
When bleeding occurs there is potential for the blood to accumulate in the pericardium, and therefore, the nurse must be cognizant of the potential for cardiac tamponade. The clinical manifestations of cardiac tamponade include lack of chest tube drainage, decreased BP, narrowed pulse pressure, increased heart rate, jugular venous distention, elevated central venous pressure, and muffled heart sounds.13 Emergency reoperation would be required.
Postoperative Neurologic Management
Patients who require coronary artery bypass surgery are at an increased risk for neurologic complications. Stroke can be caused by hypoperfusion or an embolic event during or after surgery. Manipulation of the aorta has been implicated in embolic events.23 Other risk factors for stroke may include age, previous stroke, carotid bruits, and hypertension.24 The incidence of stroke is approximately 2.5%.23
The nurse should be particularly astute in neurologic assessment during the postoperative period. When the patient is admitted to the intensive care unit, he/she will likely be intubated and unconscious. The effects of the neuromuscular blocking agents will be apparent. Pupils should be assessed initially; however, normal size and reactivity may not return until agents utilized intraoperatively have been metabolized. Over the first few hours after surgery, the results of the neurologic assessment should improve gradually. By the time the patient is ready for extubation, he/she should follow commands and have equal movement and strength of the extremities, with neurologic function approaching the patient's normal. This is a difficult time for significant others because waiting through the awakening process can be anxiety-provoking. Patients and significant others are informed prior to surgery of the risk for stroke and want that risk to be definitively ruled out as soon as the patient returns to the intensive care unit. The nurse should provide needed comfort but not give false hope, as the neurologic status cannot be completely assessed until the patient is fully awake and extubated. At that time, the patient should be assessed for orientation to person, place, time, and circumstance. A motor and sensory assessment should also be performed. Normal findings are a good indication that an intraoperative stroke can be ruled out. Neurologic assessments must continue because the risk of stroke does not end with the operation.24
Postoperative Renal Management
There is a potential for renal dysfunction in the postoperative cardiac surgery patient. One reference suggests that the incidence is approximately 8%.1 Renal insufficiency may be related to advanced age, hypertension, diabetes, decreased function of the left ventricle, and length of time on the CPB.25 One indicator of effective CO is adequate renal perfusion as evidenced by urinary output of at least 0.5 mL/kg/h. The nurse must monitor the urinary output at least hourly during the early postoperative period. The urine should be assessed for color and characteristics as well as amount. Diuresis is likely in the postoperative period when renal function is adequate, as the fluids mobilize from the interstitial to the intravascular space. The patient's potassium level should be monitored at least every 4 to 6 hours for the first 24 hours, as potassium is lost with diuresis. Intravenous potassium replacement should be administered to keep the serum potassium levels within normal limits. The patient should be astutely monitored for cardiac dysrhythmias if the serum potassium level is abnormal. Other laboratory values that should be monitored at least daily are the blood urea nitrogen and serum creatinine.
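The urinary output benchmark mentioned above (at least 0.5 mL/kg/h) amounts to a single multiplication, shown here as a short, purely illustrative Python sketch with invented names and example numbers.

def adequate_urine_output(urine_ml_per_hour, weight_kg, threshold=0.5):
    """Return True if hourly urine output meets or exceeds 0.5 mL/kg/h."""
    return urine_ml_per_hour >= threshold * weight_kg

# An 80 kg patient needs at least 40 mL of urine per hour to meet the benchmark.
print(adequate_urine_output(urine_ml_per_hour=45, weight_kg=80))  # prints True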
Postoperative Gastrointestinal Management
Gastrointestinal complications range from 0.12% to 2%.26 Complications include peptic ulcer disease, perforated ulcer, pancreatitis, acute cholecystitis, bowel ischemia, diverticulitis, and liver dysfunction. Some risk factors for gastrointestinal dysfunction include age over 70, a history of gastrointestinal disease, a history of alcohol misuse, cigarette smoking, heart valve surgery, emergent operation, prolonged CPB, postoperative hemorrhage, use of vasopressors, and low postoperative CO.26 If the gastroepiploic artery is used as a conduit for bypass, this may also increase the risk of gastrointestinal dysfunction. Anesthetic agents, analgesics, and hypoperfusion of the gut during surgery can also contribute to gastrointestinal dysfunction. The nurse should monitor the patient for bowel sounds, abdominal distention, and nausea and vomiting. The intubated patient will have a nasogastric tube to low intermittent suction or Salem sump to continuous suction. Placement and patency should be assessed as well as amount, color, and characteristics of the drainage. Prior to extubation, if bowel sounds are present, the nasogastric tube will be discontinued and the nurse should continue to assess the patient for potential gastrointestinal disturbances. The nurse should administer antiemetic agents as ordered if the patient is nauseated. The comfort of the patient as well as the sterility of the sternal dressing must be maintained. Some surgeons order a histamine blocker to minimize acid secretion until normal dietary patterns are resumed. When the nasogastric tube is removed, the patient will be started on a clear liquid diet and this can be advanced as tolerated by the patient.
Postoperative Pain Management
Dependent upon surgical approach, the patient may have a median sternotomy incision, leg incision(s), and/or a radial incision. Manipulation of the chest cavity, use of retractors during surgery, and electrocautery may all contribute to postoperative pain.27 In addition, positioning on the operating room table and length of time of the surgery may also be factors in pain experienced postoperatively.
Poorly controlled pain can stimulate the sympathetic nervous system and lead to cardiovascular consequences. The heart rate and BP can increase and the blood vessels can constrict, causing an increase in the cardiac workload and myocardial oxygen demand.27 Effective pain control is essential for patient comfort, hemodynamic stability, and prevention of pulmonary complications.
Nurses must individualize pain assessment and control for each patient as responses vary among individuals.27 Opioid analgesics, positioning, mobilization, distraction, and relaxation techniques are among some of the methods of pain control. Keeping serum levels of opioid analgesics in the therapeutic range is beneficial. Nonsteroidal anti-inflammatory agents may be used in conjunction with opioid agents to control pain and minimize the amount of narcotic needed. Ketorolac is a nonsteroidal anti-inflammatory agent that can be administered intravenously in the early postoperative period while the patient is still intubated. The nurse must monitor renal status of patients taking ketorolac, and the drug may be discontinued if the serum creatinine is elevated. The patient is at an increased risk of gastrointestinal bleeding when a nonsteroidal anti-inflammatory agent is used. Pulmonary care is more effective for the patient when pain is effectively managed. Teaching the patient to splint the incision when coughing and moving improves pain control. The nurse should evaluate the effectiveness of pain management interventions regularly. Significant others are often concerned about the postoperative pain experienced by the patient. Explanations about interventions utilized and outcomes achieved can decrease anxiety.
Another source of pain for the patient after CABG is the removal of the chest tubes. This usually occurs 24 to 48 hours postoperatively when the amount and characteristics of chest tube drainage meet ordered parameters as long as there is no air leak noted in the water seal chamber. Pain medication should be administered prior to removal of chest tubes per institution protocol to minimize the trauma of the procedure.
Additional Postoperative Management
The incidence of infection of sternal and leg incisions after cardiac surgery is less than 3%.13 Risk factors for infection include diabetes, malnutrition, chronic disease, and emergent or prolonged surgery. Assessment for, and prevention of, infection is part of the nurse's role in the postoperative period. The patient should be assessed for local and systemic signs of infection. Postoperative antibiotics may be ordered. Dressings should be removed and incision care should be completed according to institution protocols. Control of the blood glucose level may help prevent infection. It is desirable to control blood glucose levels greater than 150 mg/dL with a continuous intravenous infusion of insulin rather than intermittent subcutaneous insulin injections. This practice is thought to be helpful in the prevention of deep sternal wound infection.1
Some surgeons order corticosteroids postoperatively. When used, these drugs are intended to minimize the potential risks of inflammation after heart surgery. Patients should be monitored for suppression of the immune system, as this can be an adverse effect of corticosteroid administration. Patients need to be taught how to slowly discontinue the medication after discharge per physician orders. The other potential effect of corticosteroid administration is an elevation in serum glucose levels. A sliding scale insulin order may be needed to maintain blood glucose levels within normal limits while the patient is in the hospital.
The nurse must intensively care for the patient in the early postoperative period. This intensive monitoring and postoperative discomfort can interfere with the patient's need for sleep. There is a potential for sleep disturbance as the patient is recovering from CABG. Lack of sleep may negatively affect postoperative outcomes.28 Organization of needed care and provision of time for uninterrupted sleep cycles is important for effective outcomes. Some of the postoperative confusion experienced by patients may be minimized and positive outcomes maximized when time for sleep is provided. Hospital routines and too many visits by well-meaning significant others may add to the sleep deprivation problem. Significant others should be able to spend time with the patient, but it is the role of the intensive care nurse to balance the need for visitation with the need for rest and sleep.
It can be frightening for significant others to visit the patient during the early postoperative period because of the monitoring equipment and appearance of their loved one. Explanations regarding the equipment and physical appearance may be helpful. Often significant others need to overcome fear of touching the patient postoperatively and receive reassurance from the professional nurse that no harm will come from the touch.
A compassionate, knowledgeable, and skilled nurse caring for the patient after open heart surgery is an asset in the achievement of positive outcomes for the patient and his/her significant others. The care of the CABG patient is intense, complex, and rewarding. The patient is admitted to the intensive care unit unconscious, intubated, and completely dependent on advanced technology as well as the expert care of the health team. Typically 24 to 48 hours after the surgery, the invasive lines have been discontinued, the patient no longer needs to be mechanically ventilated, organ system function is returning to normal, and the patient is now ready to work toward increasing independence. Cardiac surgery is not the cure for coronary artery disease. It gives the patient the opportunity to make needed lifestyle adjustments and achieve the highest degree of health possible. Nurses are a part of the team that makes this return to health a possibility for the patient. |
KEEP YIELD IN MIND WHEN MAKING TILLAGE DECISIONS
THE 2009 GROWING season was cool and for much of the province it was also wet. Under these slow growing conditions no-till soybean production can lag behind. The question is “how much?”
About two thirds of Ontario soybeans are grown under some form of minimal or no-till production system. Many research trials over the last 20 years have shown relatively small yield differences between the two systems. The yield difference has usually been about two bushels per acre in favour of tillage in Ontario.
LEFT SIDE: PRE-TILLAGE; RIGHT SIDE NO TILL
Because this yield advantage to tillage is so small, no-till is actually more economical once tillage costs are taken into account. We conducted seven no-till versus pre-tillage trials in southwestern Ontario in 2009. Despite huge visual differences in growth, yields were essentially the same for no-till versus tilled ground. It should be noted that the trials were conducted on well drained soils in southern counties.
On the other hand growers in lower Crop Heat Unit areas or in fields with poor drainage did see significant yield reductions in 2009. But even for many of them, reductions were below five bushels per acre. At the end of the day the provincial average yield for 2009 was 41 bushels per acre. The 10 year average is 38 bushels per acre.
Keep these numbers in mind when considering tillage for soybeans. Although visual differences can be great, most of the time the yield drag to no-till soybeans is very small on well drained soils. • |
- Fun workbooks teach children about a variety of creatures in nature, including birds, insects and animals.
- Ideal for use with children from ages 6 to 10 years.
- Includes 1 each of: 2963 Birds Build Nests, 2964 Insects Visit Flowers, 2965 Seeds Travels and 2966 Birds Use Their Bills.
- Also includes 1 each of: 2967 Animals Hatch From Eggs, 2968 Spiders Spin Silk, 2969 Animals Hide and 2970 Animals Grow New Parts.
- Also includes 1 each of: 2971 Animals Prepare for Winter, 2972 Insects Grow and Change, 2973 Animals Are Poisonous and 2974 Plants Eat Insects.
|brand name||Creative Teaching Press|
|educational content||science readers; nonfiction reading; life cycles; insects; animals; plants|
|grade||1st grade; 2nd grade; 3rd grade; 4th grade|
|total recycled content||0%|
|size||6" x 9"|
|age recommendation||6 - 10 years|
|manufacturer||Creative Teaching Press|
|postconsumer recycled content||0%|
|number of pages||16|
For over 400 years, more than 15 million men, women and children were the victims of the tragic transatlantic slave trade, one of the darkest chapters in human history.
Every year on 25 March, the International Day of Remembrance for the Victims of Slavery and the Transatlantic Slave Trade offers the opportunity to honour and remember those who suffered and died at the hands of the brutal slavery system. The International Day also aims at raising awareness about the dangers of racism and prejudice today.
In order to more permanently honour the victims, a memorial has been erected at United Nations Headquarters in New York. The unveiling took place on 25 March 2015. The winning design for the memorial, The Ark of Return by Rodney Leon, an American architect of Haitian descent, was selected through an international competition and announced in September 2013.
Slavery Remembrance Day
Tinea infections are commonly called ringworm because some may form a ring-like pattern on affected areas of the body. Tinea corporis, also known as ringworm of the body, tinea circinata, or simply as ringworm, is a surface (superficial) fungal infection of the skin. Ringworm may be passed to humans by direct contact with infected people, infected animals (such as kittens or puppies), contaminated objects (such as towels or locker room floors), or the soil.
There are several kinds of ringworm, including:
- Majocchi's granuloma, a deeper fungal infection of skin, hair, and hair follicles. It is most common in women who shave their legs.
- Tinea corporis gladiatorum, a special name given to ringworm spread by skin-to-skin contact between wrestlers.
- Tinea imbricata, a form of ringworm seen in Central and South America, Asia, and the South Pacific.
Who's At Risk
Ringworm may occur in people of all ages, of all races, and of both sexes.
Ringworm is most commonly seen in children. Other people who are more likely to develop ringworm include:
- Women of child-bearing age who come into contact with infected children.
- People who have another tinea infection elsewhere on their bodies: tinea capitis (scalp), tinea faciei (face), tinea barbae (beard area), tinea cruris (groin), tinea pedis (feet), or tinea unguium (fingernails or toenails).
- Athletes, especially those involved in contact sports.
- People in frequent contact with animals, especially cats, dogs, horses, and cattle.
- People with weakened immune systems.
- People who sweat heavily.
- People who live in warmer, more humid climates.
Signs and Symptoms
The most common locations for ringworm include:
- Trunk (chest, abdomen, back)
Ringworm appears as one or more red, scaly patches ranging in size from 1–10 cm. The border of the affected skin may be raised and may contain bumps, blisters, or scabs. Often, the central portion of the lesion is clear, leading to a ring-like shape and the descriptive name ringworm (a misnomer since the condition is not caused by a worm).
Ringworm may cause itching or burning, especially in people with weak immune systems.
If you suspect that your child has ringworm, you might try one of the following over-the-counter antifungal creams or lotions:
Apply the cream to each lesion and to the normal-appearing skin 2 cm beyond the border of the affected skin for at least 2 weeks until the areas are completely clear of lesions. Because ringworm is very contagious, have your child avoid contact sports until lesions have been treated for a minimum of 48 hours. Do not allow your child to share towels, hats, or clothing with others until the lesions are healed.
Since people often have tinea infections on more than one body part, examine your child for other ringworm infections, such as on the face (tinea faciei), in the groin (tinea cruris, jock itch), or on the feet (tinea pedis, athlete's foot).
Have any household pets evaluated by a veterinarian to make sure that they do not have a dermatophyte infection. If the veterinarian discovers an infection, be sure to have the animal treated.
When to Seek Medical Care
If large areas of the body are affected, or if the lesions do not improve after 1–2 weeks of applying over-the-counter antifungal creams, see your child's doctor for an evaluation.
Treatments Your Physician May Prescribe
In order to confirm the diagnosis of ringworm, your child's physician might scrape some surface skin material (scales) onto a slide and examine them under a microscope. This procedure, called a KOH (potassium hydroxide) preparation, allows the doctor to look for tell-tale signs of fungal infection.
Once the diagnosis of ringworm has been confirmed, the physician will probably start treatment with an antifungal medication. Most infections can be treated with topical creams and lotions, including:
Rarely, more extensive infections or those not improving with topical antifungal medications may require 3–4 weeks of treatment with oral antifungal pills or syrups, including:
The ringworm should go away within 4–6 weeks after using effective treatment.
Dry brush and scrub, as well as closed canopy forest in Madagascar.
Fruit, leaves, flowers, bark and sap from over 30 plant species.
These medium-sized lemurs move on all fours and spend more time on the ground than other lemurs.
Either a single baby or twins are born each year, after a gestation period of four months. The young are independent of their mothers within a year and may start to breed at 15 months.
The dry, scrubby forest where they live is being destroyed by slash and burn agriculture, charcoal production and mining for gemstones such as sapphires and other minerals.
Ring-tailed lemurs are found in all five of the protected areas in their range, as well as two private reserves. Research is being carried out on their populations to discover more about their behaviour, ecology and movements. There is a large captive breeding programme for this species with over 1,000 being kept in zoos around the world.
- Latin Name: Lemur catta
- Class: Mammals
- Order: Primates
- Family: Lemuridae
- Conservation status: Endangered
By Gary Alan Dorris
His life story is told in a generally chronological sequence of chapters, each focused on a period or particular event in Lincoln's life, from his early years to his unlikely rise to become President of the United States; the book also explores his close relationships with friends, his political career, his family, and the weighty decisions he faced during the Civil War.
Countless books and articles about Lincoln range from those which merely extol his virtues (and he had many) to those which try to "de-myth" his legacy by exaggerating his faults (and he had a few). The truth is that Lincoln's life defies simple characterizations. He had opposed President Polk's "unconstitutional use of power" during the Mexican War, yet Lincoln later assumed war powers beyond any previous President. He was called "Honest Abe," and even political opponents remarked that "his cards were always face-up," yet he once deliberately misled Congress. He agonized over the carnage inflicted on both sides of the war, yet continually ordered his Generals to "push the fight" to the Southern armies. To Lincoln, however, these actions were necessary to end the conflict and to achieve his overarching goal, the preservation of the Union. While Lincoln's personal and political philosophy toward slavery evolved over the years, he always believed secession was illegal and must be prohibited and the Union restored.
Mr. Dorris chose not to include a detailed account of the assassination conspiracy against Lincoln or the circumstances of his death, focusing instead on his life and how he lived it. The author assumes the role of a narrator and simply tells his rendition of the fascinating life story of Abraham Lincoln.
Read Online or Download Abraham Lincoln - An Uncommon, Common Man: A Narrative of His Life PDF
Best american history books
Dr. William Henry Mills, a fellow in the Royal College of Surgeons and the Royal College of Physicians in London, arrived in San Bernardino, California in February 1903. Recruited by Dr. George Rowell as a medical partner, Dr. Mills quickly found that surgical facilities in San Bernardino were woefully inadequate.
Buffalo, the county seat of Johnson County in northeastern Wyoming, began in 1878 as an army town adjacent to Fort McKinney (1877-1894). Since its foundation was laid, Buffalo has been witness to gold prospectors and settlers as a waypoint along the Bozeman Trail, nearby battles during the ensuing Indian Wars, and the cattle war of 1892.
Situated on the southern shores of Lake Erie, Cleveland was founded in 1796 by General Moses Cleaveland, an agent of the Connecticut Land Company surveying the Western Reserve. The modest frontier settlement became a village in 1815 and an incorporated city in 1836. By 1896, Cleveland boasted the Cuyahoga Building, the Soldiers and Sailors Monument, the Arcade, and the stately mansions of Euclid Avenue.
Fascinating insights into Billy the Kid's tragic journey. As the first in-depth fictional exploration of a timeless legend, here is the likely truth inside the mystery of the West's favorite outlaw. This is a beautiful compilation of finely wrought and skillfully thorough historical narration, as precise as it is pleasing.
- Woody Guthrie: Writing America's Songs (Routledge Historical Americans)
- Shaking the Nickel Bush
- Historic Photos of Gettysburg
- Holiday World (Images of America)
- Roosevelt Dam (Images of America)
Additional info for Abraham Lincoln - An Uncommon, Common Man: A Narrative of His Life
Abraham Lincoln - An Uncommon, Common Man: A Narrative of His Life by Gary Alan Dorris |
Introduction: A sunny day is a great day to learn something new and eat a delicious snack. Making a solar oven with a few household supplies can engage participants and foster better understanding of the sun and solar energy!
Supplies: 1 pizza delivery box, clear tape, plastic wrap, aluminum foil, black construction paper, newspaper, ruler, thermometer, plastic plate.
Objective: Teachable moments, education, project completion, food safety.
Description: Use a box knife to cut a flap in the lid of the pizza box. The flap should bend up to create an opening in the box top. Next, wrap foil around the bottom side of the flap and tape it down on the top to keep it in place. On the inside of the lid, tape a layer of plastic wrap to create an airtight barrier. Line the inside of the box’s bottom and sides with black construction paper and use rolled-up newspapers along the edges of the bottom to reinforce the oven’s insulation. Put food on a plastic plate (NOTE: for safety, use this oven only to cook items that are NOT raw), then place it under the lid. Open the flap and position the oven so the sun reflects off the open flap and into the plastic-covered opening. This is a great, creative way to make s’mores or other outdoor treats.
Binoculars might not function as expected if all the optical elements are not in line. In such instances, you may end up with a product that does not work properly. The procedure for aligning the lenses is the same across binocular brands.
Many people, however, are not aware of how to carry out this procedure. They would rather take their binoculars in for repairs, on the assumption that the instrument is damaged. That approach is costly in the long run, and hence the reason for this article, which carries all the information you need to know about adjusting your binoculars. Read on to learn how to collimate binoculars.
What is the meaning of collimating binoculars?
It is the process of aligning all the optical elements of a binocular along a common optical axis. This is typically done by adjusting the objective lenses and the prism tilt screws. Once collimation is achieved, you can see clearly through the binoculars, as there is nothing left to obstruct the view.
When the binoculars are out of collimation, the images become blurry or dim. In some cases, it might even become impossible to see through it.
Why do I see double images through the binoculars?
The objective lens might be out of alignment. That means the optics are not well collimated and are therefore letting in light irregularly. You should also check the prisms, as one of them could have fallen out of alignment. Readjust the prisms until you achieve a perfectly straight line.
Whenever you notice a double vision of the same object, the first place to consider rectifying is the prism. If that is not the case, then proceed to the objective lens. In some cases, it just needs some cleaning, and the vision restores to normalcy!
Disassembling the binoculars
Sometimes it might not be possible to adjust the prisms without dismantling the binoculars. To disassemble, turn the external guard ring gently in an anticlockwise direction and take it off. If it does not turn by hand, use either a strap wrench or an Allen wrench. The ring will come off freely if the counterclockwise motion is successful.
When putting it back, repeat the same process but in a clockwise direction. Do not tighten it so much that it becomes difficult to open the next time you want to make an adjustment.
What can make it not to focus?
The focusing ring could be out of position and need adjustment. Look through the binoculars with your left eye to find out whether they are focusing or not. The ring should always be at the center of the binoculars; otherwise, focusing may become impossible. The focusing ring brings the object that you are looking at into sharp focus, while the diopter on the right eyepiece compensates for the difference between the left and the right eye.
How to clean binoculars?
If you do not clean your binoculars, they might not focus well, especially if there is dirt on the glass. Brush off or blow away the loose particles on the lenses with a soft cloth or a brush. Do not use hard materials, as they might damage the binoculars.
Spray the lens cloth with a thinner or a cleaning solution and use it to wipe off the remaining dirt. Take great care not to spray the binoculars directly, because the solution may corrode them or erode the markings on the surface.
You can always use more than one cloth while cleaning. One is for removing the tough stains and dust, and the second one can serve for rinsing.
How to adjust the diopter?
Look through the camera's viewfinder as you turn the little knob next to it. Continue moving the knob until the image becomes sharp and clear. You can adjust it in either direction: turning it to the left makes a negative adjustment, and turning it to the right makes a positive one.
Cleaning the interior of the binoculars
The interior does not get dirty often, but it still needs to be cleaned periodically. If you let dirt build up, the binoculars might not focus in the long run. To clean the interior, get a suitable cleaning solution and a clean, soft cloth that will not damage the lens. Take off the bottom plate and clean the inside of the objective lens.
Some binoculars have tiny screws. If yours do, you must first remove them before opening the interior for cleaning. Remember to reassemble the binoculars exactly as they were before so that they function as expected.
Can the diopter affect focus?
No, it does not affect focus. It only adjusts the clarity of the image reflected from the prism and does not focus the image itself.
The diopter balances the image quality between the left and the right eye hence producing an ultra-clear output.
How to choose a camera diopter?
A diopter plays a role when collimating your binoculars. It is, therefore, wise to choose the appropriate one for your product.
The procedure is to measure the distance in meters between the lens and the point where you get the best focus. Then divide 1 by that distance; the result is the diopter value of the lens. If, for example, your lens focuses the object at 2.5 m, then its power is +0.4 diopters. To obtain that value, divide one by the distance, as illustrated below:
1 / 2.5 m = 0.4 diopters.
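In other words, the diopter value is just the reciprocal of the best-focus distance expressed in meters. A tiny Python sketch of the same arithmetic, using the article's example figures, might look like this:

def diopter_value(focus_distance_m):
    """Return the lens power in diopters for a best-focus distance given in meters."""
    return 1.0 / focus_distance_m

print(diopter_value(2.5))  # 0.4, matching the +0.4 diopter example above
print(diopter_value(1.0))  # 1.0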
How a diopter works?
The diopter is a unit of measurement of optical power for binoculars and other optical devices, and it is useful when collimating binoculars. It applies to lenses and curved mirrors and is equal to the reciprocal of the focal length, so its unit is the inverse meter (m⁻¹).
Frequently Asked Questions (FAQs).
How do you know if binoculars are out of collimation?
You would be able to determine by observing a bright star and then defocusing on the right-hand eyepiece. If there is any displacement of the object from the center, you would need to adjust the binoculars to correct it.
Why do I see double through my binoculars?
It happens when the objective lens drifts away from the center of the binocular. The prism also might have fallen out of the adjustment!
How to adjust binoculars with double vision?
The procedure is similar to the one above. However, you may need a wide field of view, such as a football pitch. Look through the binoculars, focus on the objects, and adjust appropriately.
Collimation is a necessary process for binoculars. With constant use, the prisms tend to drift out of position. Focusing is therefore impaired, and you will need to make some adjustments to correct the error.
A team at the Massachusetts Institute of Technology (MIT) has developed a way to make cement without heating, potentially eliminating the carbon produced in the process.
Portland cement, the most widely used variety, is made by grinding up limestone and then cooking it with sand and clay at temperatures of up to 1,500°C, which releases carbon dioxide both from the fuel burned to supply the heat and from the limestone itself.
Altogether, this accounts for an estimated 8% of global carbon emissions, and the development of low-carbon production is presently one of the most urgent challenges facing the industry.
The MIT team’s idea is to use electrochemical processes rather than heat. Limestone is first dissolved in acid, then placed in a tank with an electric current passing through it.
This splits water molecules into oxygen and hydrogen, creates an acid at the positive electrode and an alkali deposit of calcium hydroxide at the negative. This deposit, which forms in flakes, can then be used to produce Portland cement.
The findings were published in the Proceedings of the National Academy of Science, in a paper by Yet-Ming Chiang, professor of materials science, and researchers Leah Ellis, Andres Badel and others.
Chiang commented in MIT News: “About 1kg of carbon dioxide is released for 1kg of cement made. That adds up to 3-4 gigatons of cement and of carbon dioxide produced annually, and the number of buildings worldwide is expected to double by 2060, which is equivalent to building one new New York City every 30 days.”
So far, the team has demonstrated the process at laboratory scale, with the process looking “a bit like shaking a snow-globe” as lime precipitates out of the solution.
The researchers said it could be scaled up to an industrial process, but warned it would be difficult to change such a basic process in such a large industry.
Researcher Leah Ellis said that a typical cement plant produced about 700,000 tons a year. “How do you penetrate an industry like that and get a foot in the door?”
Chiang says the team wants to “get people in the electrochemical sector to start thinking more about this” and come up with new ideas. “It’s an important first step, but not yet a fully developed solution,” he adds.
Image: The MIT experiment, showing the separation of acid and alkali liquids (Felice Frankel/MIT News) |
Taxonomy may sound complicated, but it's a straightforward way to order and classify organisms. The way organisms are named and arranged explains their biological relationship to each other. It is also commonly known as binomial nomenclature, and it simply means a naming system that assigns each species two names: the genus and species.
The individual who designed this formal system of labeling was Swedish botanist Carolus Linnaeus. In the early 1700s, Linnaeus attempted to come up with formal names for everything in nature and gave every organism a two-part name. He originally included minerals in his system of classification, but it has since been dropped.
Biology instructor Jason Crean explained, “Linnaean mineral classification was very artificial because it was difficult at that time to analyze minerals with anything more than the naked eye. That system would have perhaps worked if Linnaeus had knowledge of the chemical elements that composed the minerals he studied. His classification system was built simply on visual physical characteristics, which works better for living organisms.”
Two-Word Naming System
There is an advantage to this naming and classification because it is easy to identify any species with just two words. These two words can be used all over the world in any language. When a species is referred to by its Latin or scientific name, a person will know precisely what organism it refers to, and this aids in avoiding confusion. For instance, there are many common names for the species Psittacus erithacus erithacus: Congo African grey, grey parrot or red-tailed parrot. Using the scientific name removes all doubt as to what species and subspecies is being discussed.
Originally, the two-word naming system was referred to as the organism's Latin name, but it has become preferable to refer to it as the scientific name. This is because, in the ongoing process of naming organisms and reclassifying others, not all names have a Latin origin. The people who work in this field of naming organisms and maintaining the clarity of these names are called taxonomists.
Since Linnaeus set up the original two-part naming system, a seven-category system was created, arranged so that each category includes the smaller, more specific categories below it. They are: kingdom, phylum, class, order, family, genus and species. A species can be broken down even further into subspecies if some of its members show even smaller differences, but there will always be either two or more subspecies or none at all. The differences between subspecies are usually attributed to their having evolved separately as a result of geographical distribution.
The groups at the top are broad with many members, and the categories get narrower as you work down the list. Each classification gets smaller, further defining each set of organisms until you have only the two-part scientific name for any particular organism. The position of each organism in a category denotes its relationship to other organisms. For instance, the largest group is the kingdom. The kingdom level identifies whether an organism is an animal or belongs to one of the other kingdoms, including plants, fungi, bacteria (single-celled organisms that don't have a nucleus) and protists (single-celled organisms that have a nucleus).
The next and narrower group or rank is the phylum. There are 36 animal phyla, but only nine include more than 96 percent of animal species.
Birds are in the kingdom Animalia and belong to the phylum Chordata, which designates them as being vertebrates. The class Aves identifies them as being birds and having feathers. From there on, birds are classified according to various characteristics. The chicken, while it belongs to the class Aves, is in the next classification down, in the order called Galliformes where it shares this classification with similar species, such as the turkey, grouse and quail. This is where these species part company. While they share some similar characteristics, they are divided further into another defining group called the family. The chickens are in the family Phasianidae, which include pheasants, partridge and junglefowl due to some similarities they have with these other species. But, again, the narrowing of the species types continues and they are classified into a smaller group called the genus. The term genus is derived from the same Latin word that means kind, sort, class and category. Chickens belong to the genus Gallus, because it is thought that they are descendants of the red junglefowl who also belong in Gallus.
Finally, the chicken has come home to its own species, Gallus gallus, although it still shares that species with junglefowl. The next step was to give the chicken its own home in the subspecies, Gallus gallus domesticus, which indicates the domestication of this bird.
What’s A Subspecies?
What exactly is a subspecies? A subspecies is one of two or more distinct groups within a species. They generally share many of the same characteristics but have slight differences. Members of different subspecies of the same species can interbreed and produce fertile offspring, but they usually don't do so very often because of differences in breeding season, geographic separation or a myriad of other factors.
The term nominate subspecies simply means that the subspecies shares the same name as the species and can be identified by the repetition of the species name, such as the Congo African grey's name, Psittacus erithacus erithacus.
Sometimes there is question as to whether a member is a full species or a subspecies. In these cases where there is doubt or disagreement, the species name is usually written in parentheses.
There are some simple rules when using these terms. When referring to a specific species, the genus name is used first and is capitalized, followed by the species name (which is not capitalized) and is called the specific descriptor or specific name. In print, species names are usually in italics and if it is handwritten, each word is underlined individually.
There are certain customs in naming each organism, but it is generally accepted that the person who discovers it has the privilege of naming it. The name can come from anywhere. Latin or Greek terms are commonly used in the two-part name, but sometimes inside jokes and puns are used. There are numerous references in these species names to favorite books, people, sport figures, actors and film makers.
The Far Side cartoonist and creator Gary Larson has three species named after him, including the beetle Garylarsonus and the Serratoterga larsoni butterfly. There is a species of ant named after the actor Harrison Ford as a way of honoring his work in conservation; it's called Pheidole harrisonfordi. Mastophora dizzydeani (Eberhard) is a spider named after the famous baseball player Dizzy Dean; this spider uses a sticky ball on the end of a thread to catch its prey. If you were to translate the name of the dinosaur Dracorex hogwartsia into English, you would get “Dragon King of Hogwarts.”
A clam taxonomist waited many years for the opportunity to name a clam from the genus, Abra. His choice? Abra cadabra, of course! Apparently even taxonomists have a sense of humor!
Scientific Classification Using the Congo African Grey As An Example
Kingdom: Animalia
This denotes that the Congo African grey is an animal as opposed to a plant or a member of another kingdom.
Phylum: Chordata
This explains that the Congo African grey belongs to the group that includes vertebrates.
Class: Aves
This states that the Congo African grey is a bird.
Order: Psittaciformes
This notes what type of bird the Congo African grey is. The Congo African grey is a parrot.
Family: Psittacidae
This defines the Congo African grey as a “true parrot.” There is some controversy about the term true parrot, as there is disagreement over the inclusion of cockatoos in this family. There are some distinct anatomical differences between “true parrots” and members of the Cacatuidae or cockatoo family. The cockatoo's movable head crest, a distinctly different layout of the carotid arteries, and the presence of a gall bladder are just a few of the unmistakable dissimilarities between cockatoos and other parrots.
Genus: Psittacus
This means that this bird is a grey parrot.
Species: P. erithacus
This narrows this bird down to a large grey parrot with a red-colored tail and commonly found in Africa. The word erithacus comes from the Greek word erythro, which means red.
Subspecies: P. erithacus erithacus
The Congo African grey now has its own place as the large grey bird with a bright red tail and a black beak.
There is another common subspecies, the Psittacus erithacus timneh, or the Timneh African grey. This subspecies is distinct from the Congo African grey with its smaller size, maroon-colored tail, and horn-colored beak. As with many species, there are some aviculturalists who recognize a third, and sometimes even a fourth subspecies, but they have not yet been proven to be distinct in scientific studies. |
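Because the ranks nest from broadest to narrowest, the Congo African grey example above can be pictured as an ordered mapping. The Python sketch below is only an illustration of that hierarchy; the rank values simply restate the classification given in the article.

from collections import OrderedDict

congo_african_grey = OrderedDict([
    ("kingdom",    "Animalia"),
    ("phylum",     "Chordata"),
    ("class",      "Aves"),
    ("order",      "Psittaciformes"),
    ("family",     "Psittacidae"),
    ("genus",      "Psittacus"),
    ("species",    "Psittacus erithacus"),
    ("subspecies", "Psittacus erithacus erithacus"),
])

# Iterating walks down the hierarchy from the broadest category to the narrowest.
for rank, name in congo_african_grey.items():
    print(f"{rank}: {name}")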
Just over a month after launch, Jason-3, a U.S.-European oceanography satellite mission with NASA participation, has produced its first complete science map of global sea surface height, capturing the current signal of the 2015-16 El Niño.
The map was generated from the first 10 days of data collected once Jason-3 reached its operational orbit of 1,336 kilometers on Feb. 12. It shows the continuing evolution of the ongoing El Niño event that began early last year.
After peaking in January, the high sea levels in the eastern Pacific are now beginning to shrink.
Launched Jan. 17 from California’s Vandenberg Air Force Base, Jason-3 is operated by the National Oceanic and Atmospheric Administration (NOAA) in partnership with NASA, the French Space Agency Centre National d’Etudes Spatiales (CNES) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT).
Its nominal three-year mission will continue nearly a quarter-century record of monitoring changes in global sea level. These measurements of ocean surface topography are used by scientists to help calculate the speed and direction of ocean surface currents and to gauge the distribution of solar energy stored in the ocean.
Information from Jason-3 will be used to monitor climate change and track phenomena like El Niño. It will also enable more accurate weather, ocean and climate forecasts, including helping global weather and environmental agencies more accurately forecast the strength of tropical cyclones.
Jason-3 data will also be used for other scientific, commercial and operational applications, including monitoring of deep-ocean waves; forecasts of surface waves for offshore operators; forecasts of currents for commercial shipping and ship routing; coastal forecasts to respond to environmental challenges like oil spills and harmful algal blooms; coastal modeling crucial for marine mammal and coral reef research.
“We are very happy to have been able to deploy so quickly the Jason-3 satellite on its orbit, just behind Jason-2, allowing us to begin the mission product comparison with Jason-2 so easily,” said Gérard Zaouche, CNES project manager.
“The performances of this new mission are already very promising. Thanks to the good behavior of the instruments, the satellite and all the elements of the system, users will be able to benefit soon from this new high-accuracy mission.”
That record began with the 1992 launch of the NASA/CNES TOPEX/Poseidon mission (1992-2006) and was continued by Jason-1 (2001-2013); and Jason-2, launched in 2008 and still in operation.
Data from Jason-3’s predecessor missions show that mean sea level has been rising by about 0.12 inches (3 millimeters) a year since 1993.
Over the past several weeks, mission controllers activated and checked out Jason-3’s systems, instruments and ground segment, all of which are functioning properly.
They also maneuvered Jason-3 into its operational orbit, where it now flies in formation with Jason-2 in the same orbit, approximately 80 seconds apart.
The two satellites will make nearly simultaneous measurements over the mission’s six-month checkout phase to allow scientists to precisely calibrate Jason-3’s instruments.
Remko Scharroo, Remote Sensing Scientist at EUMETSAT, said: “Jason-3 is continuing the climate data record of sea level change as measured by altimeters going back to 1992. The Jason missions have become the reference for all satellite altimeters.
“Until the summer, Jason-2 and Jason-3 overfly the same spot of ocean just 80 seconds apart. This allows us to cross-calibrate those missions with extreme precision of less than one millimeter of sea level, thus ensuring a consistent time series.
“With the Sentinel-3 just launched as well, one of our first efforts during the commissioning of the Sentinel-3 SRAL altimeter will be to calibrate it against the Jason-2 and -3 missions.
“Taken together, these missions will help us not only to monitor the large-scale changes of the ocean but also those at smaller scales.
“The myriad of benefits of Jason-3 include near real-time applications such as hurricane forecasting, monitoring of El Niño, and modeling of ocean currents. And also societal benefits for the long term, such as the monitoring of sea level rise.”
Once Jason-3 is fully calibrated and validated, it will begin full science operations, precisely measuring the height of 95 percent of the world’s ice-free ocean every 10 days and providing oceanographic products to users around the world.
Jason-2 will then be moved into a new orbit, with ground tracks that lie halfway between those of Jason-3.
This move will double coverage of the global ocean and improve data resolution for both missions. This tandem mission will improve our understanding of ocean currents and eddies and provide better information for forecasting them throughout the global oceans.
EUMETSAT, CNES and NOAA will process data from Jason-3, with EUMETSAT being responsible for data services to users of the EUMETSAT and EU Member States, on behalf of the EU Copernicus Programme.
Data access in Europe will be secured via the multi-mission infrastructure available at EUMETSAT and CNES, including EUMETSAT’s EUMETCast real-time data dissemination system, Earth Observation Portal and archives, as well as the CNES/AVISO data system.
Jason-3 is the result of an international partnership between EUMETSAT, the French Space Agency (CNES), the US National Oceanic and Atmospheric Administration (NOAA), the US National Aeronautics and Space Administration (NASA), and the European Union, which funds European contributions to Jason-3 operations as part of its Copernicus Programme.
Within Copernicus, Jason-3 is the reference mission for cross-calibrating Sentinel-3 observations of sea surface height, and the precursor to the future cooperative Sentinel-6/Jason-CS mission, also implemented in partnership between Europe and the United States.
This article has been written by early years consultant Anne Rodgers, from ATR Consultancy.
5 Benefits of Outdoor Learning in Early Years
The benefits of learning outside the classroom are endless. Being outside allows children to express themselves freely and, unlike an indoor classroom, there aren’t any space constraints, meaning children can jump, shout and explore to their hearts’ content. The sense of freedom playing outdoors brings is fantastic for a child’s development, both physically and mentally.
The importance of outside play in the early years can’t be overstated, and below are just some of the many benefits it offers to children:
Encourages an Active Lifestyle
Children who learn to play outdoors are much more likely to continue to enjoy outdoor activities such as walking, running and cycling as they get older. Given the number of gadgets and new technology available to us all, outdoor play is an extremely important factor in combatting an increasingly sedentary lifestyle.
Appreciation of Nature and the Environment
Learning in an outdoor environment allows children to interact with the elements around us and helps them to gain an understanding of the world we live in. They can experience animals in their own surroundings and learn about their habitats and lifecycles.
Develops Social Skills
Indoor spaces can often feel overcrowded to children and, naturally, they may feel intimidated in this type of environment. More space outdoors can help children to join in and ‘come out of their shells’. Giving children outdoor learning experiences offers them a chance to talk about what they have done with their friends, teachers and parents.
The extra space offered by being outdoors will give children the sense of freedom to make discoveries by themselves. They can develop their own ideas or create games and activities to take part in with their friends without feeling like they’re being directly supervised. They’ll begin to understand what they can do by themselves and develop a ‘can do‘ attitude, which will act as a solid foundation for future learning.
Being outdoors provides children with more opportunities to experience risk-taking. They have the chance to take part in tasks on a much bigger scale and complete them in ways they might not when they’re indoors. They can learn to make calculated decisions such as ‘should I jump off this log?’ or ‘can I climb this tree?’
Outdoor learning resources don’t have to be expensive. You can utilise objects you’ll find outside such as logs, tree stumps and sticks. Add in some fabric or an old sheet and you can create a simple den which will provide hours of play for children.
Need some outdoor learning inspiration? Try our Top 10 Sand and Water Play Activities! |
One recent afternoon at Bluff Plantation, residents gathered to create and paint clay pumpkins, a craft that would look lovely as part of anyone’s fall décor.
But the art is about more than making knickknacks. During art therapy, Bluff Plantation residents express themselves through color, design and creativity.
The purpose is to express thoughts, feelings and emotions that they may not initially be able to put into words. Art therapy helps turn the confusion and chaos in their minds into a meaningful discussion.
Art therapy is a type of expressive group therapy, in which patients use creativity to convey emotions and experiences they may have difficulty expressing in conversation. Other forms of expressive group therapy include writing, dancing, music and acting.
Research indicates art therapy may help patients process and resolve past trauma that may have contributed to their addiction, according to the Substance Abuse and Mental Health Services Administration. Research also suggests art therapy can help foster self-awareness, improve self-control and self-esteem, and develop social skills.
In the recent activity, patients were asked to create a pumpkin depicting their feelings about themselves – either their “current self” or their “ideal self.” As they work on their craft, patients may talk about what they’re creating, and why. In reality, it’s themselves that they’re discussing. It’s a way to get meaningful conversation started. Letting them talk about their pumpkin makes it easier for them to open up and share their thought processes.
In addition to giving patients an alternative way to share their feelings, art therapy can promote relaxation, and instill a sense of pride and happiness in creating something beautiful, personal and worth holding on to.
Those suffering from addiction often crave instant gratification, and this exercise also teaches delayed satisfaction, because the artwork is taken away to be fired in a kiln. Processing those feelings both when the artwork leaves and when it comes back is powerful.
Climate scientist Kerry Emanuel of MIT calls Hurricane Sandy a hybrid storm, a rare type that scientists don’t know much about. He says its damaging rainfall is the sort of thing we’ll see more of in the future due to climate change.
Lisa Palmer: Some scientists say Sandy’s enormous size is not related to climate change. Others say that all storms now have a global warming component because climate change has altered the background state. What does the science say?
Kerry Emanuel: It is correct to say that in no individual [weather] event can you really make an attribution to anything, whether it is climate change or El Nino or your grandmother had her tooth pulled this morning. You just can’t do it for a single event. It is just the nature of the game. Now, Sandy is an example of what we call a hybrid storm. It works on some of the same principles as the way hurricanes work but it also works on the same principles as winter storms work. Hurricanes and winter storms are powered by completely different energy sources. The hurricane is powered by the evaporation of sea water. Winter storms are powered by horizontal temperature contrasts in the atmosphere. So hybrid storms are able to tap into both energy sources. That’s why they can be so powerful.
LP: What do we know about climate change and hybrid events?
KE: My profession has not compiled a good climatology of hybrid events. We have fantastic climatology of hurricanes, but we don’t have a good climatology of hybrid events. It is really because we haven’t done our homework. We don’t have very good theoretical or modeling guidance on how hybrid storms might be expected to change with climate. So this is a fancy way of saying my profession doesn’t know how hybrid storms will respond to climate. I feel strongly about that. I think that anyone who says we do know that is not giving you a straight answer. We don’t know. Which is not to say that they are not going to be influenced by climate, it’s really to say honestly we don’t know. We haven’t studied them enough. It’s not because we can’t know, it is just that we don’t know.
LP: Is hurricane season going to last longer with climate change?
KE: No, I don’t think so. I mean there are indications in both directions, but nothing I’ve looked at shows major shifts in the season of hurricanes. It doesn’t mean that it isn’t going to change, but the best estimates we are making are only slight changes in the seasonality of storms.
LP: What is the biggest climate change-related factor with storms like Sandy?
KE: With Sandy, a big factor is the coastal waters. For whatever reason, coastal waters are warmer than normal this year. That means that there is more water vapor in the atmosphere. Sandy will certainly produce more rain than if we didn’t have these warm waters near the shore. So you can say that. One of the very definite predictions of climate research is that all storms, regardless of exactly what kind of storm, should rain more going forward because there is just more water vapor in the atmosphere when it gets warmer. And that’s a big deal because of freshwater floods. The second greatest hurricane disaster in the whole Western Hemisphere was the hurricane of 1998 [Hurricane Mitch], and that was all freshwater flooding. It wasn’t wind or even storm surge that caused damage. It was 11,000 people taken out by flash floods in Central America. Don’t underestimate the rain part of it. We think of hurricanes as wind storms and maybe surges, but the rain is a big deal. |
Cypress wood lumber comes from a conifer and is technically a softwood, commonly used for exterior construction, docks, boatbuilding, interior trim, and veneer. Cypress is a popular choice in construction applications where decay resistance is needed. Color tends to be a light, yellowish brown; the sapwood is nearly white. Some cypress lumber has scattered pockets of darker wood that have been attacked by fungi, which is sometimes called pecky cypress.
Indoor plants provide extra color, warmth, and texture to your home. Not only will they keep you engaged with gardening all-year round, but they also improve air quality indoors. While many indoor plants won’t give you a hard time growing them, they must receive proper care for them to thrive.
This is because there is a difference between growing plants in ideal conditions (as in a greenhouse) and growing them indoors. To grow plants indoors successfully, you must make a few adjustments.
For example, proper watering and the roles temperature and humidity play are things to keep in mind for your indoor plant care. You must also not forget appropriate lighting for your indoor plants. The ultimate trick is to imitate the conditions of the plant’s natural habitat. Grow lights are what you need to achieve this.
Here’s an explanation of how grow lights help your indoor plants grow healthy.
The Workings of Grow Lights
Plants thrive and stay healthy through the chemical process called photosynthesis. This process depends on chlorophyll, the closely related green pigments that capture light energy and give the plant its green color. When you have chlorophyll, water, nutrition, and the right amount of light, you will have no problem growing your plants.
But inside your home, there often isn’t sufficient light to grow plants; this is where grow lights enter the equation.
Grow lights mimic daylight so your indoor plants will grow. They are essential for timing and forcing seedlings, growing plants in the winter, for commercial hydroponic plant production, and more.
The Best Bulbs for Grow Lights
To be sure, almost any light can technically work as a grow light. But incandescent and halogen bulbs run too hot to keep your plants safe. It is therefore advisable to use LEDs and CFLs to grow your indoor plants, because they run cool, are energy-efficient, and provide a broad spectrum of light.
Plants mainly need light in the red and blue wavelengths, and they appreciate a full-spectrum light bulb for their growth and development. As such, if you cast a full-spectrum light on them, they will grow thick and healthy.
The good thing is that LEDs and CFLs are popular in the market nowadays and can fit into regular lighting fixtures.
High-End Grow Lights
There are also high-end grow lights with designer lighting for growing your plants indoors. These lights, specifically designed for nurseries housing a lot of plants, are bulky and expensive. They cast a very bright light and are not known for energy efficiency.
But if you want your indoor plants to grow healthy, go for high-end grow lights!
Place Your Grow Lights Properly
The right placement of your grow lights is essential. For example, if you place them too close to the plant, you risk exposing it to light that is too intense, which is certainly not good. You need to be creative and resourceful when it comes to positioning your grow lights.
Turn Them Off at The Right Time
You need to turn off your grow lights for at least eight hours a day unless your plants have special needs. You can switch them manually, or you can use a timer. Timers are affordable and easy to use, and they give you total control of your grow lights.
Growing plants indoors is a fun and exciting hobby, and there are a lot of benefits you can glean from it, such as improving the air quality of your home. It is therefore important to take proper care of indoor plants so that they grow healthy, and one way to do that is to provide them with the right amount and type of artificial lighting.
© 2012 – Routledge
280 pages | 10 B/W Illus.
Now in its third edition, Shakespeare: The Basics is an insightful and informative introduction to the work of William Shakespeare. Exploring all aspects of Shakespeare’s plays, including the language, cultural contexts and modern interpretations, this text looks at how a range of plays from across the genres have been understood. This edition has been updated throughout.
With fully updated further reading throughout and a wide range of case studies and examples, this text is essential reading for all those studying Shakespeare’s work.
Introduction
Part I: Understanding the Text
1. Shakespeare’s Language
2. Shakespeare’s Theatre
3. Shakespeare on Stage
4. Shakespeare on Film: How do you film Shakespeare?
Part II: The Genres
5. Shakespeare’s Genres
6. Understanding Comedy: The Taming of the Shrew, The Merchant of Venice, Measure for Measure, As You Like It and Twelfth Night
7. Understanding History: King Richard II, King Henry IV Part 1, King Henry V and King Richard III
8. Understanding Tragedy: Hamlet, King Lear, Macbeth and Othello
9. Understanding Romance: The Winter’s Tale and The Tempest
Conclusion: The Future of Shakespeare Studies
Appendix: Chronology
Glossary
The Basics is a highly successful series of accessible guidebooks which provide an overview of the fundamental principles of a subject area in a jargon-free and undaunting format.
Intended for students approaching a subject for the first time, the books both introduce the essentials of a subject and provide an ideal springboard for further study. With over 50 titles spanning subjects from Artificial Intelligence to Women’s Studies, The Basics are an ideal starting point for students seeking to understand a subject area.
Each text comes with recommendations for further study and gradually introduces the complexities and nuances within a subject. |
The United States has less than 5 percent of the world’s population, yet nearly 25 percent of its prisoners. Mass incarceration has crushing consequences — racial, economic, social — and it doesn’t make us safer. The Brennan Center creates innovative solutions, driven by data, to end mass incarceration.
Mass incarceration rips apart families and communities, disproportionately hurts people of color, and costs taxpayers $260 billion a year. At the same time, crime continues to drop to 30-year lows — and harsh punishments aren’t the reason. The Brennan Center works to expose the huge social and economic costs of mass incarceration. We debunk false claims about rising crime. We fight for reforms to sentencing and bail. And we develop transformative legislative proposals like the Reverse Mass Incarceration Act.
Fighting fear with facts.
Some politicians would have us believe crime rates are soaring. They’re wrong. Brennan Center’s research shows crime rates in America’s 30 largest cities remain near historic lows – and that now is the time to end mass incarceration and policies that unfairly target immigrants and communities of color. |
The Broad Categories of Computer Networking
Computer networking has allowed efficient communication between various computer systems, for example computers, servers, mainframes and peripheral devices like printers and scanners. Once a computer becomes part of a network, you’ll be able to share information and data easily and with minimum hassle. However, this data sharing has given rise to a host of security issues, and computer network security is therefore among the most discussed topics these days. In truth, the computers in a network are almost always at risk of unauthorized access from hackers. A prime example is the Internet, where insufficient security measures can result in your valuable and highly confidential information being stolen.
- Technology is improving all the time, and simply put, this will not stop people from trying to gain access to computers
- Whether we like it or not, they are always going to be there, and as technology continually improves, so does the ability of these bad actors to get around the barriers and access your personal computer where it’s least detected
- Most households possess a computer, and I’m sure we don’t want anyone helping themselves to information we don’t want them to have
– It’s probably a …Continue reading |
Kerala has two geographical 'faces'. On one side, the State is bordered by a long coastline, and on the other it is covered by the relief of the Western Ghats, mountains that rise about 3,000m high. Wedged in between, the valleys host rice fields and tea, coffee, pepper, teak and bamboo plantations. In the centre, the city of Munnar and its surrounding areas are the commercial hub of Kerala's tea production.
Kerala's beaches are renowned for their stunning beauty. Among the most popular ones are Kovalam and Varkala. Others that also attract a lot of crowds are Thanagasseri, Cheria, Tanur, Beypore and Kappad. This last one, located near Calicut (Kozhikode), is admittedly small but no less important than the others from a historical point of view. Indeed, it was at Kappad that Vasco da Gama landed with 170 men in three ships on 27 May 1498.
Wooden sculpture is the state's main art form, where artists use all their know-how to create extremely impressive pieces out of the most modest of materials.
Kathakali is a style of dance more than 300 years old, which is only practised in Kerala. It combines a mix of different influences ranging from opera and ballet to masquerade and pantomime, and is a mixture of colours, movements, music, drama and expressions.
Kerala has also given birth to various styles of music, like the Panchavadyam, the Nadanpattu, the Omanathinkal Kidavo, among several others.
Keralan cuisine is known for its many varieties of crepes and steamed rice cakes.
Kerala also has several National Parks, which are mainly former private hunting reserves of the Indian and British aristocracy. The best time to visit these reserves is at sunrise or sunset, when the plants, animals and their surrounding scenery look their best.
Heart disease is the number one killer in America, but the good news is there is much we can do to prevent it. Dietician and author Elizabeth Somer has simple tips to help you lower your risk for heart disease.
How to lower heart disease risk with lifestyle changes
• 1 out of 2 Americans will die from heart disease. Heart disease is a “disease of civilization”, found in industrialized societies such as the United States. In other cultures, where people live a more vigorous lifestyle and eat a more natural, hunter-gatherer diet, heart disease is virtually non-existent.
• Although some people are more prone to developing heart disease, risk factors have a lot more to do with lifestyle than genetics.
• Risk factors are showing up in children as young as nine years old. This is due to the pediatric obesity epidemic. The medical community is seeing elevated cholesterol levels and elevated blood sugar levels in children that predispose them to Type II Diabetes, which is a huge risk factor for heart disease. It is essential to establish a heart-healthy lifestyle for your children from birth.
• The good news is that there are simple diet and exercise changes that can help lower the risk for heart disease!
• Load your diet with fruits and vegetables, soy and whole grains. Make sure the red meat you eat is extra lean and limit your intake to two servings per week.
• Aerobic activity such as brisk walking is very important in the effort to reduce your risk for heart disease. You want to get your heart rate up, break a sweat and burn fat.
• To determine your risk of heart disease you need to examine how much you exercise and what’s on your plate. These two factors contribute to the size of your waist -- how overweight you are. Your weight and waist measurement determine your risk for heart disease.
• To help prevent heart disease you want a total cholesterol level of less than 200. And, the HDL cholesterol (the good cholesterol) should be at least 60 or more.
• Again, parents should establish heart-healthy eating habits for their children from day one. When they are off the breast (or bottle) and onto the food, it should be fruits, vegetables and lean meats. Parents should model behavior at the dinner table because what children learn in the early years will set the pattern for how they eat for the rest of their life. It’s never too soon to get started!
Howdini is life’s little instruction manual, in HD. We’re all about bringing together the top, most respected experts in their fields to help us be the best we can be at all of the little and not-so-little challenges of our complicated lives. Howdini is the place to be for the know-how you want, when you need it. Or maybe it’s the know-how you need, when you want it. Whatever. We’re here to help. So come in and look around, won’t you?
We think you’ll love finding everything you want to learn about in one convenient place, and as we grow and add more categories and more Howdinis, you’ll be doing less surfing and more learning right here. And unlike television, Howdinis aren’t limited by time—we don’t have to break for commercials, and we’re always on.
Who is Howdini?
People often ask us, is there an actual person who is Howdini? And the answer is, it’s kind of like Lassie. Just as there were many Lassies, there are many individuals who are called Howdini. In fact, each of our experts is a Howdini, and, like all those Lassies, they really know their tricks. (Although so far there is no ‘How to tell your master that Timmy is trapped in the old abandoned mine’ segment.)
Our gurus are people you know and trust because you’ve been getting advice from them for years, at places like Good Morning America, The Today Show, Money, Prevention, and Food and Wine (to name just a few). Many are best-selling authors. Others, like our medical experts, are respected leaders in their fields.
The first Howdini was Joanna Breen, who left a comfortable career at ABC’s 20/20 to create a how to video website after one too many frustrating experiences with handymen who weren’t that handy. Joanna had traveled the world reporting with Barbara Walters and others on injustice, outrage, and tragedy, but now it was time to turn her talents to dealing with crises closer to home, like what do you do if you drop your diamond ring down the drain. Joanna is the quintessential can-do girl, so she didn’t find the prospect of launching a gigantic website the least bit daunting. (Ok, that last part isn’t entirely true.)
Joanna convinced an old ABC News buddy, Shelley Lewis, to join her. Shelley had supervised roughly 9.7 million helpful how to segments during a long career executive producing television shows like Good Morning America and CNN’s American Morning. A self-described “info-pig” who loves all kinds of information programming, she is never happier than when she’s learning an amazing new tip that she can annoy share with everyone she knows. Needless to say, Howdini was a dream gig for her. A career woman, a wife, a mother, and author of two books, Shelley considers herself equally challenged by all the facets of her life.
Joanna and Shelley were introduced to marketing executive Alison Provost by a mutual friend who knew that Alison had what they needed - entrepreneurial experience, patience, and a checkbook that still had checks in it. Joanna and Shelley could see right away that Alison should join Howdini. They figured that they would take care of the programming, and Alison would bring trustworthy sponsors to help pay the bills. It took Alison significantly longer to be convinced, maybe because she was crazy busy running a marketing firm called PowerPact, which she continues to oversee while serving as the biggest of big cheeses at Howdini. But whether it’s playing Sudoku or launching a new business in a field she knows little about, Alison loves the challenge of a good puzzle. It wasn’t long before she began dropping obscure internet terms like “user-interface” and “googlebot” into casual conversation.
What’s Next for Howdini?
Our goals are modest. Complete and total domination of the internet, crushing Google, Microsoft, and any other punks who get in our way. (Hey, it’s a just a goal.) But until then, we will content ourselves making the best, most professional, most credible how to videos you can find anywhere. We want to help you solve your career issues, your parenting problems, your money troubles. We want you to be more glamorous, healthier, and less stressed out. We want you to check Howdini every day for fun, interesting, useful advice from experts you know and trust.
We want to make Howdini the community you love to be part of every day. To do that, we need to hear from you. Please share your suggestions, and rate and comment on the Howdini videos and the Howdini blog. Tell us what you’d like us to create for you.
And then, when we’ve achieved that, it’s back to working on complete and total domination of the internet. |
Discuss how gender relations are constructed between washerwoman Delia Jones and her abusive, wayward husband, Sykes. How does Delia Jones learn the act of self-defense in her marriage?
Unlike many other African American writers of the period, Zora Neale Hurston focuses on gender more than race. As the woman, Delia is the center of the home, ultimately responsible for stereotypical female jobs: cooking and cleaning. The first image of her is of her bent over clothes as she "sorted and put the white things to soak" on Sunday night. Sykes then moves into a stereotypical "male" position as he tries to enforce the rules of the household, telling her off about her habit of working on the Sabbath: "Ah done promised Gawd and a couple of other men, Ah ain't gointer have it in mah house." He tries to control her, and when the author says "Delia's habitual meekness seemed to slip from her shoulders like a blown scarf," she makes it clear that Delia usually allows Sykes to take that role as head of household.
The men in town also see the world in terms of men's work and women's work. Walter Thomas points out that, "He useter be so skeered uh losin' huh, she could make him do some parts of a husband's duty." This clearly demonstrates that they believe men and women each have particular roles to fulfill.
Delia, however, has to learn to defend herself when her husband fails to meet his obligations as her husband. Even the men in town consider taking Sykes and the woman with whom he's cheating and laying "on de rawhide till they cain't say Lawd a'mussy." However, Delia only turns to self-protection when Sykes breaks the most basic "masculine" rule to protect the family. When she comes home to find the snake out of the box, she felt "a new hope" that he had changed. Only after she found the snake in her basket in a clear attempt to murder her did she decide to allow Sykes to step into the trap.
Since magnetic forces can do no work, what force IS doing the work when a bar magnet causes a paper clip to jump off a table and stick to the magnet?
Asked by: Steven Leduc
The original assumption that a magnetic field can do no work is incorrect. A magnetic field has an energy density equal to the magnetic induction (B) squared divided by twice the permeability of free space (mu_0), that is, u = B^2 / (2 mu_0). If you were to sum (integrate) this energy of the magnet over all of its field before it picked up the paper clip, and compared it to the same sum after it picked up the paper clip, you would discover that there was a loss of field energy. The paper clip has, in effect, 'shorted out' some lines of magnetic flux.

How much energy was lost? If you took hold of the paper clip and pulled it out to such a distance that the magnetic pull was insignificant, the work you did in this process would exactly equal the amount of energy lost when the clip was on the face of the magnet. When the magnet picked up the clip, the clip was accelerated toward the magnet, acquiring kinetic energy. This kinetic energy will equal, ignoring air drag, the loss of magnetic energy in the field, and it will be dissipated as heat on impact of the clip with the magnet.

For further understanding of the energy in a magnetic field, you may want to study magnetic fields in solenoids. See the reference: Physics, Volume 2, by Halliday and Resnick.
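A minimal numerical sketch of this energy bookkeeping (Python; the field strength and volume below are illustrative assumptions, not values from the question or the answer):

```python
# Illustrative only: a roughly uniform field filling a small volume near the pole face.
MU_0 = 4e-7 * 3.141592653589793  # permeability of free space, T*m/A

def magnetic_energy_density(b_tesla):
    """Energy density u = B^2 / (2 * mu_0), in joules per cubic metre."""
    return b_tesla**2 / (2 * MU_0)

b = 0.3        # tesla (assumed, for illustration)
volume = 1e-6  # cubic metres (1 cm^3, assumed)

u = magnetic_energy_density(b)
stored_energy = u * volume   # joules of field energy in that region

print(f"energy density: {u:.1f} J/m^3")
print(f"field energy in 1 cm^3: {stored_energy * 1e3:.2f} mJ")
# When the clip 'shorts out' some of this flux, the drop in stored field
# energy shows up as the clip's kinetic energy (ignoring air drag).
```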
Answered by: Robert Gardner, M.S., Retired Physicist
'If I have seen a little further it is by standing on the shoulders of Giants.' |
Discuss the impact of the Declaration of Independence on British-colonial relations in the 1790s.
I think that Jefferson's document helped to clearly establish that the relationship between the British and the colonists was past the point of no return. Declaring independence in no uncertain terms was a way in which the bonds between the two nations were "dissolved." Even from the most basic point of view, the rhetorical force of the document made it plain that the relationship between the two nations was going to be changed forever. The demonstrative manner in which the British leadership was derided and publicly criticized was not meant to bridge the gap between the two nations. If anything, the intent was to inspire the colonists into action and to pull other nations such as France into supporting the colonial cause. In this, the document served to transform the relationship into an adversarial one rather than something conciliatory. Jefferson understood clearly that the need to convey the "expression of the American mind" had to trade off against any idea that the relationship between the two nations could be repaired and the status quo embraced.
What is LTE?
LTE (Long Term Evolution) is a 4G mobile broadband communication standard. LTE is part of the GSM evolutionary path, following EDGE, UMTS, HSPA and HSPA+.
The overall objective for LTE is to provide an extremely high performance radio-access mobile data technology that offers full mobility (vehicular speed) and that can coexist with HSPA and earlier networks. LTE assumes full IP network architecture, it uses OFDMA (Orthogonal Frequency Division Multiple Access), boosts spectral efficiency, and operates in various radio channel sizes ranging from 1.4 to 20MHz.
LTE features include:
- scalable channel bandwidth from 1.4MHz to 20MHz
- downstream peak data rates of up to 326 Mbps (20MHz bandwidth)
- upstream peak data rates of up to 86.4 Mbps (20MHz bandwidth)
- increased spectral efficiency over rel.6 HSPA by two to four times
- reduced latency (down to 10ms round-trip time between user equipment and the base station, and less than 100ms transition time from inactive to active)
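As a back-of-envelope check of the figures listed above, the peak rates imply a peak spectral efficiency of roughly 16 bit/s/Hz downstream and about 4.3 bit/s/Hz upstream in a 20MHz channel. The short sketch below (Python, illustrative only) simply redoes that arithmetic using the numbers from the list:

```python
# Back-of-envelope check of the peak spectral efficiency implied by the
# LTE figures listed above (20 MHz channel).
CHANNEL_HZ = 20e6            # 20 MHz channel bandwidth

downstream_bps = 326e6       # 326 Mbps peak downstream (from the list above)
upstream_bps = 86.4e6        # 86.4 Mbps peak upstream (from the list above)

print(f"downstream: {downstream_bps / CHANNEL_HZ:.1f} bit/s/Hz")  # ~16.3
print(f"upstream:   {upstream_bps / CHANNEL_HZ:.1f} bit/s/Hz")    # ~4.3
```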
It is often said that slavery was our country’s original sin, but it is much more than that. Slavery is our country’s origin. It was responsible for the growth of the American colonies, transforming them from far-flung, forgotten outposts of the British Empire to glimmering jewels in the crown of England. And slavery was a driving power behind the new nation’s territorial expansion and industrial maturation, making the United States a powerful force in the Americas and beyond.
Slavery was also our country’s Achilles heel, responsible for its near undoing. When the southern states seceded, they did so expressly to preserve slavery. So wholly dependent were white Southerners on the institution that they took up arms against their own to keep African Americans in bondage. They simply could not allow a world in which they did not have absolute authority to control black labor—and to regulate black behavior.
The central role that slavery played in the development of the United States is beyond dispute. And yet, we the people do not like to talk about slavery, or even think about it, much less teach it or learn it. The implications of doing so unnerve us. If the cornerstone of the Confederacy was slavery, then what does that say about those who revere the people who took up arms to keep African Americans in chains? If James Madison, the principal architect of the Constitution, could hold people in bondage his entire life, refusing to free a single soul even upon his death, then what does that say about our nation’s founders? About our nation itself?
Slavery is hard history. It is hard to comprehend the inhumanity that defined it. It is hard to discuss the violence that sustained it. It is hard to teach the ideology of white supremacy that justified it. And it is hard to learn about those who abided it.
We the people have a deep-seated aversion to hard history because we are uncomfortable with the implications it raises about the past as well as the present.
We the people would much rather have the Disney version of history, in which villains are easily spotted, suffering never lasts long, heroes invariably prevail and life always gets better. We prefer to pick and choose what aspects of the past to hold on to, gladly jettisoning that which makes us uneasy. We enjoy thinking about Thomas Jefferson proclaiming, “All men are created equal.” But we are deeply troubled by the prospect of the enslaved woman Sally Hemings, who bore him six children, declaring, “Me too.”
Literary performer and educator Regie Gibson had the truth of it when he said, “Our problem as Americans is we actually hate history. What we love is nostalgia.”
American slavery is the key to understanding the complexity of our past. How can we fully comprehend the original intent of the Bill of Rights without acknowledging that its author, James Madison, enslaved other people? How can we understand that foundational document without understanding that its author was well versed not only in the writings of Greek philosophers and Enlightenment thinkers, but also in Virginia’s slave code? How can we ignore the influence of that code, that “bill of rights denied,” which withheld from African Americans the very same civil liberties Madison sought to safeguard for white people?
The intractable nature of racial inequality is a part of the tragedy that is American slavery. But the saga of slavery is not exclusively a story of despair; hard history is not hopeless history. Finding the promise and possibility within this history requires us to consider the lives of the enslaved on their own terms.
Trapped in an unimaginable hell, enslaved people forged unbreakable bonds with one another. Indeed, no one knew better the meaning and importance of family and community than the enslaved. They fought back too, in the field and in the house, pushing back against enslavers in ways that ranged from feigned ignorance to flight and armed rebellion. There is no greater hope to be found in American history than in African Americans’ resistance to slavery.
The Founding Fathers were visionaries, but their vision was limited. Slavery blinded them, preventing them from seeing black people as equals. We the people have the opportunity to broaden the founders’ vision, to make racial equality real. But we can no longer avoid the most troubling aspects of our past. We have to have the courage to teach hard history, beginning with slavery. And here’s how. |
What is the most useful type of mathematics for physics?
This is the first time I am answering such a 'highly subjective' question. So, parts of my answer will probably range from 'educated advice' to 'wild speculation'.
Physics is probably the one area of science where many areas of mathematics have been directly applied. The reason is simple; nature seems to obey 'mathematical rules' rather than acting whimsically. In other words, it seems that natural laws can be expressed in terms of mathematics. Why this should be so, nobody knows.
If I were asked to single out one area of mathematics that is of absolutely maximum use in the study of physics, I would probably pick calculus. All of classical mechanics, thermodynamics, fluid dynamics, classical electromagnetism, statistical mechanics, and many other fields of physics make extensive (and sometimes exclusive) use of calculus.
Is this sufficient? Probably not for all areas of physics you might work in. The very next requirement would probably be differential equations, which can be thought of as part of calculus (although they form a vast area of study in themselves). In addition, you may need probability theory and statistics, linear algebra, numerical methods and the like, depending on the field you choose. If you are lacking in mathematical skills, you can find a tutor to get you up to speed. Some more recent theoretical work requires more mathematics than mere mortals such as me can hope to know.
The truth of the matter is, you can never know enough mathematics. To a physicist, mathematics is a toolbox. Before attacking a particular problem, you should have the necessary tools for the job. There are some tools (such as calculus) that should be in any physicist's toolbox, but as they specialize, they will add extra tools needed for the specific problems at hand.
Yasar Safkan, Ph.D., Software Engineer, Noktalar A.S., Istanbul, Turkey
The fact is, many different branches of mathematics are useful in a wide variety of physics applications so I will fall back on a personal view rather than seek a definitive, universal answer.
When I first formally studied Physics in high school, I started by examining kinematics with algebra. This generally worked because the curriculum stuck to simpler topics which algebra could handle, if the physics were approached intelligently (e.g. ignoring friction assuming g was constant over the altitude of a projectile, etc.). However when I learned calculus a whole new appreciation for kinematics (and physics in general) blossomed. I went from being able to do the problems (most of the time) to truly having a feel for what was going on. For this reason I have always felt that calculus is the keystone necessary for deep physics understanding.
With quantum mechanics and relativity, it also helps to have an appreciation of probability and statistics, but even here the mathematical techniques of these disciplines are not as necessary as calculus, in my opinion. You can get a pretty good sense of warped 4D spacetime without knowing what eigenvalues are, for example.
Mathematics is crucial to analysis in many fields of endeavor, so don't limit your studies based on this answer - keep learning and pursue many branches of math! It will train your mind as well as open the doors to success.
Rob Landolfi, Science Teacher, Washington, DC
Well there are lots of very useful maths. I'll take you through the most important in terms of the developments in fundamental physics:
Classical Mechanics - Calculus
Electromagnetism - Vector Calculus
General Relativity - Differential Geometry
Quantum Field Theory - Matrices, Group Theory
Superstring Theory - Knot Theory
Each new development in physics often requires a new branch of mathematics. I would say that the older maths are the most widely used in physics now such as calculus - so are probably the most useful.
Martin Archer, Physics Student, Imperial College, London, UK
In my opinion, one has to view physics as a branch of applied mathematics. So, the question of what the most useful mathematics is, is a rather tricky one. In a way, it depends on what branch of physics you're interested in, as different maths is applied in different fields.
If you're interested in 'classical' physics (e.g. mechanics, thermodynamics and electrodynamics), then the field of calculus would suit you far better than subjects like algebra or statistics (this, of course, depends on how in-depth you go into the subject). On the other hand, algebra (and group theory) is very important in the quantum fields. I'll try to answer this question by basing it on my own experience.
To start off with, one should have a very good understanding of calculus, especially topics like vector calculus, where curl and divergence are covered (a good understanding of calculus of variations, where maxima and minima of integrals are covered, would also be very useful). Calculus lays the foundations for more advanced maths. Subjects like (partial) differential equations and mathematical analysis all have their roots in calculus. These subjects are all essential in the 'classical' field. I've done courses in classical mechanics, and in reality, it's applied mathematics through and through.

On the other hand, in quantum mechanics you will deal more with algebraic techniques. For example, matrix operations and transformations are very common. So, you might argue that algebra has more use. But before you conclude that quantum mechanics is a predominantly discrete field, I would like to make you aware that partial differential equations do creep in here (e.g. solving Schroedinger's equation for given energy values). However, a good understanding of modern physics can only be based on a good understanding of classical ideas. Of course, algebra can be extremely powerful in many fields, even beyond quantum mechanics (e.g. DSP and cryptography).

As a whole, I'd have to say that if you plan to pursue physics, you should do as many maths courses as you can. Many of the courses link up to each other very subtly. For example, eigenvalues and eigenvectors are an algebraic idea, but they are used widely in solving systems of differential equations; analysis (I like to think of this as the proofs behind mathematics) and calculus share very similar ideas (and in some sense are the same ideas: series and so on).

So what do I think is the most useful type? I'd have to say a well-rounded combination of all the above-mentioned subjects. But if I have to choose one, I'd say differential equations. Many physical systems have a 'differential' touch to them. For example, spring systems or RLC circuits (electrodynamics) are really differential equations in disguise. Knowing how to solve differential equations (and how to set them up) is an invaluable tool in physics. Just imagine the difficulty you would have trying to do fluid mechanics without differential techniques.
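To make the point concrete, here is a minimal sketch (Python with SciPy; the parameter values are arbitrary illustrations, not tied to any particular system mentioned above) of numerically solving the kind of equation being described, a damped mass-on-a-spring. With different symbols, the same second-order linear ODE describes a series RLC circuit:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped harmonic oscillator: m*x'' + c*x' + k*x = 0,
# rewritten as a first-order system for the solver.
m, c, k = 1.0, 0.2, 4.0      # mass, damping, stiffness (illustrative values)

def rhs(t, y):
    x, v = y
    return [v, -(c * v + k * x) / m]

sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0, 0.0], dense_output=True)

t = np.linspace(0.0, 20.0, 5)
for ti, xi in zip(t, sol.sol(t)[0]):
    print(f"t = {ti:5.1f} s   x = {xi:+.3f} m")
# The same second-order linear ODE, with different symbols, describes a
# series RLC circuit, which is the point made in the text above.
```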
For those interested in a good book on differential techniques (where topics like springs, circuits and population growth are covered), please refer to the book I've included as a reference.
Will there really be enough sustainable palm oil for the whole market?
More and more corporate palm oil users are promising to clean up their supply chains. But a new report says firms may have underestimated the availability of ethically produced oil, jeopardizing those pledges.
In the summer of 2016, two of the world’s largest producers of palm oil, which is found in everything from lipstick to margarine, lost the ability to market what they sold as certified by the Roundtable on Sustainable Palm Oil, the world’s largest association for ethical production of the commodity.
The companies had been responsible for nearly a fifth of the global supply of RSPO-certified palm oil, which is increasingly sought after as corporate palm oil users commit to purge their operations of environmental destruction and human rights abuses, ills often associated with the industry. The subsequent strain on supply sent prices for the premium stuff soaring.
A report (pdf) released in December by London-based nonprofit CDP, formerly known as the Carbon Disclosure Project, found that last summer’s spike in prices could be just a taste of what’s to come.
CDP surveyed 187 of the world’s largest and most impactful companies in relation to deforestation risk. Although 75 percent of companies reporting on palm oil said they have identified sufficient sources of sustainable oil for future operational needs, CDP concluded that “this confidence may be misplaced.”
“It is not clear, at this point, that sufficient supplies of sustainable commodities will be available to meet all of these targets, raising risks that some companies will be in breach of their commitments, or will otherwise face spiralling costs as demand races ahead of supply,” the report said.
As the rapid expansion of oil palm plantations in Southeast Asia and, increasingly, in Africa and Latin America fuels rainforest loss, land grabbing and labor abuses, many industry players have set goals for cleaning up their supply chains by 2020.
“It’s now 2017, and we’re getting really close to those targets,” Katie McCoy, head of CDP’s forests program, told Mongabay. “As part of delivering on those commitments, thorough risk assessment that goes into the future is needed.”
Oil palm plantation in Riau, Sumatra. Photo by Rhett A. Butler.
In 2016, the ‘fragility’ of sustainable palm oil supply was revealed
Seventy-seven percent of the companies reporting on palm oil told CDP they rely on RSPO certification to ensure that what they buy is sustainably produced. In April 2016, the RSPO suspended the credentials of one of the biggest certified palm oil suppliers, IOI Group, after the Malaysian firm was accused of a raft of violations in Indonesia. Shortly thereafter, another major supplier, Felda Global Ventures, voluntarily withdrew certification from 58 of its mills in Malaysia after acknowledging sustainability problems.
With these two suppliers out of the game, 18.5 percent of all the RSPO-certified palm oil in the world evaporated, according to RSPO affiliate GreenPalm. As a result, “the premium for sustainable over conventional palm oil jumped from $25/metric ton to $30-35, while that on palm kernel oil doubled, from $80-100/metric ton to more than $200,” reported CDP.
A statement issued by GreenPalm at the time described the lessons learned: “News of the ending of IOI and Felda’s RSPO certification has thrown a spotlight on the fragility of physical supply. At the same time, it demonstrates the importance of building resilient supply chains.”
The RSPO reinstated IOI’s certification four months after the suspension — too early, according to observers who were unsatisfied with the company’s roadmap for change and wanted to see more progress on the ground first. Nevertheless, the impact on the supply of RSPO-certified oil has lingered, as many companies remain reluctant to buy from IOI.
McCoy said the spike sent a positive signal to producers, showing there is demand for sustainable palm oil. GreenPalm’s general manager, Bob Norman, also noted in a statement that the high prices were a boon to sellers, but an obstacle for buyers hoping to use RSPO-certified oil.
In other words, 2016 showed that relying too heavily on a few certified suppliers can create risk. Price hikes can encourage plantation firms to create a certified product that’s in demand; but such hikes can also discourage some buyers from choosing the more costly certified products, undermining efforts for change.
Oil palm fruit in Indonesia’s Aceh province. Photo by Rhett A. Butler
Mixed messages: shortage or plenty?
While it’s possible enough certified palm oil will be available to meet growing demand, the key will be making sure buyers and sellers connect, according to RSPO communications chief Stefano Savi.
He pointed to a supply-demand paradox: “From the growers’ perspective, some will say, ‘There’s not enough demand.’ From the buyers’ perspective, some will say, ‘I can’t find enough certified sustainable palm oil.’ In a way, both of these answers are true.”
RSPO-certified palm oil is a niche product that still requires companies to deal one-on-one with suppliers, Savi explained. “It’s not something that you can pick up a phone and call a commodity trading desk and say, ‘I would like X-tons of certified sustainable palm oil.’ That’s not the way it works today.”
Some of what the RSPO certifies ends up being sold under other schemes, such as the International Sustainability and Carbon Certification system, which is popularly used for palm oil destined to become biofuel in Europe. The remainder is sold as conventional palm oil.
A more stable supply could be established by clearer communication between certified producers and prospective buyers, Savi said.
At present, demand for sustainable palm oil largely comes from Europe and North America, which consume a combined total of about 8 million metric tons of palm oil annually, according to the Sustainable Palm Oil Transparency Toolkit. In 2016, the RSPO certified about 11 million metric tons. Enough certified sustainable palm oil currently exists to meet the entire needs of these regions, if producers can successfully connect with buyers.
At the same time, China and India combined import some 16 million tons annually. If demand for RSPO-certified palm oil grows in these areas, it could really strain supply if that supply doesn’t grow in tandem, said John Buchanan, senior director of sustainable food and agriculture markets at Conservation International, an NGO.
“Just cleaning up a part of the industry to supply the U.S. or Europe doesn’t solve the problem,” he said. “I think the key is to really emphasize that at the end of the day we need the demand.”
He added, “I would love to see demand outpace supply. That might have some price implications…[and] it’s possible that there could be some short-term blips like we saw with IOI and Felda last year, but…the most important thing is that demand continue to grow, because at the end of the day we should all be pro-sustainable palm oil.”
The sun rises behind an oil palm plantation in Indonesia’s North Sumatra province. Photo by Rhett A. Butler
Building a stable supply
McCoy, the CDP forests program chief, said the best way to ensure a stable future supply is for companies to foster sustainability in their existing supply chains instead of just looking for new sources that are already certified.
While the CDP report highlighted the dearth of companies rigorously planning for long-term supply stability, some companies — like Latin American palm oil giant Agropalma — have solid plans for decades to come, McCoy said. That includes working intensively with suppliers, who often lack funds and a familiarity with the needs of international markets, to help them clean up their operations, she explained.
McCoy highlighted some figures from the CDP report to illustrate what kinds of collaboration are generally lacking: “Thirty-seven percent are auditing the suppliers, 31 percent run workshops and training, 17 percent do joint projects, and only 9 percent offer technical support.”
When these numbers are boosted, she said, a stable supply of sustainable palm oil is likely to grow.
An oil palm plantation in Malaysian Borneo. Photo by Rhett A. Butler
Building a solid standard
And then there is the issue of the quality of the RSPO’s green label itself.
The roundtable’s standard only protects old-growth primary forests and deep peat swamps. But the trend among companies now is to swear off clearance of any tract of forest or carbon-rich peat soil.
Emma Lierley of Rainforest Action Network told Mongabay: “RAN seriously questions the legitimacy of the ‘certified sustainable’ palm oil on the market. The RSPO is inadequate as a standard, and cannot be trusted as a source of truly responsible palm oil — responsible palm oil being made without deforestation, the destruction of peatlands and the abuse of workers’ and human rights.”
It’s a frequently leveled criticism, to which the RSPO typically replies that it doesn’t want to raise the bar so high as to discourage uncertified firms from pursuing the existing standard. “Shouldn’t we get everyone to really start jumping before we move the post?” Savi asked.
The RSPO did launch RSPO NEXT last year as an optional zero-deforestation tier for members. But there’s a worry that creating different levels of certification may devalue lower levels, Savi explained, making it tricky to find a balance that helps all players.
Savi acknowledged cases in which RSPO has failed to enforce its own standards, which undermines the value of its certified palm oil in the eyes of many. “We acknowledge there are a lot of issues in implementation. Sometimes things shy away from perfection,” he said.
“This is a process,” he added. “What we should focus on is improvement.”
Banner image: A Sumatran orangutan. The critically endangered great ape’s numbers are dwindling as oil palm plantations expand into its forest home. Photo by Rhett A. Butler |
Measurement centre will save 2% of UK's annual carbon footprint
The Centre for Carbon Measurement will deliver eight megatonnes of carbon emissions reductions and more than half a billion pounds in economic benefit over the next decade, according to an independent report.
Recently commissioned by the National Physical Laboratory (NPL), where the Centre launched in March 2012, the report evaluates the impact of projects under the Centre's remit.
It evaluated the Centre's portfolio of projects, including those that aim to improve the accuracy of climate data from satellites, assess the potential to use biomass in end-of-life coal power stations, and create a temperature based control circuit to make energy efficient lighting even more efficient.
Examples of projects analysed by the report include optimising a new design of electrode for organic photovoltaic cells - cells that turn sunlight into energy.
According to the NPL, the work underpinned the design of flexible electrodes which are essential if the promise of organic photovoltaics - initially in areas such as wearable electronic devices - is to be realised. It added that this project is estimated to reduce carbon emissions by 257 kilotonnes over 8-10 years.
Energy and Climate Change Minister Greg Barker, who recently visited the Centre, said: "The Centre for Carbon Measurement at NPL has supported businesses large and small with their innovations and in some cases has even designed its own emissions savings technologies".
The NPL said the report confirmed that measurement plays an important and economically advantageous role in carbon reduction and concluded that measurement adds value across all the current areas of the Centre's focus.
Head of the Centre for Carbon Measurement, Jane Burston, says "We are only one year in but already the Centre is showing its value. This analysis confirms we are on the right track and that carbon measurement brings a clear, quantifiable economic and environmental benefit.
"We can now assuredly forge ahead with a diverse range of projects that use our measurement expertise to provide vital support to climate science, low carbon technologies and carbon reduction initiatives."
Read more on the Centre for Carbon Measurement here |
Sex and Sea Turtles: Climate Change, Sea Level Rise Impacts
The shift in climate is shifting sea turtles as well, because as the temperature of their nests changes, so do their reproductive patterns. Rainfall data were graphed in temporal synchrony with sand temperature for each nest depth. Dedicated marine labs have also been attempting to get marine animals to reproduce in captivity, with mass sperm and egg releases and proximity being key; it was one lucky break, and a pipette in hand, that gave researchers their first twenty new animals, and the numbers have only been increasing each year.
Despite the occasional 'too clever by half' passages, Marah J. Hardt's Sex in the Sea was a delight and an education. I learned a lot in this book, and some of it was incredibly interesting, like how coral spawn, or how on earth animals find each other in the vast ocean. Do they use the stars, or sea floor topography, or Earth's magnetic field? How they do this is often a mystery that only years of detailed observation can solve. I find it endlessly fascinating to study how marine animals mate.
See how progressive seahorses really are, with the males being the pregnant ones. Cuttlefish often pretend to be female to sneak past an unsuspecting larger male and mate with his female. Learn how the tiny whitespotted pufferfish creates amazing nests over six feet in diameter by wiggling his butt. Whales can use sound for navigation and communication with other whales. A ten-foot great white might easily have a pair of claspers about three feet long. Nearly all of the multiple species of mobulas seem to have a penchant for aerial acrobatics, leaping up to six feet out of the water before splashing down with a crack at the surface; if you want to grow your orgy numbers, nothing beats a full-blown belly flop to attract the neighbors. When it comes to inventive sex acts, just look to the sea.
A coral reef ecologist by training, Marah J. Hardt keeps one foot wet in the field, while the other roams the worlds of creative storytelling and problem-solving, with a focus on ocean conservation issues. With wit and scientific rigor, Hardt introduces us to the researchers and innovators who study the wet and wild sex lives of ocean life and offer solutions that promote, rather than prevent, successful sex in the sea. Sex in the Sea opens the proverbial door on the sex lives of marine creatures, from sex-changing slipper snails to male-birthing seahorses to lovenest-building pufferfish to super-penised barnacles to rainbow-hued hermaphroditic sea slugs to sex-for-fun dolphins, plus many more. Beyond a deliciously voyeuristic excursion, the book uniquely connects the timeless topic of sex with the timely issue of sustainable oceans. Hardt also describes how susceptible marine life is to changes in its environment, and she includes a surprising array of human activities that bear on the survival or demise of ocean populations. We learn about our impact on these breeding practices, from over-fishing to climate change, as well as the efforts to help protect, preserve, and increase populations; her last section is all about action we can take to better protect fish.
Not every reaction was positive: species are compared with humans and dubbed kinky or perverted when their reproductive habits deviate from what conservative people think is normal, and while the reproductive details and diversity between species are often fascinating, some readers were put off by the way the book was written. Even so, it is incredibly engaging, and it was fun to hear the students remark about how much enthusiasm they had for reading it. I'm not sure how to summarize it, because really, the subtitle says everything you need to know. I called 'Venomous' biology porn, but that applies in a much more literal sense here. And oh, am I glad I read this one. I am now determined to find somewhere at least semi-local to volunteer to help out all those beautiful oceanic creatures!
1 Summary of things you should already know
"I think I can safely say that nobody understands quantum mechanics" Richard Feynman
1.1 Operators and Observables
It is a premise of QM that any measurable quantity is associated with a Hermitian operator.
So far we are used to analytic expressions for wavefunctions and operators. But if a ket is a full description of the state of a system, it must also contain some implicit information. The abstract bra-ket notation includes this.
Consider the electric charge. Obviously this is measurable, so it should be associated with an operator \(\hat{Q}\), such that e.g.
\[ \hat{Q}\,|\psi\rangle = -e\,|\psi\rangle \]
where \(\psi\) is the wavefunction of an electron. Here \(-e\) meets all the criteria for a quantum number, and the above equation is obviously a true representation of reality, but it cannot be proved algebraically. Thus the meaning of the ket \(|\psi\rangle\) is broader than a simple spatial function, and operators can also be non-algebraic. This is especially important in particle physics, where all manner of quantum numbers appear (isospin, strangeness, baryon number, etc.).
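As an aside, the link between Hermiticity and measurability can be made explicit with a standard one-line argument (a generic textbook sketch, not specific to these notes): the eigenvalue of a Hermitian operator, such as the charge \(-e\) above, is necessarily real.

```latex
% Minimal sketch (assumes \hat{Q}^\dagger = \hat{Q} and a normalised eigenket):
% eigenvalues of a Hermitian operator are real.
\begin{align*}
  \hat{Q}\,|\psi\rangle &= q\,|\psi\rangle, \qquad \langle\psi|\psi\rangle = 1 \\
  q &= \langle\psi|\hat{Q}|\psi\rangle
     = \langle\psi|\hat{Q}^{\dagger}|\psi\rangle^{*}
     = \langle\psi|\hat{Q}|\psi\rangle^{*}
     = q^{*}
\end{align*}
```

Since \(q = q^{*}\), the measured value is real, as any physically observable quantity must be.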
1.2 Hamiltonians and eigenstates
Schrödinger's equation \(\hat{H}\psi = i\hbar\,\partial\psi/\partial t\) shows us that the Hamiltonian (energy operator) is related to the change of the wavefunction in time. A system prepared in an eigenstate of the Hamiltonian has a time-invariant probability density. A system prepared in an eigenstate of an operator which does not commute with the Hamiltonian has a probability density which varies in time. It is this time independence (conservation
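To make the time-independence statement above concrete, here is a minimal worked example (standard quantum mechanics, not specific to these notes), assuming the system is prepared in a normalised energy eigenstate \(|\psi_n\rangle\) with \(\hat{H}|\psi_n\rangle = E_n|\psi_n\rangle\):

```latex
% Minimal sketch: time evolution of an energy eigenstate.
% Assumes \hat{H}|\psi_n> = E_n|\psi_n> and |\Psi(0)> = |\psi_n>.
\begin{align*}
  i\hbar\,\frac{\partial}{\partial t}\,|\Psi(t)\rangle &= \hat{H}\,|\Psi(t)\rangle
  \quad\Longrightarrow\quad
  |\Psi(t)\rangle = e^{-iE_n t/\hbar}\,|\psi_n\rangle \\
  \bigl|\langle x|\Psi(t)\rangle\bigr|^{2} &= \bigl|e^{-iE_n t/\hbar}\bigr|^{2}\,\bigl|\psi_n(x)\bigr|^{2}
  = \bigl|\psi_n(x)\bigr|^{2}
\end{align*}
```

The phase factor drops out of the probability density, so nothing observable changes with time; a superposition of eigenstates with different energies would instead acquire relative phases and give a time-varying density, which is the situation described above for an operator that does not commute with the Hamiltonian.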
Evo Morales changed the history of Bolivia when he was elected in December 2005 as the country’s first indigenous president, and the first to get a majority of 54 percent. On Sunday he expanded his mandate considerably in a referendum, with 68 percent of voters opting to keep him in office.
The conventional wisdom in Washington–where the foreign policy establishment is decidedly not sympathetic to Morales’s populist agenda–has been that the referendum would settle nothing. Bolivia remains divided, say the pundits, along geographic (eastern lowland states versus the west), ethnic (indigenous versus non-indigenous) and class (rich versus poor) lines. Maybe so, but apparently it is less divided than when Morales was first elected, an event that was widely celebrated as a milestone akin to the end of apartheid in South Africa. Was that election also meaningless?
Bolivia’s indigenous majority had previously been excluded from the corridors of power, and the results can be seen in their lower living standards: indigenous Bolivians have less than half the labor income and 40 percent less schooling than non-indigenous.
Morales had promised to regain control over the country’s hydrocarbon resources–mostly natural gas. This was accomplished and has brought in an extra $1.5 billion of revenue to the public treasury. (For comparison, imagine an extra $1.6 trillion, or four times the current US federal budget deficit, in the United States.)
Morales and his party had also promised a new constitution, and that is where things got bogged down. The main stumbling blocks revolve around the distribution of the country’s most important natural resources. These are the hydrocarbon revenue and also Bolivia’s arable land.
In developing countries throughout the world that are dependent on hydrocarbons, these revenues generally belong to the central government, not the place where they are located. Bolivia is unusual, in that half of the hydrocarbon revenue goes to the provinces and local governments.
But the four eastern lowland provinces–sometimes called the “Media Luna” or “half-moon” because they form a crescent along the eastern half of the country–wanted even more control over these revenues.
These provinces produce about 82 percent of Bolivia’s natural gas, and get nearly three times the gas revenue per person as do the other five provinces. The Media Luna states have a per capita income that is about 40 percent higher than the other five states. Their population is also much less indigenous: ranging from 16 percent in Pando to 38 percent in Santa Cruz, as compared to 66 percent to 84 percent in the other states.
The Media Luna states also have the big landholdings that give Bolivia one of the most concentrated land distributions in the entire world. Well under one percent of landowners control two-thirds of the country’s farm land. These include the big soybean producers of Santa Cruz, Bolivia’s largest province and bulwark of the Media Luna alliance. Some of the big landowners are leaders of the political opposition.
Land reform is understandably a central political and economic issue. With 40 percent of the labor force in agriculture and more than three-quarters of rural Bolivians in poverty, a redistribution of arable land is not only a central demand of the voters but an important part of an economic development strategy that can boost employment and income in the countryside.
The recent referendum shows that the Morales government has increased its mandate to a landslide margin, by delivering on some of the changes that the electorate had voted for, and offering the majority of Bolivians a realistic hope for a better future. It casts doubt on the claim that this government has simply pursued its own, polarizing, leftist agenda, without regard to the concerns of the broad electorate. Its victory is all the more impressive in that it has been handicapped by an overwhelmingly hostile Bolivian media.
Bolivia is South America’s poorest country, with 60 percent of the population living below the poverty line, and 38 percent in extreme poverty. The voters have overwhelmingly decided that they want their government to do something about that. This should be possible, even if it means redistributing some of the country’s most important natural resources. |
5 Reasons Teachers Using Tech are Super Heroes with Kecia Ray
AUGUST 25, 2017
If you don't "get" why technology is important, or know people who don't, take a listen to understand and learn about transformative practices that work in education. They're doing awesome things to drive inquiry-based learning. Idea #1: Use Formative Data to Improve Learning. Kecia: So, the first thing that makes a teacher a superhero is the way they use data to improve learning and mastery for their students. Learn about AR and VR.
A community that embraces the arts as a fundamental tool to reinforce learning. Defining art as a communicative system that conveys ideas and concepts explains why it is possible for the same brain structures that support other cognitive functions, such as human language, to be involved in arts such as music or drawing.
This is not to say, of course, that the sciences, math, and engineering do not require or engender creativity; they absolutely do. But at the elementary and middle school levels, much of the coursework in these subjects relies on recreating knowledge that is already well established in those fields: learning foundations so that, in the future, creativity can happen.
The difficult task of understanding and effectively enhancing learning across disciplines, ages, and cultural contexts is a high priority throughout the world, and may benefit notably from training in, or even exposure to, the arts.
In an in-depth ethnographic study of high-quality visual arts classes for adolescents, psychologists Lois Hetland, Ellen Winner, Kim Sheridan, and Shirley Veenema found that the key ideas being taught in arts classes, beyond learning how to hold a paintbrush or mold clay, were to stretch and explore thinking about materials and topics and to observe and reflect on how to engage in artistic work.
Re-invigorating the Seoul Agenda: The UNESCO Chair in Arts and Learning has formed a partnership with the Canadian Commission for UNESCO and the Canadian Network for Arts and Learning to lead a national and international initiative to re-engage decision makers and arts educators in the objectives and strategies of this international action plan, which was unanimously endorsed by the General Conference of UNESCO in 2011.
Józef Chłopicki, in full Grzegorz Józef Chłopicki, (born March 14, 1771, Kapustynie, Volhynia, Pol. [now in Ukraine]—died Sept. 30, 1854, Kraków, Pol., Austrian Empire [now in Poland]), general who served with distinction with the armies of Napoleon and was briefly the dictator of Poland after the November Insurrection of 1830.
Chłopicki enlisted in the Polish army in 1785 and fought in the campaigns of 1792–94 before and after the Second Partition of Poland. He then took service under the French in the new Polish legions and distinguished himself in the Italian campaigns of 1797 and 1805. He commanded Napoleon’s First Vistula Regiment in Poland in 1807 and from 1808 served in the Peninsular War in Spain, receiving the Legion of Honour for heroism at Epila and in the storming of Saragossa. He accompanied Napoleon’s Grande Armée into Russia in 1812. On the reconstruction of the Polish army under Russia in 1814, he was made general of a division but resigned his commission after a quarrel in 1818 with Grand Duke Constantine of Russia.
Chłopicki at first kept aloof from the November Insurrection of 1830 but accepted the dictatorship at his countrymen’s request on Dec. 5, 1830. Lacking faith in the war’s success, he clung to the hope of negotiation with Russia and acted purely on the defensive until he was forced to resign on Jan. 17, 1831, and became nominally a private soldier. Actually, he retained his military command until he was seriously wounded during the Battle of Grochów (near Warsaw) on Feb. 25, 1831, and was forced to retire from public life.
Definition of falafel in English:
A Middle Eastern dish of spiced mashed chickpeas or other pulses formed into balls or fritters and deep-fried, usually eaten with or in pita bread.
- The chickpea croquettes called falafel and the ever-popular chickpea dip, hummus, are both very good.
- Chickpeas feature in the majority of meze, in either hummus, falafel or salad, and are often spiked with coriander or mint or given a welcome boost from red chilli.
- Many dine on falafel, sandwiches made with balls of deep-fried hummus, or grilled lamb sandwiches, called shwarma.
From colloquial Egyptian Arabic falāfil, plural of Arabic fulful, filfil 'pepper'.
Therapeutic Properties of Turquoise
Turquoise is universally considered a lucky stone. It is believed that Turquoise helps one to start new projects; can warn the wearer of danger or illness by changing color; and protects the wearer from falling, especially from horses.
Some Native Americans believed that if Turquoise was affixed to a bow, the arrows shot from it would always hit their mark. It is also believed to bring happiness and good fortune to all.
The blue of Turquoise was thought to have powerful metaphysical properties by many ancient cultures. In Mexico, Turquoise was reserved for the gods; it could not be worn by mere mortals. In Asia, Turquoise was considered a protection against the evil eye. Tibetans carved Turquoise into ritual objects as well as wearing it in traditional jewelry. Ancient manuscripts from Persia, India, Afghanistan, and Arabia report that the health of a person wearing Turquoise can be assessed by variations in the color of the stone. In Europe, Turquoise rings are given as forget-me-not gifts.
Turquoise is said to attract prosperity and success. It has the power to influence creative powers. It enhances the ability to communicate.
Turquoise has long been prized as a powerful talisman with healing properties. It can help balance the throat chakra (the blue chakra), increasing resistance to viruses and helping to relieve sore throats, lung infections and the side effects of allergies. Turquoise is an important gemstone in subduing an overactive 5th Chakra. Natural Healers consider Turquoise a Master Healing gemstone.
Called a balancing stone, Turquoise helps balance the spiritual and the physical, as the sky connects the heavens to the earth. Ancient Indians believed in its ability to heal, and every shaman possessed at least one piece of Turquoise. It may also be useful in relieving migraines.
Battle for Australia
The Battle for Australia was a series of military actions fought from 1942 to 1943 during World War II.
Prime Minister of Australia, The Hon. John Curtin, announced the Battle for Australia when Singapore fell to the Japanese on 15 February 1942.
On 26 June 2008 the Governor-General, Major General Michael Jeffery, officially proclaimed that the first Wednesday in September would be known as Battle for Australia Day, a day of national observance.
It is important to note that the proclamation of Battle for Australia Day will not detract from the importance of Australia's two most significant days of commemoration, Anzac Day and Remembrance Day, when we remember all Australians who served and died in war, conflict and peace operations.
Locally the Battle for Australia was first observed by Smithfield RSL Sub-Branch on the first Wednesday in September 2001. On this day, veterans and their families came to honour sacrifices made for the protection of Australia itself.
Smithfield RSL Sub-Branch has continued to hold a service for the Battle for Australia on the first Wednesday in September every year since 2001.
Smithfield RSL Sub-Branch will conduct a service for the Battle for Australia in the Memorial Park, Cumberland Highway, Smithfield (opposite the Smithfield RSL Club Ltd) at 11am on Wednesday 3rd September 2014.
An invitation is extended to any person to attend this service. |