Math Lesson ("Systems of Equations")
by Julie Poth: Elementary Science TSA
Pinellas County School District, Largo, FL
Students look at the total costs of rows and columns of whooping crane
food items and figure out how much each food item costs.
Students will explore logical reasoning using unknowns in studying
systems of equations. Students will use logical thinking instead of guessing,
checking, and revising.
Print out the Meal Deal handout
(1 per student). Also make a transparency of it to use on an overhead
projector. Each of the 9 squares on the handout contains a picture of
a meal worm, bowl of crane chow, or bunch of berries, all of which are
part of the Whooping Crane diet.
Note: The project costs are not actual costs. They
were adapted so the numbers were manageable for students.
Show the students that the total cost for each row or column is recorded
at the end of that column or row.
Explain that each food item (meal worms, crane chow, and berries) costs a
specific amount of money. Their goal is to look at the rows and columns
and figure out how much each food item costs.
- Ask the
students to look at the rows and columns and put an X on the one row
or one column they think would be the easiest to use to figure the cost
of one food item. Ask them to explain why they chose the ones they did.
Students will most likely identify row 3 with the three crane chows. Since the
total cost for three crane chows is $15, each crane chow must cost $5:
3 x ? = 15. If the students can't explain this, guide them to that conclusion.
- The students
now have a piece of the puzzle to begin solving a system of equations:
They know that the crane chow costs $5. Ask: How can you use this
information to determine the cost of the meal worms or berries?
Walk around the room to observe how the students are solving each element.
When they have determined the cost of the meal worms and the berries,
ask them to explain how they did so. After one student identifies how
he or she solved the problem, ask if anyone used another method.
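For teachers who want to verify a student-designed grid or generate an answer key, a short script can solve the same puzzle the students reason through by hand. The sketch below is a minimal Python example; the grid counts and dollar totals are hypothetical (not the actual Meal Deal handout values), chosen to be consistent with the $5 crane chow above, and numpy's least-squares solver stands in for the students' logical deduction.

```python
# A hypothetical 3x3 meal grid (not the actual Meal Deal handout): each row
# of `counts` records how many meal worms, crane chows, and berries appear
# in one row or column of the grid; `totals` holds that line's dollar total.
import numpy as np

counts = np.array([
    [1, 1, 1],   # row 1: one of each item
    [2, 0, 1],   # row 2: two meal worms and a bunch of berries
    [0, 3, 0],   # row 3: three crane chows (the "easy" line students spot)
    [1, 2, 0],   # column 1: one meal worm and two crane chows
])
totals = np.array([10.0, 7.0, 15.0, 12.0])

# Solve the overdetermined linear system counts @ prices = totals.
prices, *_ = np.linalg.lstsq(counts, totals, rcond=None)
worms, chow, berries = prices
print(f"meal worms: ${worms:.2f}, crane chow: ${chow:.2f}, berries: ${berries:.2f}")
# -> meal worms: $2.00, crane chow: $5.00, berries: $3.00
```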
Have the students design their own 3 x 3 Square Systems of Equations.
Meal Deal handout; teacher questions and observations |
Strainmeter Helps NASA Identify Continental Drift
- Created: Saturday, 01 December 2007
TEGAM 1949-0001 ratio transformer
The EarthScope Project is an undertaking funded by the National Science Foundation in partnership with the United States Geological Survey and NASA to characterize the geology of North America, including continental drift caused by earthquake faults. Capturing the slow movement (which in some areas happens over centuries) can be done by measuring strain. Capturing strain (the deformation caused by stress) also can highlight areas that are moving faster along a fault relative to other areas.
The strainmeter is put into service by drilling a borehole into the earth. Although initial accuracy is important, it is critical that the ratio transformer have extremely low drift. Once the GTSM device is embedded in the rock, it remains there for the life of the project. Component drift would produce unacceptable results, and the ratio transformer ensures there will be insignificant drift for that component over the life of the project. Combined with GPS data that measures the overall motion of the earth's surface, the equipment can distinguish regions of energy accumulation from regions of simple deformation.
The EarthScope project has funded a significant increase in the number of strainmeter sites. As the density of strain measurements grows, changes in earthquake models are likely as a result of better information, particularly over long time frames.
|
Flash storage refers to any data storage device that utilizes the NAND type of flash memory. A standard flash storage system consists of two parts: a memory unit and an access controller. The memory unit allows the system to store data, while the access controller governs access to the storage space. Typical examples of flash storage devices are solid-state drives and flash memory cards.
One of the advantages of flash storage is its energy efficiency: compared to traditional hard drives, it consumes roughly one-fifth the power. It is also immune to mechanical wear since it lacks moving parts. However, the downsides of flash storage are that write speeds can be slower than those of traditional hard drives, particularly in the case of single-level cell devices, and that it tolerates only a limited number of write-erase cycles.
The NAND flash memory that flash storage devices employ operates by storing data in an array of memory cells made of floating-gate transistors. These transistors are arranged in a grid and feature two gates, unlike traditional transistors that feature only one. This allows a cell to retain voltage between the gates, making the stored data non-volatile, meaning it is retained even after the device powers down. The only way to remove the data is to drain the voltage between the gates by using a feature unique to flash memory. |
Watch some scenes from the movie version of this book that closely follow the book. Have the students discuss whether or not the images in the movie were how they imagined them while reading the book.
Split the class into groups and assign each group a different setting from the book. Have each group create a diorama of this setting along with a description of how this setting affected the plot of the book. Arrange these dioramas in chronological order.
Have each student create one page of a children's picture book version of this play. Assign each student a specific scene and compile all the pages together.
More and Cromwell both give dramatic and opposing speeches at the last trial of the book. Hold a speech contest and award extra credit points to the winner. Present the class with a prominent theme...
|
Unit 0. Introduction (Revision Date: Sep 07, 2015, Version 1.2)
Summary: This lesson is a basic introduction to algorithms and the nature of intelligence. Students will play tic-tac-toe (known in Britain as noughts and crosses) pitting a "highly intelligent piece of paper" against a human. Students will explore how to create an algorithm and the concept of computer intelligence.
Source: This lesson is adapted from a lesson created by Paul Curzon, Queen Mary, University of London.
Student computer usage for this lesson is: none
A PowerPoint for this lesson is included in the Lesson Resources folder - IntelligentPaper.pptx and IntelligentPaper.pdf
Copies for student pairs of "intelligent paper directions" with tic-tac-toe directions on one side, and blank on the other - in the Lesson Resources folder - IntelligentPaperDirections.pdf
The wrap-up questions are available in the Lesson Resources folder as Questions To Consider.docx
Optional: a musical greeting card, a paper folded into a fortune teller (http://en.wikipedia.org/wiki/Paper_fortune_teller), a page of equations
The Python program for the optional activity is located in the Lesson Resources Folder - TicTacToeAI.py
What could make a piece of paper intelligent? (Think-Pair-Share)
(Use IntelligentPaper.pptx in the Lesson Resources folder to help deliver this lesson.)
Challenge the students by saying that you have a piece of paper that is at least as smart as any human. (Show the blank side of the paper, don't tell the students yet, but it has directions on how to play tic-tac-toe on the back.) Ask if anybody believes that this is possible.
Show students examples of "smart papers," such as:
Encourage discussion and debate, prod students to argue their point for or against intelligence, and get them to develop their own criteria and definition for intelligence. Write the class definition and criteria on the board.
Tell the class that the paper has never lost a game: it has perfect intelligence.
Challenge students to play a game against the paper. The paper is peripherally challenged (it has no arms, and thus needs somebody to do its work for it). One person represents humankind, while the other person represents the paper. Play tic-tac-toe with a partner. The paper must begin the game.
But, the paper WILL NOT LOSE.
Try letting humankind go first. (Wait and try it: The paper will lose. Why?)
Challenge students to write out detailed directions (an algorithm) that will never lose the game whether it goes first or second.
Students should use their new algorithm to play against each other. Follow the same model for the paper versus the human game.
Discuss how testing is essential in order to figure out whether the algorithm works for every possible game.
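For teachers demonstrating the optional Python activity, the same never-lose guarantee the students are writing out on paper can also be expressed as a brute-force search. The sketch below is a minimal minimax player offered purely as an illustration; it is not the bundled TicTacToeAI.py.

```python
# Minimal minimax tic-tac-toe sketch (illustrative; NOT the bundled
# TicTacToeAI.py). 'X' maximizes, 'O' minimizes; a player that follows
# the returned moves can never lose.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from X's point of view: +1, 0, or -1."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    if ' ' not in board:
        return 0, None                      # draw
    outcomes = []
    for i in range(9):
        if board[i] == ' ':
            board[i] = player
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[i] = ' '
            outcomes.append((score, i))
    return max(outcomes) if player == 'X' else min(outcomes)

# The "paper" plays X on an empty board (takes a few seconds, no pruning).
score, move = minimax([' '] * 9, 'X')
print(score, move)   # 0 -> perfect play from both sides is a draw
```

Running the search from an empty board returns a score of 0, which is itself a nice discussion point: with perfect play on both sides, tic-tac-toe is always a draw.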
Additional Possible Activities and Discussions (Time Permitting):
Have students write their own definitions for the four words at the end of the presentation:
(Use PyCharm or some other Python environment to show the TicTacToeAI.py program from the Lesson Resources folder.)
Assign homework for Lesson 1-1: Provide students a copy of the “Questions to Consider” in the resources folder and assign the reading:
Blown to Bits – Chapter 1, can be found here http://www.bitsbook.com/wp-content/uploads/2008/12/chapter1.pdf and is available in the lesson resources folder for Unit 0 Lesson 3.
Extension: If you have extra time, have a championship contest between one set of student-generated instructions and another, alternating who goes 1st and 2nd. You can work in groups of three, with one person acting as the judge if desired.
Vocabulary entries in journals from the end of the PowerPoint presentation
Group participation in interactive activity
Writeup about a more general solution |
A transistor is a solid-state device for amplifying, controlling, and generating electrical signals. Transistors are used in a wide array of electronic equipment, ranging from pocket calculators and radios to industrial robots and communications satellites.
The transistor was invented in 1947 by three American physicists at Bell Telephone Laboratories: John Bardeen, Walter H. Brattain, and William B. Shockley. It proved to be a viable alternative to the vacuum tube and by the late 1950s had supplanted the latter in many applications. Transistors played a pivotal role in the advancement of electronics; their small size, low heat generation, high reliability, and relatively small power requirements made possible the miniaturization of complex circuitry such as that required by computers. During the late 1960s and '70s, individual transistors were superseded by integrated circuits, in which a multitude of transistors and other components (e.g., diodes and resistors) are formed on a single tiny wafer of semiconducting material. See also integrated circuit.
Transistors are made up of layers of different semiconductors produced by the addition of certain impurities (e.g., arsenic or boron) to silicon. These impurities affect the way electric current moves through the silicon. In semiconductors known as n-type, the primary electric charge carriers are free electrons; in those called p-type, the principal charge carriers are positively charged holes. (That is to say, when an atom such as boron, which has only three outer electrons, is substituted for a silicon atom with four, a vacancy, or hole, is created in the valence band.) Holes in semiconductors move about much as electrons do, but their motion is in a direction opposite to that of the electrons, since they are positively charged.
There are two general types of transistors: (1) the bipolar junction transistor
(BJT), and (2) the field-effect transistor (FET). The BJT, composed of two closely coupled p-n junctions, is bipolar in that both electrons and holes are involved in the conduction process.
It is readily able to deliver a change in output voltage in response to a change in input current. This type of transistor is widely used as an amplifier and is also a key component in oscillators, high-speed integrated circuits, and switching circuits.
In contrast to the BJT, the FET is a unipolar device—i.e., its conducting process primarily involves only one kind of charge carrier. It can be built either as a metal-oxide-semiconductor field-effect transistor (MOSFET) or as a junction field-effect transistor (JFET).
Since the mid-1980s the MOSFET has surpassed the BJT in importance and become the predominant component of very-large-scale integrated (VLSI) circuits. Not only does it consume much less power than the BJT, but it can be scaled down to smaller dimensions than the latter with greater ease. Other commercially important types of FETs are the metal-semiconductor field-effect transistor (MESFET) and the closely related JFET. The MESFET can be employed in both analog and digital circuits, and is particularly useful for microwave amplification. |
Aptamers (from the Latin aptus - fit, and Greek meros - part) are oligonucleotide or peptide molecules that bind to a specific target molecule. Aptamers are usually created by selecting them from a large random sequence pool, but natural aptamers also exist in riboswitches. Aptamers can be used for both basic research and clinical purposes as macromolecular drugs. Aptamers can be combined with ribozymes to self-cleave in the presence of their target molecule. These compound molecules have additional research, industrial and clinical applications.
More specifically, aptamers can be classified as:
- DNA or RNA aptamers: these consist of (usually short) strands of oligonucleotides.
- Peptide aptamers: these consist of a short variable peptide domain, attached at both ends to a protein scaffold. |
Essay Topic 1
Arab tradition guides the lives of several characters in this novel. Choose three elements of Arab tradition and explain how they impact the characters in this novel.
Essay Topic 2
Nada defies the role her father wants her to have, and therefore she shames the family and must die. Explain what Nada represents in this novel and how she changes throughout the course of the book.
Essay Topic 3
Choose three of the leaders from this novel and compare and contrast them. Consider the way they rule their people, their political beliefs, and their successes/failures.
Essay Topic 4
Research the history of Palestine during this time. How does this novel remain true to history? Where does the novel diverge? Why do you think this happens?
Essay Topic 5
Consider the role of outside countries on Palestine and the characters of this novel. Write about what the countries actually do...
|
Tuberculosis, also called TB, is an infectious disease caused by a bacterium named Mycobacterium tuberculosis. TB usually involves the lungs (pulmonary TB) but can infect almost any organ in the body. TB is almost always curable with antibiotics.
Tuberculosis kills more people today than any other infectious disease. About 2 million people a year die from TB worldwide. However, death from TB is rare in the United States. Clark County reports 5-10 cases of active TB per year, and among these patients the cure rate is almost 100%.
- Health care providers may report actual or suspected TB by calling (360) 397-8182
- For questions about TB, call (360) 397-8182
Symptoms
- Cough (usually for more than 3 weeks).
- Coughing up blood or phlegm from deep inside the lungs and pain in the chest.
- Weight loss.
- Night sweats.
Symptoms usually come on gradually over a period of weeks.
How TB is spread
TB spreads when someone who has pulmonary TB coughs. TB bacteria from that person's lungs are expelled into the air, and may be inhaled into the lungs of another person. TB is not very infectious and is much harder to catch than the common cold. Usually a lot of time needs to be spent with a person with pulmonary TB for someone to catch it. It's not possible to get TB from sharing a glass with a person with TB or touching a doorknob after someone with TB has used it.
Once people with TB are on medication they quickly become non-contagious and can quickly resume their normal patterns of life without fear of spreading the disease.
What is the difference between TB Infection and Active TB?
If you have TB disease, you are made sick by active germs in your body. Often you will have several symptoms like persistent cough, fever and weight loss. If the disease is in your lungs, you can give the disease to other people. Permanent damage and death can result from this disease. Medications to cure TB are almost always effective.
If you have a TB infection, you have germs that can cause TB in your body. However, you are not sick because the germ is inactive. You can’t make other people sick. Medication is often given to prevent you from developing TB disease in the future.
Treating active TB
To treat TB several antibiotics need to be taken together over a period of 6 to 12 months. For this treatment to work it's vital that antibiotics be taken regularly and that the treatment be completed. Lengthy treatment is necessary because it is difficult to remove TB bacteria from the body.
To help you successfully complete your TB treatment, Public Health provides Directly Observed Therapy. This therapy involves observing you as you swallow your medication.
Treating a TB infection
TB infection means you have bacteria sleeping in your body. You’re not sick or contagious because the bacteria are dormant. TB infection is detected when you have a positive skin test but a normal chest x-ray and no other sign of tuberculosis disease. To kill these sleeping bacteria and to prevent the development of active disease, you are often advised to take several months of treatment, usually with only one or two medications.
What is the TB Skin Test?
The TB skin test is performed by injecting a small amount of testing liquid into the skin of the forearm. The test needs to be read 48 to 72 hours later by someone trained in reading skin tests. If it's positive then a chest x-ray is done to rule out active disease. If the chest x-ray is normal then you are likely to have TB infection. Once a skin test is positive it will most likely stay positive and should not be repeated. Unless you develop symptoms, one chest x-ray is all that's needed. If you have an active case of TB, skin tests are provided to your family and close contacts. Most other requests for TB skin tests are referred to an individual’s primary care provider. |
It is believed that the black hole at the center of our own Milky Way Galaxy, Sagittarius A*, is 4 million solar masses. This makes it the most massive object in our galaxy. Even so, it is dwarfed in comparison with the black hole located at the center of NGC 4889, a galaxy 308 million light-years away at the center of the Coma Cluster. This elliptical galaxy is one of the brightest and largest galaxies in the Coma Cluster, and even though it doesn’t display much activity, it contains a black hole with a mass 21 billion times that of our Sun.
If you think about it, it’s amazing enough that we reside in a solar system within a galaxy that is spinning around a black hole. The sheer enormity of this galaxy is astonishing on its own. But I bet there are some things about our galaxy that you didn’t know: How many possible intelligent alien civilizations could be out there? What is the Great Attractor? And how about that raspberry-smelling gas cloud? Yes, there are some very curious details about our galaxy that not a lot of people know. So let’s get into the Ten Most Amazing Facts About The Milky Way Galaxy:
During the last century, telescope and image-capturing technology grew by leaps and bounds. Astronomers could for the first time peer into the far reaches of space to study the hundreds of billions of galaxies that lie beyond our own. Over the years, astronomers developed a system of galactic classification that categorized galaxies based on their shape and composition. Edwin Hubble created the original system of classification in the early 1900s, which was later expanded to include various other sub-categories as galactic observations improved. A majority of galaxies fall into the following categories…
These galaxies are spherical or ellipsoid in shape, with very few visible features. They contain up to one trillion stars, and very little interstellar dust and gas. Research has indicated that stars in elliptical galaxies are often very old, which is why the galaxies themselves glow with a yellowish-white hue. Also there is less star formation occurring in this type of galaxy.
Lenticular galaxies can be imagined as a mid category between that of the featureless elliptical galaxies and the dramatic spiral galaxies. These galaxies have a defined disk of gas and dust, as well as a glowing bulge at their middle. They do not exhibit spiral arms, but do have large amounts of gas and dust within their disks. This leads to high amounts of star formation within.
Spiral galaxies have very distinct shape and structure consisting of a central bulge of bright stars, and bright arms. Spiral galaxies contain large amounts of gas and dust, and stars of varying ages. Recent theories have explained that the arms are shaped by slowly rotating matter density waves that compress the interstellar gas and dust triggering star formation.
So what type of galaxy is our own Milky Way? That has proven to be a tricky question to answer: since we live inside of it, we can’t simply take a picture of it as we do with the countless other galaxies in our universe. You can take pictures of your neighbors’ houses from your window, but you can’t take a picture of your whole house unless you go outside and walk away to get a view. The Voyager spacecraft, launched in the 1970s, are the most distant man-made probes, and they have only recently exited the Solar System; we are not anywhere close to taking such a picture of our home galaxy any time soon.
There are some simple observations anyone can make to help them classify our Milky Way.
1. The Milky Way appears as a thin strip across our sky, which implies that it is a thin disc, rather than a sphere of stars.
2. The center of the Milky Way is visible in the southern sky each summer. This shows that our galaxy has a definite bulge at its middle.
These facts, paired with other astronomical observations have indicated that our Milky Way is a spiral galaxy. However, the question still remains… How many arms does it have?
The interstellar dust in our galaxy blocks our view of faraway stars in the visual wavelengths. Radio telescopes allow astronomers to see through the dust to identify the locations and motions of these stars. Using this information, astronomers extrapolate the shape of our galaxy. Up until recently, data from the Spitzer Space Telescope indicated that there were two distinct spiral arms (where previous theories had suggested four arms). Spitzer was targeting middle-aged cooler stars like our Sun.
A very recent study that targeted supermassive hot young stars painted a different picture: four arms. Due to the large amount of star formation that occurs in the arms of spiral galaxies, these types of stars are found nearly exclusively in the arms. Though these stars live short lives, the high rate of star formation in the arm regions replenishes the populations of them.
Some astronomers theorize that gravitational forces within the Milky Way may have led to an uneven distribution of the middle-aged and older cooler stars, concentrating them in two of the arms more than the other two and leading the Spitzer data to indicate two arms. Meanwhile, the populations of supermassive hot young stars flourish in all four arms.
Truly understanding our home galaxy is a unique challenge that motivates astronomers worldwide. Our picture of our galaxy and the universe beyond continues to evolve as we continue to look outward. |
The release of dyes into wastewater from the textile, cosmetic, paper and coloring industries poses serious environmental problems. Even a small quantity of dye in water is unwanted and highly visible. Color prevents the proper entrance of sunlight into water bodies; it also retards photosynthesis, hinders the growth of aquatic biota and affects the solubility of gases within the water bodies. The role of dyes in several lung, skin and many other respiratory problems has been reported globally. Direct release of dye-containing wastewater into the municipal environment can cause the production of poisonous carcinogenic products. The highest degrees of toxicity have been found in direct and raw dyes. Therefore, before wastewater is released into the municipal environment, it is very important to reduce the amount or concentration of dye present in it.
The commonly applied methods of treating wastewater are coagulation and flocculation, electrochemical treatment, liquid-liquid extraction, chemical oxidation and adsorption. Many methods have recently been used to remove both MB and BG from industrial effluents. Among these methods, adsorption is the most effective way to remove organic compounds from solution in terms of its low cost of operation, ease of design, insensitivity to poisonous materials and simplicity of operation. But its use is limited by high cost and the associated problems of regeneration, and this has prompted a constant and continuous search for cheaper alternatives. The search for alternative low-cost adsorbents for the removal of organic pollutants is now being pursued by many researchers.
Wide varieties of high-carbon-content materials such as wood, coal, peat, nutshells, sawdust, bones, husk, petroleum coke and others have been utilized to produce activated carbons of varying efficiencies. These materials, usually irregular and bulky in shape, are adjusted to exhibit the desired final shape, roughness and hardness. Generally, the production of activated carbon involves pyrolysis (or carbonization) and activation as the two main production processes. Numerous carbonaceous materials, particularly those of agricultural origin, are being investigated for their potential as activated carbon. The suitable ones have a minimum amount of inorganic material and a long storage life, consist of a hard structure that maintains its properties under usage conditions, and can be obtained at low cost. Some of the materials that meet these conditions have been used in past work to produce activated carbons, which were subsequently used for the treatment of wastewater and the adsorption of hazardous gases. Agricultural by-products like rice straw, soybean hull, sugarcane bagasse, peanut shell, pecan shell and walnut shells were used by V. Ponnusami et al. (2007) to produce Granulated Activated Carbons (GACs). The choice of a particular material for the production of an effective adsorbent (activated carbon) is based on low cost, high carbon content and low inorganic content. Agricultural materials have attracted the interest of researchers for the production of adsorbents because of their availability in large amounts and at low cost. The materials employed in this study were coconut shell, corn cob, flamboyant pod and eucalyptus tree. The use of agricultural by-products for the production of activated carbon is primarily for economic and ecological advantages.
Commercial activated carbon used in surface water and wastewater treatment is largely derived from coal. The advantages of coal-based carbons lie in their ability to remove toxic organic compounds from industrial and municipal wastewater as well as potable water. Another significant application of coal-based carbons is decolorization. The feedstock for these carbons, usually bituminous coal, is a non-renewable resource. The long-term availability of coal and its long-term environmental impact, coupled with its potentially increasing cost, have prompted researchers to consider renewable resources such as agricultural by-products as an alternative. Many efforts have been made to use low-cost agro-waste materials as substitutes for commercial activated carbon. Some agro-waste materials studied for their capacity to remove dyes from aqueous solutions are coir pith, pine sawdust, tamarind fruit shell, bagasse, rice husk, orange peel, palm kernel shell, cashew nut shell and walnut shell. The present investigation is an attempt to remove Methylene Blue (MB) and Brilliant Green (BG) from synthetic wastewater by an adsorption process using low-cost activated carbons prepared from agricultural wastes as adsorbents. Coconut shell and corn cob are agricultural wastes, so using them as raw materials for the production of activated carbon is much more economical than using coal-based activated carbon. In this study, the carbon adsorption method is investigated for its efficiency in color removal from water bodies.
2. Materials and Methods
2.1. Carbonization
1 kg of each agricultural waste material (coconut shell, eucalyptus tree, corn cob and flamboyant pod) was charged into an electric muffle furnace and heated in the absence of oxygen at a temperature of 300˚C - 600˚C for one hour. The resulting charred materials were collected and cooled to room temperature.
2.2. Activation
Samples of all the carbonized materials were weighed on an electronic balance and soaked in 1 M (63% concentrated) phosphoric acid (H3PO4) solution for 24 hours. The materials were then removed from the acid after the 24 hours elapsed and washed with distilled water until leachable impurities due to free acid and adherent powder were removed and the pH of the washings was 7. Finally, the samples were drained and dried in an oven at 80˚C for a further 12 hours.
2.3. Preparation of Dye Solution
Accurately weighed amounts (0.003 g) of MB and BG were each dissolved in distilled water to prepare the dye mixture stock solution. A 1000 mg/L stock solution of synthetic wastewater was prepared using the blue and green dyes, and from it the other desired concentrations were obtained. The pH of the working solutions was adjusted to the required values of 4 and 11 by adding 0.1 M HCl.
2.4. Batch Adsorption Studies
For the batch studies, the aim was to increase the dye adsorption efficiency during the purification operation. To systematically explore process options, a full-factorial two-level design on the key factors was set up. Only three factors affecting dye adsorption efficiency were studied in this work: contact time, dosage and pH. The symbols minus (−) and plus (+) were used to designate the low and high levels, respectively. The batch adsorption experiments were carried out in a set of 250 mL conical flasks containing 80 mL of dye solution. The experiments were run according to the 2³ factorial design template. The shaker speed was kept constant throughout the experimental runs. After the completion of each experiment, the conical flask was withdrawn from the shaker at the pre-determined time interval and the supernatant solution was separated by filtration using Whatman filter paper. Standard calibration curves were developed at wavelengths of 664 nm for the MB dye and 629 nm for the BG dye. The final concentration was then analysed for absorbance using a UV-Vis spectrophotometer with a blank solution as control. The percentage colour removal efficiency was calculated using:

Removal efficiency (%) = ((Ci − Cf) / Ci) × 100 (1)

where Ci (mg/L) is the initial dye concentration and Cf (mg/L) is the final colour concentration.
The uptake of dye at equilibrium was calculated from:

qe = (Co − Ce) × V / w (2)

where qe is the amount of adsorbed dye in mg/g, Co and Ce are the initial and equilibrium dye concentrations in mg/L, V is the volume of the aqueous solution in L and w is the mass of the adsorbent in g. The Langmuir and Freundlich adsorption isotherm models were used to explain the sorption data obtained in this work.
3. Results and Discussion
3.1. Comparison of Carbon Adsorption
Figure 1. MB removal efficiency.
Figure 1 and Figure 2 show the comparison of the batch experiments using activated and un-activated carbon. The graphs clearly show that activated carbon performs better in the adsorption process than un-activated carbon: all of the activated carbons used in the experiments adsorbed more dye from the dye solution than all of the un-activated carbons. The graphs were plotted using the highest percentage dye removal in each experimental run for both activated and un-activated carbon. All the activated carbons used in the experiments had removal efficiencies between 94.9% and 99.9% for both MB and BG, while the un-activated carbons peaked between 72.5% and 76.3%. All the carbons (coconut shell, eucalyptus, corn cob and flamboyant pod) were activated using the same conditions: carbonization at 500˚C for 1 hour, with activation in 1 M (63%) phosphoric acid soaked for 24 hours.
3.2. Isotherm Models
Figure 2. BG removal efficiency.
Table 1. Freundlich data.
Table 2. Langmuir data.
Table 1 and Table 2 present the isotherm constants and correlation coefficients for the adsorption of MB and BG dyes onto coconut shell carbon, eucalyptus carbon, corn cob carbon and flamboyant pod carbon at ambient temperature.
The initial dye concentration was varied as 5, 10, 15, 20 and 25 mg/L, and an adsorbent dosage of 5 g of each of the four carbons was used for comparison. All the constants and R² values obtained for both models are summarized in Table 1 and Table 2. According to Table 1 and Table 2, the higher correlation coefficients (R² values) were obtained with the Freundlich equation; thus, the Freundlich model fit the adsorption data better than the Langmuir model. This result indicates that the surfaces of the produced carbons were made up of heterogeneous adsorption patches for the adsorption of MB and BG.
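As an illustration of how such fits are typically computed, the sketch below uses hypothetical equilibrium data (not the measurements behind Table 1 and Table 2) with the common linearized forms: log qe = log Kf + (1/n) log Ce for Freundlich, and Ce/qe = 1/(qm·KL) + Ce/qm for Langmuir.

```python
# Hypothetical isotherm fit (illustrative; these numbers are not the paper's data).
import numpy as np

C0 = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # initial concentrations (mg/L)
Ce = np.array([0.2, 0.6, 1.1, 1.9, 3.0])      # assumed equilibrium conc. (mg/L)
V, w = 0.080, 5.0                              # 80 mL expressed in litres, 5 g adsorbent

qe = (C0 - Ce) * V / w                         # equilibrium uptake (mg/g), Equation (2)

# Freundlich linearization: log10(qe) = log10(Kf) + (1/n) * log10(Ce)
slope_f, icept_f = np.polyfit(np.log10(Ce), np.log10(qe), 1)
Kf, n = 10 ** icept_f, 1.0 / slope_f

# Langmuir linearization: Ce/qe = 1/(qm*KL) + Ce/qm
slope_l, icept_l = np.polyfit(Ce, Ce / qe, 1)
qm, KL = 1.0 / slope_l, slope_l / icept_l

print(f"Freundlich: Kf = {Kf:.4f}, n = {n:.2f}")
print(f"Langmuir:   qm = {qm:.4f} mg/g, KL = {KL:.3f} L/mg")
```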
3.3. Two-Level Factorial Design of Experiments
Dye removal by an adsorbent in batch experiments usually depends on many factors, including contact time, adsorbent dosage, pH, and so forth. To systematically explore process options, a full-factorial two-level design on the key factors was set up as shown in Table 3.
Only three factors affecting dye adsorption efficiency were studied in this work: contact time, adsorbent dosage and pH. The first factor, contact time, is numerical because it can be adjusted to any level; the second factor, dosage, and the third factor, pH, are also numerical. None of the factors used in this design is categorical, that is, a factor that cannot be adjusted. The experiments for the two-level factorial design were carried out in a set of 250 mL conical flasks containing 80 mL of MB and BG dye solution of known pH and adsorbent dose, agitated for the predetermined contact time. After thirty and ninety minutes of agitation, the suspensions were filtered and the dye concentrations in the supernatant solutions were measured using a UV-Vis spectrophotometer. The experimental design results were analysed using the DESIGN EXPERT software to estimate the statistical parameters as well as effects, such as the half-normal probability plot of the standardized effects and the Pareto, interaction and main effects plots. A 2³ two-level factorial design with 8 experimental runs for MB and BG removal was studied, and a matrix was developed according to the low and high levels of the factors, represented by −1 and +1, respectively. The coded values of the variables with the responses (% removal efficiency) are illustrated in Table 4.
The interactions between the independent variables were determined with ANOVA, and the main effects on MB and BG adsorption were identified based on the P value at the >95% confidence level.
The codified equation below was used to describe the 2³ factorial design of MB and BG removal by the four different activated carbons:
Table 3. Test factors for dye adsorption.
Y = β0 + β1A + β2B + β3C + β12AB + β13AC + β23BC + β123ABC (3)

where Y is the predicted response, β0 is the intercept, the βi are the regression coefficients for the main factors and their interactions, A is the contact time (mins), B is the adsorbent dosage (g), and C is the pH. The interaction and main effects, regression coefficients, standard deviation of each coefficient, standard errors, and T and P values were also computed.
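To make the coded design concrete, the following sketch builds the full 2³ design matrix and recovers the model coefficients by least squares. The eight response values are placeholders, not the study's measurements; with 8 runs and 8 coefficients, the saturated model reproduces them exactly.

```python
# Placeholder 2^3 factorial analysis (responses are made up, not the paper's data).
import numpy as np

# Coded levels: A = contact time, B = adsorbent dosage, C = pH (-1 low, +1 high)
A = np.array([-1,  1, -1,  1, -1,  1, -1,  1])
B = np.array([-1, -1,  1,  1, -1, -1,  1,  1])
C = np.array([-1, -1, -1, -1,  1,  1,  1,  1])
Y = np.array([72.0, 80.5, 88.1, 95.2, 70.3, 77.9, 90.6, 99.1])  # % removal (placeholder)

# Model (3): Y = b0 + b1*A + b2*B + b3*C + b12*AB + b13*AC + b23*BC + b123*ABC
X = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])
coeffs = np.linalg.solve(X, Y)                 # saturated design: exact solve

for name, b in zip(["b0", "A", "B", "C", "AB", "AC", "BC", "ABC"], coeffs):
    print(f"{name:>4}: {b:+7.3f}")             # factor effect = 2 * coefficient
```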
From Table 5 and Table 6, when a factor effect is positive, removal efficiency increases as that factor is changed from its low to its high level (as seen for contact time, adsorbent dosage and pH); when an effect is negative, removal efficiency decreases from the low to the high level (as seen for the AC interaction). Furthermore, the fitted models, with squared correlation coefficients (R²) of 0.9992 for MB and 0.9939 for BG, were in good agreement with the statistical model. The factors A, B and C are frequently called main effects, as they refer to the primary factors of interest in the experiment, while AB, AC and BC are interaction effects. In this work, factors A, B, C, AC, and BC are all significant effects. The AB and ABC effects were insignificant when compared with the other
Table 4. Design matrix for MB and BG.
Table 5. Interaction of dosage versus pH for MB.
Table 6. Interaction of dosage versus pH for BG.
effects. Thus the AB and ABC effects were neglected and not included in the model equation. The model equations, relating the levels of the parameters to the removal efficiency, were then obtained by substituting in the regression coefficients for both MB and BG:
Equations (4) and (5) indicated that the two-variable interactions were significant. Evidence for the positive (BC) and negative (AC) interactions was quite strong; thus, they could not be omitted from the model. Although the main effects give a clear picture on their own, including the interactions between these parameters gives a better description of the process.
The interactions between the independent factors were determined with analysis of variance (ANOVA), and the main effects on dye adsorption, identified based on the P value at the >95% confidence level, are presented in ANOVA Table 7 and Table 8. The tables present the possible positive and negative two-variable interactions among the variables A, B, and C for the removal efficiency (%).
Table 7. Statistical parameters and ANOVA for removal efficiency (%) for MB.
Table 8. Statistical parameters and ANOVA for removal efficiency (%) for BG.
It was observed that the effect of pH was more noticeable when the adsorbent dosage was high; at lower adsorbent dosage, the effect of pH was not as strong. In addition, the pH and adsorbent dosage effects were both stronger at higher contact time. The interaction effects are easily estimated and tested using the ANOVA, as shown in Figure 3. The ANOVA results for MB and BG are shown in Table 7 and Table 8 above. The sums of squares used to estimate the factor effects, the Fisher's F ratios (defined as the ratio of the mean square effect to the mean square error) and the P values (defined as the level of significance leading to rejection of the null hypothesis) are also presented. The relative importance of the individual and interaction effects is given by the Pareto chart of the standardized effects in Figure 4. In order to identify whether the calculated effects were significantly different from zero, Student's t-test was performed; the horizontal columns in the Pareto chart show these values for each effect. The half-normal probability plot of the standardized effects at P = 0.05, used to evaluate the significance of each factor and its interactions on removal efficiency (%), is presented in Figure 5. The half-normal probability plot can be separated into two regions; the region on the right side contains all the effects that were significant. For a
Figure 3. Interaction graph of dosage versus pH for MG and BG.
95% confidence level and seven degrees of freedom, the Bonferroni limit was 11.7687 and the t-value limit was 4.30265 for MB and BG adsorption. The minimum statistically significant effect magnitude for the 95% confidence level is represented by the vertical line in the chart. Five values higher than 4.30265 (P = 0.05) were located to the left of the dashed line and were significant. It can be concluded that the adsorbent dosage had the strongest effect on the overall adsorption procedure for both MB and BG. The first (strongest), second, third, fourth and fifth most important factors for the overall optimization of the batch adsorption process are arranged in sequential order in the Pareto chart for both MB and BG. Factors B, A, C, AC and BC show the order of significance of each effect on the MB adsorption
Figure 4. Pareto chart of effects for MB and BG adsorption.
Figure 5. Half-normal probability plot of the standardized effects for MB and BG at P = 0.05.
process. Similarly, factors B, C, A, BC and AC show the order of significance of each effect on the BG adsorption process. The coefficient of the AC interaction for both MB and BG adsorption had a negative value; thus, a decrease in contact time together with a reduction in pH caused an increase in removal efficiency (%). This antagonistic effect would not be distinguished in a univariate optimization of the dye removal process.
4. Conclusions
Carbons produced by chemical activation of carbonized coconut shell, eucalyptus tree, corn cob and flamboyant pod with 63% H3PO4 as the activating agent were all capable of removing Methylene Blue and Brilliant Green dye molecules from aqueous solutions. The adsorption was favoured in basic medium (pH 11), and the adsorption efficiency (%) was also found to increase with increasing adsorbent dosage and contact time.
The equilibrium data were in good agreement with the Freundlich model. Additionally, the influence of pH (4 and 11), adsorbent dosage (2 g and 5 g) and contact time (30 mins and 90 mins) on removal efficiency (%) was studied using the 2³ two-level factorial design and examined using analysis of variance (ANOVA), the t-test, and the Bonferroni test. According to the Pareto chart, the half-normal and normal probability plots, and the main effects and interaction plots in the variance analysis, the most significant factors for removal efficiency (%) were found to be adsorbent dosage (B), contact time (A), pH (C), the interaction between adsorbent dosage and pH (BC) and the interaction between contact time and pH (AC), respectively.
Based on these results, the carbons produced from chemical activation of carbonized coconut shell, eucalyptus tree, corn cob and flamboyant pod could be employed as effective and low-cost adsorbents. These adsorbents could therefore be considered as an alternative to commercial activated carbons for the removal of Methylene Blue and Brilliant Green from aqueous solutions.
ElQada, E.N., Allen, S.J. and Walker, G.M. (2008) Adsorption of Basic Dyes from Aqueous Solution onto Activated Carbons. Chemical Engineering Journal, 135, 174-184.
Jadhav, S.B., Phugare, S.S., Patil, P.S. and Jadhav, J.P. (2011) Biochemical Degradation Pathway of Textile Dye Remazol Red and Subsequent Toxicological Evaluation by Cytotoxicity, Genotoxicity and Oxidative Stress Studies. International Biodeterioration and Biodegradation, 65, 733-743.
Gupta, V.K., Gupta, B., Rastogi, A., Agarwal, S. and Nayak, A. (2011) A Comparative Investigation on Adsorption Performances of Mesoporous Activated Carbon Prepared from Waste Rubber Tire and Activated Carbon for a Hazardous Azo Dye—Acid Blue 113. Journal of Hazardous Materials, 186, 891-901.
Bonnamy, S. (1999) Carbonization of Various Precursors. Effect of Heating Rate. Part II. Transmission Electron Microscopy and Physicochemical Studies. Carbon, 37, 1707-1724.
Bourrat, X. (1997) Structure in Carbons and Carbon Artefacts. In: Marsh, H. and Rodriguez-Reinoso, F., Eds., Sciences of Carbon Materials, Universidad de Alicante, Secretariado de Publications, Spain, 1-97.
Marsh, H. (2001) Carbon Mesophase. In: Buschew, K.H, Cahn, J., Robert, W., Flemings Merten, C., IIscher, B., Kramer Edward, J., Subhash, M. and Patrick, V., Eds., Encyclopedia of Materials: Science and Technology, Elsevier Science Ltd., Amsterdam, 926-932.
McGuire, M.J. and Suffet, I.H., Eds. (1983) Treatment of Water by Granular Activated Carbon, Advances in Chemistry Series, 202. American Chemical Society, Washington, 599. |
The growing cholera crisis in Yemen has, unfortunately, earned the title of "the largest cholera outbreak in the world."
When we covered this outbreak last month in the article Yemen may collapse under cholera outbreak, the number of deaths was roughly 800, with the country reporting about one death per hour. According to the latest figures, the number of deaths is currently hovering around 1,500 people with an estimated 200,000 - 250,000 total cases suspected in the country and an average of 5,000 new cases each day.
Let me write that again, because it is worth repeating - there are 5,000 new cases each day.
A few aspects of cholera outbreaks make this news even more devastating. The first is that, like many infections, children are a more vulnerable population than adults and they are bearing the brunt of this outbreak, accounting for about a quarter of the dead.
The second reason is that the solution is (seemingly) so simple. Unlike other infectious outbreaks (like Ebola or Zika), we know how cholera can be stopped and have the tools on hand. Essentially, clean water is all that is necessary to both prevent and treat the disease. Because the causative agent of cholera is the aquatic bacterium Vibrio cholerae, it is spread through contaminated water. But when the water is cleaned, the spread of the infection can be stopped.
Once an outbreak occurs, people can be treated using the same means (clean water) in the form of rehydration therapy. In most cases, rehydration therapy (an IV of saline) restores the hydration that the infection robs a victim of and gives that person's body time to combat the infection. There is also a vaccine available; however, the number of vaccine doses available to Yemen is no match for an infection that spreads as effectively as Vibrio cholerae.
Lastly, if we look to the colossal failure of the UN's handling of the cholera epidemic that they themselves brought upon Haiti, people dying of cholera, put simply, do not seem to be a priority in today's world.
A map showing the location of the cholera outbreaks. |
Place yourself in a bumper car at a carnival waiting to bump into your friends. Soon enough you hear the small engine of your bumper car start and you begin to move around, bumping into anyone in your way. While the motion of your car is mostly controlled by the steering wheel, random events—like fluctuations in the motor power, your car hitting small bumps on the floor, and other cars hitting you—can affect the motion as well. What if I told you that a cell and its parts function in a similar way? Just as your car is powered by electricity, molecular motors—bio-molecules that can convert chemical energy into mechanical work—power the movement of living organisms by generating forces. In order to produce these forces, molecular motors depend on an organic molecule called ATP. |
Endoscopes play a crucial role in diagnosing and investigating the various causes of numerous health issues in patients.
With the help of an endoscope, doctors are able to identify obstructions, investigate the digestive system, cauterize wounds, and perform biopsies. Endoscopes are complex systems that are designed to play a vital role in medical care.
By understanding what an endoscope is and what its main components are, it is possible to know how it functions and how to operate it.
What is an Endoscope?
Endoscopes are medical devices used in endoscopy, a medical procedure performed to diagnose the internal health condition of a patient. Unlike scans or x-rays, endoscopy delivers a clearer view of the patient's internal condition.
There are different types of endoscopes available; each one is used to perform an endoscopy of a specific body part.
What are the Main Components of an Endoscope?
An endoscope comprises several primary components, but depending on the type of endoscope and the procedure, the specifics may vary.
A standard endoscope comprises various parts, such as:
- A rigid or flexible tube
- A system to emit light to improve the visibility of the area being diagnosed. This light source sits in the external body of the endoscope and is focused through optical fibers.
- A lens to transmit the image of the internal system of the patient to the viewer or operator. The lens is either a rigid lens or a multi-fiber optic lens.
- An eyepiece, which is used to view and capture the images from the internal area of the patient; the images are sent to a screen for a clearer view.
- An additional channel that accommodates the manipulators of the medical instrument for surgical processes.
The Additional Components of a Flexible Endoscope
A flexible endoscope incorporates numerous external components other than the above-mentioned ones. Some of the external components of a flexible endoscope are:
- A universal cord, or umbilical cable, that connects the light path plug to the control head.
- An insertion tube, which is placed within the body of the patient and becomes highly contaminated throughout the procedure. Its distal end houses the video scope microchips and the openings for the water and air functions; it is also used for suction.
- A bending section adjacent to the distal end.
What Are Some Prominent Uses of an Endoscope?
Primarily, endoscopes are used for exploratory procedures, but they are also crucial for performing certain small surgical procedures. Their fine, flexible design enables endoscopes to reach various areas that would otherwise require invasive surgery.
Using an endoscope, it is possible to perform treatment with far less cost, time, and physical trauma than standard operations.
Surgical endoscopes are maneuvered using knobs or cables and are used to perform biopsies with small forceps attached to their tips. They are connected to a tube on the main cable that enables the doctor to remove blockages or fluids using a simple suction motion.
Some types of endoscopes also transmit precise laser beams that are useful to eliminate dead tissues, heal wounds, and create incisions with high accuracy.
While the basic functionality of an endoscope remains the same, its design and technology continue to evolve. Continuous efforts are made to make endoscopes more technologically advanced and streamlined. For example, there are endoscopes available with a micro-camera, which the patient can swallow to capture images of their internal system.
However, the primary components of an endoscope remain the same. If you wish to purchase high-quality endoscopes from big brands at competitive prices then visit us soon.
About The Author:
Dr. David Taylor, MD, Ph.D., a registered medical professional, is the founder and owner of BestRatedDocs.com. He holds an M.D. from Drexel University & a Ph.D. from Indiana University School of Medicine. Dr. David loves to utilize technology to improve healthcare and he does it daily through BestRatedDocs.com. He founded the company in 2016 with the vision to make the discoverability of the best healthcare facilities & best products simple and easy. |
Christendom: The Reformation: Post Tenebras Lux introduces students to the great minds of the Reformation: John Calvin, Martin Luther, Thomas Cranmer, Erasmus, Spenser, and Chaucer. But a study of this tumultuous period of history would be sorely lacking without a thorough understanding of the historical setting of the Reformation. Wesley Callihan and Dr. Chris Schlect team up to lay a solid foundation for understanding the origins and struggles of the Reformation, as well as its theology and influence. Students will read part of Calvin’s Institutes of the Christian Religion, as well as Chaucer’s Canterbury Tales and Spenser’s Faerie Queene.
Lecture List for The Reformation:
1. Introduction to Renaissance and Reformation
2. Canterbury Tales 1
3. Canterbury Tales 2
4. Canterbury Tales 3
5. From Premodern to Modern Times
6. Predecessors to the Reformation
7. Luther and 16th Century Reform
8. International Calvinism
9. The Reformation in England
10. Spenser 1
11. Spenser 2
12. Spenser 3
WHAT IS “OLD WESTERN CULTURE”?
“Old Western Culture” is a literature curriculum named after a term coined by C.S. Lewis to describe the fabric of ideas that shaped Western Civilization. For centuries, a “Great Books” education lay at the heart of what it meant to be educated. It was the education of the Church Fathers, of the Medieval Church, of the Reformers, and of all the Founding Fathers of the United States.
- It is a CLASSICAL EDUCATION, based on the great books of western civilization.
- It is a CHRISTIAN EDUCATION, which sees the history and literature of the West through the eyes of the Bible and historic Christianity.
- It is an INTEGRATED HUMANITIES CURRICULUM, bringing together literature, history, philosophy, doctrine, geography, and art.
- It is a HOMESCHOOL oriented curriculum, made by homeschoolers with the needs of homeschoolers in mind, including flexibility, affordability, and ease-of-use.
We bring a master-teacher into your home, and encourage parents to gain an overview of Western Civilization themselves by watching the video lessons with their children.
HOW DOES OLD WESTERN CULTURE WORK?
Old Western Culture is a video course built around a master teacher, Wes Callihan. With decades of teaching experience, he guides students through the story of Western civilization. This unit contains 12 video lessons. Each lesson begins with a brief review before jumping into summary, commentary, analysis, and inter-disciplinary connections for the works covered. After each lesson, students complete the assigned readings and answer comprehension questions in the Student Workbook or online workbook.
WHY DO PEOPLE LOVE WES CALLIHAN?
Wes Callihan is a master storyteller! With a remarkable ability to communicate a passion for history and literature, he makes profound ideas accessible, relevant, and interesting. He is also known for his distinctive “rabbit trails,” forays into funny and obscure historical anecdotes, which have a way of showing up at the dinner table. (After all, rabbit trails are “hooks for the imagination and memory.”) Wes Callihan is a true classical scholar, fluent in both Latin and Greek. He lectures only from the notes in the margins of his worn copies of the Great Books. “Meet him” through THIS VIDEO TOUR of his personal library, which doubles as a mini-lecture!
Old Western Culture is intended for mature and discerning students. We recommend this course for ages 14 and up. The course will deal with mature themes such as paganism, sexual immorality, battle scenes (mostly in actual reading), and classical paintings. Old Western Culture is meant to equip your child with a Biblical lens from which to process these themes. We assume your child has a working knowledge of the Bible and basic Christian doctrine.
Each year of Old Western Culture is a double-credit Humanities course, most commonly broken down into 1 Literature credit and 1 History credit. The double credit assumes that the student will watch all the videos, do the required reading, answer the daily worksheets, and take 4 exams (one for each unit). This is a robust course academically and requires a fair bit of reading; Wesley Callihan coaches students on how to approach the reading in the videos. The average daily reading load is 30-40 pages. As an “integrated humanities” course, Old Western Culture constantly incorporates history, literature, theology, philosophy, art, and art history, all through the eyes of the Great Books. |
What is it?
Stereoisomerism occurs when substances have the same molecular formula, but a different arrangement of their atoms in space. Cis-trans isomerism is one type of this isomerism. It applies to:
- alkenes and other organic compounds that contain C=C bonds
- cyclic alkanes.
In A Level Chemistry, you only need to know about cis-trans isomerism due to the presence of a C=C bond.
What is here?
You can see models of the two cis-trans isomers of but-2-ene. This alkene has cis-trans isomers because each carbon atom involved in the C=C bond has two different groups attached.
For comparison, you can also see a model of but-1-ene, which does not form these isomers.
You should be prepared to identify cis-trans isomers for simple organic compounds like these for your examinations, and you should also be able to name them.
⚠ But-1-ene does not have geometrical (cis-trans) isomers, even though it has a C=C bond! One of the C atoms in this bond has two identical groups (H atoms) attached.
The name cis or trans depends on where the identical groups are located:
- cis if they are on the same side of the C=C bond
- trans if they are on opposite sides of the C=C bond.
So in cis-but-2-ene, the two —CH3 groups are on one side of the C=C bond and the two H atoms are on the other side. In trans-but-2-ene, each pair is split across opposite sides.
What happens if there are three or four different groups?
In this case, you cannot use the cis-trans naming system. Instead, you must use the E–Z naming system. This is more complicated but much more flexible. |
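Cheminformatics software applies the same logic. Below is a brief illustrative sketch (assuming the open-source RDKit library is installed; the SMILES strings and calls are standard RDKit usage, not part of the A Level specification) showing how cis/trans arrangements are encoded and reported using the E–Z labels mentioned above:

```python
# A minimal sketch using RDKit (an open-source cheminformatics toolkit).
# '/' and '\' in SMILES encode which side of the C=C bond each group sits on.
from rdkit import Chem

molecules = {
    "cis-but-2-ene":   r"C/C=C\C",  # identical groups on the same side
    "trans-but-2-ene": r"C/C=C/C",  # identical groups on opposite sides
    "but-1-ene":       "C=CCC",     # one C=C carbon has two H atoms: no isomers
}

for name, smiles in molecules.items():
    mol = Chem.MolFromSmiles(smiles)
    Chem.AssignStereochemistry(mol, cleanIt=True, force=True)
    stereo = [bond.GetStereo() for bond in mol.GetBonds()
              if bond.GetStereo() != Chem.BondStereo.STEREONONE]
    print(name, "->", stereo or "no cis-trans isomerism")
```

Running this would report a Z (cis) flag for cis-but-2-ene, an E (trans) flag for trans-but-2-ene, and nothing for but-1-ene, matching the reasoning above. |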
"Speech" refers to how we say sounds and words.
When children pronounce sounds incorrectly, for example, saying "wabbit" for "rabbit", they are experiencing speech difficulties. This area of speech is often referred to as articulation.
Difficulties in this area can be due to a variety of underlying causes, such as incorrect placement of the articulators (i.e., lips, tongue, etc.). Some children may have difficulty hearing sounds. Others may have limited oral motor skills, including limited strength and coordination of lip, tongue, cheek, and jaw movements.
Some children may have difficulty with fluency, otherwise known as a stutter or stammer. A child with fluency difficulties may repeat sounds or words in a sentence, for example, "I l-l-like p-p-pizza."
At Speech Club we always make careful assessments of present difficulties and underlying causes while creating an effective treatment plan. |
Fragile X syndrome is a genetic condition that causes a range of developmental problems including learning disabilities and cognitive impairment. Usually, males are more severely affected by this disorder than females.
Affected individuals usually have delayed development of speech and language by age 2. Most males with fragile X syndrome have mild to moderate intellectual disability, while about one-third of affected females are intellectually disabled. Children with fragile X syndrome may also have anxiety and hyperactive behavior such as fidgeting or impulsive actions. They may have attention deficit disorder (ADD), which includes an impaired ability to maintain attention and difficulty focusing on specific tasks. About one-third of individuals with fragile X syndrome have features of autism spectrum disorder that affect communication and social interaction. Seizures occur in about 15 percent of males and about 5 percent of females with fragile X syndrome.
Most males and about half of females with fragile X syndrome have characteristic physical features that become more apparent with age. These features include a long and narrow face, large ears, a prominent jaw and forehead, unusually flexible fingers, flat feet, and in males, enlarged testicles (macroorchidism) after puberty.
Mutations in the FMR1 gene cause fragile X syndrome. The FMR1 gene provides instructions for making a protein called FMRP. This protein helps regulate the production of other proteins and plays a role in the development of synapses, which are specialized connections between nerve cells. Synapses are critical for relaying nerve impulses.
Nearly all cases of fragile X syndrome are caused by a mutation in which a DNA segment, known as the CGG triplet repeat, is expanded within the FMR1 gene. Normally, this DNA segment is repeated from 5 to about 40 times. In people with fragile X syndrome, however, the CGG segment is repeated more than 200 times. The abnormally expanded CGG segment turns off (silences) the FMR1 gene, which prevents the gene from producing FMRP. Loss or a shortage (deficiency) of this protein disrupts nervous system functions and leads to the signs and symptoms of fragile X syndrome.
Males and females with 55 to 200 repeats of the CGG segment are said to have an FMR1 gene premutation. Most people with this premutation are intellectually normal. In some cases, however, individuals with a premutation have lower than normal amounts of FMRP. As a result, they may have mild versions of the physical features seen in fragile X syndrome (such as prominent ears) and may experience emotional problems such as anxiety or depression. Some children with an FMR1 premutation may have learning disabilities or autistic-like behavior. The premutation is also associated with an increased risk of disorders called fragile X-associated primary ovarian insufficiency (FXPOI) and fragile X-associated tremor/ataxia syndrome (FXTAS).
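These repeat-count ranges lend themselves to a simple illustration. The toy sketch below (illustrative only, not a diagnostic tool; the function name is invented for this example) classifies an FMR1 allele by its CGG repeat count using the thresholds described above:

```python
def classify_cgg_repeats(n: int) -> str:
    """Classify an FMR1 allele by CGG repeat count (ranges from the text)."""
    if n > 200:
        return "full mutation (fragile X syndrome)"
    if 55 <= n <= 200:
        return "premutation"
    return "typical range (roughly 5 to 40 repeats)"

for repeats in (30, 120, 250):
    print(repeats, "->", classify_cgg_repeats(repeats))
```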
Fragile X syndrome occurs in approximately 1 in 4,000 males and 1 in 8,000 females.
Fragile X syndrome is inherited in an X-linked dominant pattern. A condition is considered X-linked if the mutated gene that causes the disorder is located on the X chromosome, one of the two sex chromosomes. (The Y chromosome is the other sex chromosome.) The inheritance is dominant if one copy of the altered gene in each cell is sufficient to cause the condition. X-linked dominant means that in females (who have two X chromosomes), a mutation in one of the two copies of a gene in each cell is sufficient to cause the disorder. In males (who have only one X chromosome), a mutation in the only copy of a gene in each cell causes the disorder. In most cases, males experience more severe symptoms of the disorder than females.
In women, the FMR1 gene premutation on the X chromosome can expand to more than 200 CGG repeats in cells that develop into eggs. This means that women with the premutation have an increased risk of having a child with fragile X syndrome. By contrast, the premutation in men does not expand to more than 200 repeats as it is passed to the next generation. Men pass the premutation only to their daughters. Their sons receive a Y chromosome, which does not include the FMR1 gene. |
Most sociologists define social class as a grouping based on similar social factors like wealth, income, education, and occupation. These factors affect how much power and prestige a person has. Social stratification reflects an unequal distribution of resources. In most cases, having more money means having more power or more opportunities. Stratification can also result from physical and intellectual traits. Categories that affect social standing include family ancestry, race, ethnicity, age, and gender.
In the United States, standing can also be defined by characteristics such as IQ, athletic abilities, appearance, personal skills, and achievements. In the last century, the United States has seen a steady rise in its standard of living, the level of wealth available to a certain socioeconomic class in order to acquire the material necessities and comforts to maintain its lifestyle. The standard of living is based on factors such as income, employment, class, poverty rates, and housing affordability.
Because standard of living is closely related to quality of life, it can represent factors such as the ability to afford a home, own a car, and take vacations.
In the United States, a small portion of the population has the means to the highest standard of living. Wealthy people receive the most schooling, have better health, and consume the most goods and services.
Wealthy people also wield decision-making power. But wealth is not evenly distributed. Millions of women and men struggle to pay rent, buy food, find work, and afford basic medical care. Women who are single heads of household tend to have a lower income and lower standard of living than their married or male counterparts. In the United States, as in most high-income nations, social stratifications and standards of living are in part based on occupation (Lin and Xie). Employment in medicine, law, or engineering confers high status.
Teachers and police officers are generally respected, though not considered particularly prestigious. At the other end of the scale, some of the lowest rankings apply to positions like waitress, janitor, and bus driver.
The size, income, and wealth of the middle class have all been declining for decades. This is occurring at a time when corporate profits and CEO pay have risen sharply (Popken). While several economic factors in the United States can be improved (inequitable distribution of income and wealth, feminization of poverty, stagnant wages for most workers while executive pay and profits soar, a declining middle class), we are fortunate that the poverty experienced here is most often relative poverty and not absolute poverty.
Whereas absolute poverty is deprivation so severe that it puts survival in jeopardy, relative poverty is not having the means to live the lifestyle of the average person in your country.
As a wealthy developed country, the United States has the resources to provide the basic necessities to those in need through a series of federal and state social welfare programs, such as the Supplemental Nutrition Assistance Program (SNAP), which used to be known as the food stamp program. The program began in the Great Depression, when unmarketable or surplus food was distributed to the hungry.
It was not until 1961 that President John F. Kennedy initiated a food stamp pilot program. His successor, Lyndon B. Johnson, was instrumental in the passage of the Food Stamp Act in 1964. Participation grew from hundreds of thousands of individuals in the program's early years to tens of millions today. In March 2008, on the precipice of the Great Recession, participation hovered around 28 million people.
During the recession, that number escalated to more than 40 million (USDA). Does taste or fashion sense indicate class? Is there any way to tell if this young man comes from an upper-, middle-, or lower-class background?
For sociologists, categorizing class is a fluid science. Sociologists generally identify three levels of class in the United States: upper, middle, and lower class.
Within each class, there are many subcategories. One economist, J. D. Foster, defines the highest-earning 20 percent of U.S. citizens as “upper income” and the lowest-earning 20 percent as “lower income.” One sociological perspective distinguishes the classes, in part, according to their relative power and control over their lives.
In contrast, the lower class has little control over their work or lives. Below, we will explore the major divisions of U.S. social class. Members of the upper class can afford to live, work, and play in exclusive places designed for luxury and comfort.
The upper class is considered the top, and only the powerful elite get to see the view from there. Money provides not just access to material goods, but also access to a lot of power. As corporate leaders, members of the upper class make decisions that affect the job status of millions of people. As media owners, they influence the collective identity of the nation. They run the major network television stations, radio broadcasts, newspapers, magazines, publishing houses, and sports franchises.
As board members of the most influential colleges and universities, they influence cultural attitudes and values. As philanthropists, they establish foundations to support social causes they believe in. As campaign contributors, they sway politicians and fund campaigns, sometimes to protect their own economic interests. The upper class is often divided into two groups: those with “old money” and those with “new money.” While both types may have equal net worth, they have traditionally held different social standings. People of old money, firmly situated in the upper class for generations, have held high prestige.
Their families have socialized them to know the customs, norms, and expectations that come with wealth. Some study business or become lawyers in order to manage the family fortune. Others, such as Paris Hilton and Kim Kardashian, capitalize on being rich socialites and transform that into celebrity status, flaunting a wealthy lifestyle. However, new-money members of the upper class are not oriented to the customs and mores of the elite.
They have not established old-money social ties. People with new money might flaunt their wealth, buying sports cars and mansions, but they might still exhibit behaviors attributed to the middle and lower classes.
These members of a club likely consider themselves middle class. Many people consider themselves middle class, but there are differing ideas about what that means. That helps explain why, in the United States, the middle class is broken into upper and lower subcategories. Comfort is a key concept to the middle class. Middle-class people work hard and live fairly comfortable lives. Upper-middle-class people tend to pursue careers that earn comfortable incomes.
They provide their families with large homes and nice cars. They may go skiing or boating on vacation. Their children receive high-quality education and healthcare (Gilbert). In the lower middle class, people hold jobs supervised by members of the upper middle class. They fill technical, lower-level management or administrative support positions.
Compared to lower-class work, lower-middle-class jobs carry more prestige and come with slightly higher paychecks. With these incomes, people can afford a decent, mainstream lifestyle, but they struggle to maintain it. In addition, their grip on class status is more precarious than in the upper tiers of the class system. When budgets are tight, lower-middle-class people are often the ones to lose their jobs. This man is a custodian at a restaurant. His job, which is crucial to the business, is considered lower class.
The lower class is also referred to as the working class. Just like the middle and upper classes, the lower class can be divided into subsets: the working class, the working poor, and the underclass. Compared to the lower middle class, lower-class people have less of an educational background and earn smaller incomes. They work jobs that require little prior skill or experience and often do routine tasks under close supervision. Working-class people, the highest subcategory of the lower class, often land decent jobs in fields like custodial or food service.
The work is hands-on and often physically demanding, such as landscaping, cooking, cleaning, or building. Beneath the working class is the working poor. Like the working class, they have unskilled, low-paying employment. However, their jobs rarely offer benefits such as healthcare or retirement planning, and their positions are often seasonal or temporary.
They work as sharecroppers, migrant farm workers, housecleaners, and day laborers. Some are high school dropouts. Some are illiterate, unable to read job ads. How can people work full-time and still be poor? Even working full-time, millions of the working poor earn incomes too meager to support a family. Even for a single person, the pay is low. A married couple with children will have a hard time covering expenses. Members of the underclass live mainly in inner cities.
Many are unemployed or underemployed. Those who do hold jobs typically perform menial tasks for little pay. Some of the underclass are homeless.
From the Handbook of the Sociology of Mental Health: Social stratification refers to differential access to resources, power, autonomy, and status across social groups. Social stratification implies social inequality; if some groups have access to more resources than others, the distribution of those resources is inherently unequal. Societies can be stratified on any number of dimensions. In the United States, the most widely recognized stratification systems are based on race, social class, and gender.
Through an ethnographic exploration of everyday life infused with Marxist urbanism and critical theory, this work charts out the changes taking place in Muslim neighbourhoods in Delhi in the backdrop of rapid urbanization and capitalist globalization. It argues that there is an implicit materialist logic in prejudice and segregation experienced by Muslims. Further, it finds that different classes within Muslims are treated differentially in the discriminatory process.
The study of social inequality is and has been one of the central preoccupations of social scientists.
When we initiate new courses, or revise old ones, many of us would like to be able to consult the syllabi constructed by our colleagues who have taught similar courses. To meet this need for courses concerned with social stratification, social mobility, and social inequality, the Research Committee on Social Stratification and Social Mobility (RC 28), with the generous technical support of the California Center for Population Research, has established an archive of syllabi for such courses. These syllabi have been contributed mainly by our members, although anyone teaching pertinent courses is welcome to add syllabi to our collection. To contribute, please submit your syllabus. Your syllabus usually will be added to the collection in just a few days.
The fire that destroyed large sections of the iconic cathedral Notre-Dame de Paris last April was a national tragedy. Now, months on, scientists with the French national research organization CNRS are embarking on a multimillion-euro effort to study the 850-year-old building and its materials with the goal of illuminating how it was constructed. With unprecedented access to the cathedral’s fabric — including timber, metalwork and the building’s foundations — in the wake of the fire, scientists also hope that their work will arm them with information to help the restoration.
The research could “write a new page in the history of Notre-Dame, because there are currently many grey areas”, says Yves Gallet, a historian of Gothic architecture at the University of Bordeaux-Montaigne, who is in charge of a 30-strong research team investigating the masonry.
Construction of the cathedral, considered one of the finest examples of the French Gothic style, began in the twelfth century. The structure was modified in the Middle Ages and extensively restored in the nineteenth century by the architect Eugène Viollet-Le-Duc. But it has been the subject of surprisingly little scientific research, compared with other Gothic monuments in France and elsewhere, says Martine Regert, a biomolecular archaeologist at the CNRS’s CEPAM centre for the study of historical cultures and environments in Nice, who is one of the Notre-Dame project’s leaders. Many questions remain about the structure, such as which sections are medieval and whether Viollet-Le-Duc reused some of the older materials, says Regert.
The fire on 15 April, possibly caused by an electrical fault, destroyed the cathedral’s roof and spire, and caused part of its vaulted ceiling to collapse. The walls still stand, and the building will eventually be restored — although this is likely to take longer than the ambitious five years initially forecast, and is set to cost hundreds of millions of euros.
But until then, the interior of the building holds piles of debris: fallen stonework, burnt timbers and damaged metal artefacts, all now available for scientific study. The absence of tourists might also make it possible to use radar imaging to probe the foundations, which have been little investigated. Even some parts of the structure that were largely undamaged are now more accessible for inspection, says Philippe Dillmann, a specialist on historical metal artefacts at the CNRS Laboratory for Archaeomaterials and Alteration Forecasting in Gif-sur-Yvette, who is coordinating the project with Regert.
The CNRS project will focus on seven topics: masonry, wood, metalwork, glass, acoustics, digital data collection and anthropology. In all, the effort will involve more than 100 researchers in 25 laboratories and will last for 6 years.
Gallet’s team will study Notre-Dame’s stones to identify the quarries that supplied them and “reconstruct the supply networks and the economy of the site”. Studying the mortar used to bind the stones together could reveal how different compositions were used for the various structural elements — vaulting, walls and flying buttresses. The mortar used lime prepared from sedimentary limestone, which might contain fossil remnants that could reveal where it originated. A better knowledge of the historical materials could inform choices made in restoration, says Gallet.
The team will also analyse weaknesses in the remaining structure caused by the high temperatures of the fire, the fall of masonry and the water used to extinguish the flames. Damage to the stones was exacerbated last July by extreme heat waves in Paris, which “brutally dried” and weakened the masonry, says Gallet. A radar study will determine how solid the foundations are before restorers erect scaffolding in the crossing between the nave and the transept to allow them to dismantle the unstable remnants of the nineteenth-century spire.
And with the help of historians, Gallet’s team hopes to gain a deeper understanding of the structural engineering of Gothic architecture as a whole, and Notre-Dame’s place in that story.
Out of the ashes
Meanwhile, a team of about 50 will focus on Notre-Dame’s famous woodwork — especially the ‘forest’ of timbers in the roof space above the vaults — which has either burnt away or lies charred in the nave. These blackened remains could be tremendously valuable to researchers.
“The burnt structure constitutes a gigantic laboratory for archaeology,” says Alexa Dufraisse, an archaeologist at the National Museum of Natural History in Paris, who will lead the multidisciplinary wood team. The group will include archaeologists, historians, dendrochronologists, biogeochemists, climatologists, carpenters, foresters and engineers specializing in wood mechanics.
“Wood is an extraordinary source of information,” says Regert. Initial observations have confirmed that the ‘forest’ is made of oak, but studies will pinpoint the exact species used and give researchers clues about the techniques and tools of medieval timber construction.
Tree-ring dating of timber beams could reveal the year and location in which the trees were felled, filling in gaps in knowledge about the sequence of construction. “Each tree records within its tissues the environment in which it has grown,” says Dufraisse. This kind of study “could never have been conducted without the destruction of the structure by fire”, she says.
In particular, says Regert, the wood is a climate archive. “Isotopic analyses of oxygen and carbon in the rings make it possible to determine the temperature and rainfall over time,” she says. The trees used in Notre-Dame grew between the eleventh and thirteenth centuries, during a warm period known as the medieval climate optimum, offering a reference period for natural climate warming to compare with anthropogenic warming today. “This period is poorly known because woods of that time is rare,” says Dufraisse.
Metal and masonry
A separate team will investigate the cathedral’s metalwork — in particular that used to support the stone and woodwork. “We want to understand the use of iron armatures in the different construction and restoration phases,” says archaeologist Maxime L’Héritier of the University of Paris 8, who will lead the study. Metal rods, for example, were used to support sections of masonry under tension, and medieval builders sometimes inserted iron chains into the stonework to strengthen it. L’Héritier says that there has never before been a study of changes in the use of iron in cathedral building over such a long period, from the Middle Ages to the nineteenth century.
His team will also study the lead from the roof — much of which was damaged or melted in the fire. The researchers aim to develop a chemical reference data set that records the ratios of lead isotopes and the presence of trace elements in the material, “to understand the evolution of lead quality and supply” — for example, to identify the mines from which the metal came. The group also wants to investigate how much lead was recycled when the roofing was restored in the nineteenth century. These results might also enable researchers to work out how much lead the fire released into the environment — a potential health hazard for the immediate vicinity.
Access all areas?
Collecting and excavating the materials for analysis is challenging. There are three main piles of debris — in the nave, the crossing and the north transept — as well as material still on top of the remaining vaults. But these are currently off-limits to people for safety reasons, Dillmann says — so robots and drones must do all the collecting. Some of this material might ultimately be reused in restoration.
“The first challenge is to collect all wooden elements, regardless of their level of carbonization,” says Dillmann. So far, he says, nearly 1,000 fragments have been collected and labelled — but the work is just beginning. Dufraisse says that this wood won’t be accessible to researchers for at least another three months, because it is currently too contaminated with lead. Researchers will need to calibrate how chemical signatures in the wood have been modified by the high temperatures of the fire. “I know we are going to be faced with technical problems, but I remain confident,” says Dufraisse.
The collection and analysis will need to be documented precisely and thoroughly. Livio de Luca, a specialist in digital mapping of architecture at the CNRS’s Mixed Research Unit in Marseille, will lead a team dedicated to creating a “digital ecosystem” that summarizes both the scientific research and the current and previous states of the cathedral, drawing on the work of scientists, historians, archaeologists, engineers and curators — and perhaps even on old tourist photos of the structure.
“It will be like a ‘digital twin’ of the cathedral, able to evolve as the studies progress,” de Luca says. It will include online models for 3D visualization of the building and its attributes — a kind of Google Earth for Notre-Dame, created from billions of data points, with the history and evolution of the structure superimposed on the spatial map.
As well as deepening our understanding of this monumental building, Regert hopes that the scientific studies will be useful when its ravaged vaults rise again. The results, she says, might “illuminate the choices that society will have to make for the restoration”. She hopes, too, that they could help to prevent such a catastrophic accident from happening again.
Nature 577, 153-154 (2020) |
Today in 1517, in Wittenberg, Saxony, Martin Luther nailed his ‘Disputation on the Power and Efficacy of Indulgences’ (also known as his 95 Theses) to the door of All Saints’ Church.
Although this may not seem to be a monumental act, it proved to have a huge impact on the religious, cultural and political traditions of Europe. For many, this event serves as the initial catalyst for the Protestant Reformation.
The 95 Theses challenged the pope’s authority by declaring that
“The pope has neither the will nor the power to remit any penalties beyond those imposed either at his own discretion or by canon law.”
In a period when utmost devotion to Catholicism and the Papacy was undisputed and assumed, Luther’s attack on the scope of the pope’s authority, as well as on the legitimacy of the sale of indulgences, was considered heretical. His refusal to withdraw these challenges to Papal power led to his excommunication in 1521.
“Any Christian whatsoever, who is truly repentant, enjoys plenary remission from penalty and guilt, and this is given him without letters of indulgence.” Here Luther encouraged a return to Scripture by championing the doctrine that forgiveness and redemption came from God himself – not the pope. He was voicing the theory that the pope was overstepping his boundaries by granting pardons and letters of indulgence, and by teaching Christians that purchasing them was the only way to truly gain redemption.
The Catholic Church had grown increasingly wealthy through this encouragement to purchase letters of indulgence, to the point where it became common practice. Luther, angered by the growth of idolatry (that is, worship of religious idols/images in the place of God) and dependence on the authority of the pope rather than God, made public his doctrine, which followed Scripture more closely.
He believed in the doctrine of justification and redemption by faith alone, as he did not believe that the pope had the authority to grant redemption through the purchase of indulgences.
Although the Protestant Reformation did not necessarily promote Luther’s theology as dictated in his 95 Theses, it is clear that Reformation theology and doctrine emerged and grew from his points. It could be argued that the 95 Theses were the foundation of the Reformation, as they allowed other theologians to follow suit, speak against the archaic values of the Catholic Church at the time, and encourage debate.
If in any doubt over the impact of the rise of this Evangelical theology (it was not named Protestant for a while yet), we need look no further than the Church of England. The writings and discussions of the new theologians and reformers across Europe allowed Henry VIII to consider an alternative route to gain his divorce rather than carry on fruitlessly by appealing to the Papacy. In the process of seeking alternative guidance he broke away from Rome and created the Church of England, which placed him firmly at the head of both state and religion in his own country. It is a Church and doctrine that is still followed in England today (although greatly modified and developed over the past 500 years). |
On this day in 1887, Grover Cleveland signed the Dawes Act, also called the General Allotment Act, which split up reservations held communally by Native American tribes into smaller units and distributed these units to individuals within the tribe. The act constituted a huge blow to tribal sovereignty.
The goal was to encourage farming and integration in American culture.
Under the Dawes Act, the head of each Native American family received 160 acres in an effort to encourage Native Americans to take up farming, live in smaller family units that were considered more American and renounce tribal loyalties. The government held such lands in trust for 25 years, until the recipients could prove themselves self-sufficient farmers. Before the family could sell their allotment, they were required to get a certificate of competency. If the family did not succeed at farming, the land reverted back to the federal government for sale, usually to white settlers. The Dawes Act reduced Native American landholdings from 138 million acres in 1887 to 78 million in 1900 and continued the trend of white settlement on previously Native American-held land. In addition, the law created federally funded boarding schools designed to assimilate Native American children into white society. Family and cultural ties were practically destroyed by the now-notorious boarding schools, in which children were punished for speaking their native language or performing native rituals.
President Franklin D. Roosevelt effectively abolished the Dawes Act in 1934, when the Indian Reorganization Act ended the allotment policy. |
What are the components of the soil?
Soil is a thin, non-compacted surface layer that covers the Earth's crust, and it is a very important part of the environment. There are different types of soil, which differ in color and texture.
The color of the soil helps scientists identify the elements and minerals inside it, while the texture of the soil may be smooth, granular, or rough and rocky.
The soil components
Soil is made of many components, such as water, air, silt, humus, and pieces of rock (composed of sand, clay, minerals, and gravel).
Rocks are the main source of the sand and clay that form the bulk of the soil. Humus is the decayed remains of animals and plants mixed with the soil's other components, and its color is dark brown or black.
How is the humus formed?
When living organisms (plants or animals) die, their bodies decay, forming humus. In this way, animals and plants affect the soil's composition.
Humus adds nutrients to the soil and affects its color, making it dark brown or black. Overall, soil is composed of minerals mixed with various microorganisms and the decayed material of dead organisms. |
How does exercise affect blood pressure?
Getting enough exercise can improve your blood pressure numbers and help get your blood pressure under control. A lack of physical activity can be damaging to your health. Why? People who are inactive tend to have higher heart rates (the number of times your heart beats per minute). The higher your heart rate, the harder your heart must work, and the higher your blood pressure can get. Lack of physical activity also increases your risk of being overweight, which can lead to higher blood pressure.
The good news is that every minute counts! It only takes 30 minutes of exercise a day to lower your blood pressure. Exercise doesn't mean you have to run a marathon or swim across the ocean. It can be as simple as parking farther from the store, taking the stairs instead of the elevator, playing actively with your kids or going for a walk around the block after dinner.
Article courtesy of Measure Up/Pressure Down®. Measure Up/Pressure Down is a three-year national campaign created by the American Medical Group Foundation to improve blood pressure control. Learn how to lower your risk and manage the disease with our booklet, Circulation Nation: Your Roadmap to Managing High Blood Pressure. |
Speech and phonological disorders are the focus of this article by Michael Farrell, who considers provision for pupils dealing with communication difficulties.
Thompson (2003, p10) defines speech as ‘the mechanical aspect of communication… the ability to produce the sounds, words and phrases.’ Speech difficulties occur when communication is impaired by limitations in the child’s capacity for speech. Speech may be unintelligible owing to: physical difficulties with articulation; and/or difficulties making sound contrasts that convey meaning; and/or problems in controlling pitch. Phonological disorders, the focus of much of this article, relate to differences in speech sounds that carry meaning. Assessment draws on the shared observations of parents, teachers, speech and language therapists and others.
Curriculum and assessment
As well as providing structured sessions focusing on improving phonological skills and knowledge, curriculum planning will ensure that phonological development is supported in all aspects of the curriculum. More time may be spent on developing phonology across the curriculum, including special programmes such as Metaphon. Assessment of phonological development may be in small steps to provide the opportunity to recognise progress.
Raising phonological awareness
Raising phonological awareness lends itself to whole-class and small group teaching and can be interesting for all pupils. Where new vocabulary is introduced, the teacher will encourage a keen interest in the word or phrase. She will explicitly teach various aspects of the vocabulary including:
- Phonological – how do the sounds of the word break up and blend back together? Do the pupils know any words that sound similar? What are the syllables of the word?
- Grammatical – how is the word used in sentences?
- Semantic – what does the word mean? Does it have interesting origins?
Encouraging phonological change
Among the programmes that encourage phonological change is the Children’s Phonology Sourcebook (Flynn and Lancaster, 1997). Intended for speech and language therapists, it provides ideas and resources that can be copied for parents and teachers. Coverage includes auditory input, first words, speech perception, and phonological representations and there is an emphasis on the auditory processing of speech (see www.speechmark.net).
‘Metaphon’ also uses activities designed to bring about phonological change (Howell and Dean, 1994).
Alternative and augmentative communication
Where alternative and augmentative communication is used for children with speech problems, the problems tend to be severe. With symbolic communication, a symbol such as a word or a picture is used to stand for something.
‘Non-aided’ communication involves the child making a movement or vocalisation that does not require a physical aid or other device. Examples are oral language, manual sign languages and individualised communication (eg one blink for ‘yes’, two for ‘no’). Signing may be used as a means of communication other than speech, or to accompany developing speech.
‘Aided’ augmentative communication involves using a device or item other than one’s own body such as communication boards, eye gaze boards and electronic systems.
Where a child has severe communication difficulties a communication board might be used. Another non-electronic device is a communication notebook. This can include photographs, symbols and words, enabling a pupil to find a symbol and show the particular page to someone who may not know the symbol so they can see the intended word.
Dedicated communication devices are electronic communication systems that speak programmed messages when the user activates locations marked by symbols. Computer aided communication may involve the pupil having a voice production device with a computer based bank of words and sentences that can be produced by pressing the keyboard keys.
Communication grids, in which several graphic symbols are set out in a specified order, can enable a pupil to participate in group sessions; for example, to support retelling a story.
Where speech problems are severe or where there are many communication problems, signing may be used. If so, classroom organisation can ensure that all pupils are able to see the communications as well as hear the accompanying words. Where a child’s speech intelligibility is developing, in group and class settings it will be important that the acoustics are good so the teacher and other children can hear what the child is saying. It is also helpful to provide the correct model of the word. For example, a child who says ‘gog’ for ‘dog’ would be helped by hearing the teacher say, ‘You’ve got a new dog’ rather than simply hearing the teacher correct the wrong word.
Speech and language therapy
The role and contribution of the speech therapist is important for children with phonological difficulties, whether it involves the therapist working directly with the child or taking a more advisory or supervisory role. For example, individual task based programmes may be developed jointly with the teacher and speech therapist. Or the speech therapist might work with a teaching assistant who continues the planned work when the therapist is not present.
- Farrell, M (2005) The Effective Teacher’s Guide to Autism and Communication Difficulties New York and London, Routledge
- Flynn, L and Lancaster, G (1997) Children’s Phonology Sourcebook Brackley, Speechmark Publishing
- Howell, J and Dean, E (1994) (2nd Edition) Treating Phonological Disorders in Children: Metaphon – Theory to Practice London, Whurr Publishers
- Thompson, G (2003) Supporting Children with Communication Disorders: A Handbook for Teachers and Teaching Assistants London, David Fulton Publishers
Dr. Michael Farrell is a special education consultant. |
- Memory hierarchy is the hierarchy of memory and storage devices found in a computer system.
- It ranges from the slowest but highest-capacity auxiliary memory to the fastest but lowest-capacity cache memory.
There is a trade-off among the three key characteristics of memory, namely-
- Capacity
- Access time
- Cost per bit
Memory hierarchy is employed to balance this trade-off.
Memory Hierarchy Diagram-
- At level-0, registers are present which are contained inside the CPU.
- Since they are present inside the CPU, they have least access time.
- They are most expensive and therefore smallest in size (in KB).
- Registers are implemented using Flip-Flops.
- At level-1, Cache Memory is present.
- It stores the segments of program that are frequently accessed by the processor.
- It is expensive and therefore smaller in size (in MB).
- Cache memory is implemented using static RAM.
- At level-2, main memory is present.
- It can communicate directly with the CPU and with auxiliary memory devices through an I/O processor.
- It is less expensive than cache memory and therefore larger in size (in few GB).
- Main memory is implemented using dynamic RAM.
- At level-3, secondary storage devices like Magnetic Disk are present.
- They are used as back up storage.
- They are cheaper than main memory and therefore much larger in size (in few TB).
- At level-4, tertiary storage devices like magnetic tape are present.
- They are used to store removable files.
- They are cheapest and largest in size (1-20 TB).
The following observations can be made when going down in the memory hierarchy-
- Cost / bit decreases
- Frequency of access decreases
- Capacity increases
- Access time increases
Goals of Memory Hierarchy-
The goals of memory hierarchy are-
- To obtain the highest possible average access speed
- To minimize the total cost of the entire memory system
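The first goal can be made concrete with the standard average-access-time calculation for a two-level hierarchy, where hits are served by the cache and misses fall through to main memory. The sketch below is illustrative only; the hit ratio and timings are invented numbers, not figures from these notes:

```python
def average_access_time(hit_ratio: float, t_cache: float, t_main: float) -> float:
    """Average time per access: hits cost t_cache; misses cost t_cache + t_main."""
    return hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)

# e.g. a 95% hit ratio with a 2 ns cache and 100 ns main memory:
print(average_access_time(0.95, 2, 100))  # 7.0 (ns)
```

Even a modest hit ratio keeps the average access time close to cache speed, which is how the hierarchy achieves high average speed at a low overall cost.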
Fragile X Syndrome is a genetic disorder that affects approximately 1 in 3,000 to 1 in 4,000 males and approximately 1 in 4,000 to 1 in 6,000 females. It is caused by a specific mutation on the FMR1 gene. People with Fragile X have a full mutation of the FMR1 gene. Some people have what is called a premutation, or less than a full mutation of the FMR1 gene.
What are Fragile X and FMR1 Gene Premutations?
Genes carry instructions for building proteins, which are important for human development and the healthy function of our bodies. If a gene has a mutation, the instructions for building its protein will not make sense. The protein may be built incorrectly, or it may not get built at all.
Fragile X is caused by extra ‘words’ in the genetic code of the FMR1 gene. On part of the gene, there is a repetition of a specific word. These repeated words are called CGG repeats because the 3-letter genetic code combination CGG is repeated many times. In people without the mutation, the CGG word is repeated between 6 and 54 times. People with Fragile X have more than 200 CGG repeats. These extra words switch the gene off, so the FMR1 protein cannot be built. They also make it look like part of the X chromosome is going to break off, which is how Fragile X got its name. People who have between 55 and 200 repeats are called premutation carriers. They are able to produce more of the FMR1 protein than people with the ‘full’ mutation but less than people who don’t have the mutation.
The FMR1 gene is located on the X chromosome. The X chromosome is one of the two chromosomes that determine biological sex in humans; a biological female has two Xs and a biological male has one X and one Y. Each parent contributes one chromosome to the child. The mother contributes one X, and the father contributes either an X (the child will be biologically female) or a Y (the child will be biologically male).
So, if a biological female inherits the Fragile X mutation from one parent, she may inherit a healthy X chromosome from the other parent. This means that she will either be protected from developing symptoms, or her symptoms won’t be as severe, because the healthy X chromosome provides protection from the unhealthy chromosome. A male only inherits one X chromosome, so if that X chromosome has the mutation, he does not have another X chromosome to protect him. This means that boys are more likely to have Fragile X syndrome than girls, and boys with Fragile X usually have more severe symptoms than girls with Fragile X.
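This inheritance logic can be made concrete with a toy enumeration (illustrative only; “X*” is an invented label marking the X chromosome that carries the mutation). A mother who carries the mutation on one X passes it to half her children on average, while an unaffected father contributes either a normal X or a Y:

```python
from itertools import product

def possible_children(mother, father):
    """All equally likely combinations of one chromosome from each parent."""
    return [m + f for m, f in product(mother, father)]

# Mother carries the mutation on one X ("X*"); father is unaffected (X, Y).
for child in possible_children(["X*", "X"], ["X", "Y"]):
    sex = "male" if "Y" in child else "female"
    status = "inherits the mutation" if "X*" in child else "unaffected"
    print(child, "-", sex, "-", status)
```

Each of the four outcomes is equally likely, so on average half of a carrier mother's children inherit the mutation, whether they are boys or girls.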
Fragile X Syndrome
The mutation of the FMR1 gene prevents the body from making enough protein. The FMR1 protein is important for brain development and the health of reproductive systems. Scientists also believe that the FMR1 protein helps translate the genetic code to make other important proteins. Because of the importance of the FMR1 protein in the body, people with the full FMR1 mutation – known as Fragile X Syndrome – may have many of the following characteristics:
- Intellectual deficits that can range from mild learning difficulties to severe intellectual disability
- Behaviors including poor attention, hyperactivity, social anxiety, repetitive hand biting/flapping, poor eye contact, unusual reactions to sensory stimuli, aggression. Many people with Fragile X syndrome may also have ADHD or autism spectrum disorder.
- Areas of strength that include social and friendly temperament and strong memory and imitation skills.
- Unusual physical features that include large ears, soft skin, ear infections, flat feet, high arched palate, double-jointed fingers, and hyperflexible joints. Soft skin and large testicles (males) may also occur, but they are more likely to appear after puberty.
- Females with Fragile X often have milder symptoms than males, and a small percentage of females with the full mutation do not have symptoms.
Fragile X Premutation Carriers and Health
Fragile X premutation carriers are people who are likely to have a child with Fragile X Syndrome, but they do not have Fragile X Syndrome themselves. Premutation carriers have between 55 and 200 CGG repeats, which is less than people with Fragile X Syndrome but more repeats than are found in healthy people. Approximately 1 in 468 males and 1 in 151 females are premutation carriers. While premutation carriers do not have Fragile X syndrome, they are at higher risk for other health issues. These include:
Fragile X Tremor Ataxia Syndrome (FXTAS)
FXTAS is a condition that affects movement in people who have the Fragile X premutation. Symptoms are similar to Parkinson’s disease and usually appear later in life and worsen over time. Male premutation carriers over the age of 50 are most commonly affected, but female premutation carriers may also develop symptoms. The risk of developing FXTAS increases with age.
The main symptoms include shaking of the hands when using tools or reaching for something (intention tremor) and balance or stability problems when walking or using stairs (gait ataxia). People with FXTAS also have damage to the brain's white matter in a region called the middle cerebellar peduncles (MCP), which can be observed using Magnetic Resonance Imaging (MRI). This brain pathology is one of the primary signs that doctors use to diagnose FXTAS.
Other symptoms that clinicians use to help diagnose FXTAS include resting tremor (Parkinsonism), problems with short-term memory, problems with decision making and “executive function” (initiating and completing activities, changing behavior as needed, anticipating and planning for new tasks and situations, and problem solving). MRI findings include damage to the brain’s white matter that is outside of the cerebellum (lesions of cerebral white matter) and shrinking of the brain (brain atrophy).
Additionally, symptoms that may occur but are not used for diagnosis include numbness or tingling in the extremities (neuropathy), mood instability (irritability, personality changes), cognitive decline (loss of skill in reading, math, etc.), problems with autonomic control (impotence, loss of bladder control, loss of bowel control), high blood pressure, thyroid disorders, and fibromyalgia.
Fragile X Premature Ovarian Insufficiency (FXPOI)
Because of the importance of the FMR1 gene in reproductive health, premutation carriers are at risk for disorders of the reproductive system. FXPOI is a condition where the ovaries do not function properly in women with the Fragile X premutation. About 20-25% of adult women with the FMR1 premutation have FXPOI. It can also occur in teenagers with the premutation, though it is less common.
Ovaries have many important functions in female reproductive health. They store and maintain a woman’s eggs throughout her lifetime. The ovaries also control when eggs are released for potential fertilization during a woman’s menstrual cycle (usually one egg per cycle).
In women with FXPOI, the ovaries do not function properly, and they act like the ovaries of an older woman. They do not keep the eggs healthy, and they do not release eggs as often.
Symptoms of FXPOI are similar to symptoms of menopause. They include absent or irregular menstrual cycles, infertility or poor fertility, hot flashes and vaginal dryness. Women with severe cases of FXPOI may experience premature ovarian failure, where a woman’s menstrual periods stop occurring before the age of 40. Women with the FMR1 premutation may have normal ovarian function, but they may go through early menopause (at around 40-45 years instead of 45-55 years of age).
Women who do not have the Fragile X premutation can also have premature ovarian insufficiency (POI) and may experience similar symptoms to women with FXPOI. Doctors can diagnose FXPOI using genetic testing to confirm FMR1 premutation status and tests of hormone levels, specifically follicle stimulating hormone (FSH), which is an indicator of ovarian function.
Fragile X-Associated Neuropsychiatric Disorders (FXAND)
According to recent research findings by Dr. Randi Hagerman and her team at University of California-Davis MIND Institute, FMR1 premutation carriers may be at increased risk for a variety of mental and other health conditions. These conditions may include anxiety, depression, attention deficit hyperactive disorder (ADHD), chronic pain, fibromyalgia, chronic fatigue, sleep problems, and autoimmune disorders. Though it is not known why premutation carriers are more likely to develop these health problems, Dr. Hagerman believes that it may be due to the toxic effect of the premutation on the brain in addition to other environmental factors, such as stress. Diagnosis and treatment of these conditions in premutation carriers is the same as it is for non-carriers. Treatments may include counseling and prescription medication such as an antidepressant or antianxiety medication. |
Have you ever:
- found yourself wearing one navy blue sock and one black sock?
- had a problem distinguishing very close shades of green, gray, and blue?
- discovered that you were wearing a red shirt when you thought it was brown?
If so, you may have color vision deficiency, otherwise known as “color blindness.”
Color vision deficiency is an inability to see colors accurately. Sometimes, it involves a confusion of brightness in color variations; other times, it involves lack of shade differentiation in similar colors.
- Ninety-nine percent of people with color vision deficiency can see some colors.
- If you cannot see any colors at all, you have achromatopsia. You are the rare one in every 33,000 people on Earth!
Color vision deficiency results from damage to, or absence of, specific cells in the retina, the light-sensitive tissue at the back of the eye. When activated by light, these special cone-shaped cells send messages to the brain via nerve impulses, and the brain interprets those signals as color. Cone cells work with three different light frequencies: red, green, and blue. When healthy, they allow people to view 7-10 million color shades.
Red/green or blue/yellow color combinations are most commonly affected by these damaged or absent cone cells. You can have weaknesses in red perception, difficulty with green perception, complete red blindness, or total green blindness. It is possible to see only in shades of gray if you have no functioning cone cells (achromatopsia).
Most people experiencing color blindness are born with it (congenital). Some diseases can result in color vision distortions: glaucoma, diabetes, Alzheimer’s, Parkinson’s, leukemia, and sickle cell anemia. The use of certain drugs or alcoholism can also result in this condition. As people age, cataracts can also add to color vision decline, since they may tint the eye lens yellow or brown. Color blindness can be in one or both eyes.
- 320 million people are deficient in color perception in the world.
- Eight percent of men experience it, while only one in 255 women have it.
- Males have it more often because it is an inherited, sex-linked genetic trait.
- Color vision deficiency does not affect all nationalities equally (about 10% of Caucasian males, compared with roughly 1% of Inuit males)
- Males of Northern European heritage experience color blindness most.
In order to assess whether you are experiencing this condition, vision specialists can conduct one or two exams.
- In an Ishihara test, they will show you a card with a pattern of multiple-colored dots. You read aloud the number you see in the colored pattern.
- In a Farnsworth Munsell 100 Hue Test, you receive four trays of items to arrange in order of gradually changing hue, from light to dark.
Color vision deficiency has no cure. However, some tinted contacts and glasses can assist with color differentiation by enhancing the brightness or darkness in your vision.
If you mistake blue for gray or red for orange, know that you are not alone. Keanu Reeves, Bill Clinton, Christopher Nolan, and Mark Zuckerberg all have red-green color blindness. Even Mr. Rogers had problems with color vision deficiency.
Remember to see your vision care provider if you are concerned about your eyesight. |
Decimal points are used in numbers to separate the whole number from parts of the whole. Like whole numbers, numbers written as decimals can be either positive or negative, for example, 2.6 or -2.6.
Decimals are just one way of expressing numbers that are parts of wholes. These numbers can also be written as fractions or percentages. The number 1.5 (decimal) could also be written as 3/2 (fraction) or 150% (percentage). They are all exactly the same number.
Knowledge of converting between decimals, fractions and percentages is important.
Place value gives the value of each digit in a number. For example, in the number 42, the 4 is worth 4 tens, or 40, and the 2 is worth 2 units, or 2. The same process is true for decimals.
In the number 2.78, the 2 is worth two units, the 7 is worth 7 tenths and the 8 is worth 8 hundredths.
This is the same as 2 and 78 hundredths, or 2 78/100.
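If you want to check a place-value breakdown yourself, here is a quick sketch in Python (one of several possible approaches; the rounding guards against floating-point quirks):

```python
n = 2.78
units = int(n)                         # 2
tenths = int(n * 10) % 10              # 27 % 10 = 7
hundredths = int(round(n * 100)) % 10  # 278 % 10 = 8
print(units, tenths, hundredths)       # 2 7 8
```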
Ordering decimals involves comparing digits in the same columns, starting with the digits in the place value column that is furthest to the left.
Which is greater, 2.5 or 2.15?
Firstly, both numbers have a 2 in the units column, so look at the next digit along. This is the digit in the first decimal place. The first number has a 5 in the tenths column whereas the second number has a 1 in the tenths column. 5 is greater than 1, so that means that 2.5 is greater than 2.15.
To make it easier to compare, make sure all the decimals have the same number of decimal places by adding zeros to the end if you need to.
To compare 2.5 and 2.15, add a zero to 2.5. It’s clear to see now that 2.15 must be smaller than 2.50, just like 215 is smaller than 250.
Put these decimals in order, starting with the smallest:
The answer is: 3.07, 3.7, 3.72, 3.764, 4.3. |
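The "pad with zeros, then compare" idea is easy to verify in code. Here is a short sketch (using Python's built-in decimal module so the values compare exactly as written):

```python
from decimal import Decimal

values = [Decimal("3.7"), Decimal("3.07"), Decimal("4.3"),
          Decimal("3.764"), Decimal("3.72")]

# Sorting compares by value, matching the worked answer above.
print([str(v) for v in sorted(values)])  # ['3.07', '3.7', '3.72', '3.764', '4.3']

# The padding trick made explicit: 2.5 -> 2.50, so it compares like 250 vs 215.
print(f"{2.5:.2f} vs {2.15:.2f}: 2.5 > 2.15 is {2.5 > 2.15}")
```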
The range of tigers once extended across Asia from eastern Turkey and the Caspian Sea south of the Tibetan plateau eastward to Manchuria and the Sea of Okhotsk. Tigers were also found in northern Iran, Afghanistan, the Indus valley of Pakistan, Laos, Thailand, Vietnam, Cambodia, Malaysia, and the islands of Java and Bali. Tigers are now extinct or nearly extinct in most of these areas. Populations remain relatively stable in northeastern China, Korea, Russia, and parts of India and the Himalayan region. (Mazak, 1981; Sunquist and Sunquist, 2002; Thapar, 2005)
There are eight recognized subspecies of Panthera tigris. Siberian tigers, P. t. altaica, are currently found only in a small part of Russia, including the Amur-Ussuri region of Primorye and Khabarovsk. Bengal tigers, P. t. tigris, are found in India, Bangladesh, Nepal, Bhutan, and China. Indochinese tigers, P. t. corbetti, are found in Cambodia, China, Laos, Malaysia, Myanmar, Thailand, and Vietnam. South China tigers, P. t. amoyensis, are found in three isolated areas in southcentral China. Sumatran tigers, P. t. sumatrae, are found only on the Indonesian island of Sumatra. Bali tigers (P. t. balica), Javan tigers (P. t. sondaica), and Caspian tigers (P. t. virgata) are thought to be extinct. Those subspecies occurred on the islands of Bali (P. t. balica), Java (P. t. sondaica), and in Turkey, the Transcaucasus region, Iran, and central Asia (P. t. virgata). (Mazak, 1981; Sunquist and Sunquist, 2002; Thapar, 2005)
Tigers live in a wide variety of habitats, suggested by their distribution across a wide range of ecological conditions. They are known to occur in tropical lowland evergreen forest, monsoonal forest, dry thorn forest, scrub oak and birch woodlands, tall grass jungles, and mangrove swamps. Tigers are able to cope with a broad range of climatic variation, from warm moist areas, to areas of extreme snowfall where temperatures may be as low as –40 degrees Celsius. Tigers have been found at elevations of 3,960 meters. In general, tigers require only some vegetative cover, a source of water, and sufficient prey. (Mazak, 1981; Sunquist and Sunquist, 2002; Ullasa, 2001)
Tigers have a reddish-orange coat with vertical black stripes along the flanks and shoulders that vary in size, length, and spacing. Some subspecies have paler fur and some are almost fully white with either black or dark brown stripes along the flanks and shoulders. The underside of the limbs and belly, chest, throat, and muzzle are white or light. White is found above the eyes and extends to the cheeks. A white spot is present on the back of each ear. The dark lines about the eyes tend to be symmetrical, but the marks on each side of the face are often asymmetrical. The tail is reddish-orange and ringed with several dark bands. (Mazak, 1981; Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
Body size and morphology vary considerably among subspecies of tigers. Siberian tigers, also known as Amur tigers (P. t. altaica), are the largest. Male Siberian tigers can grow to 3.7 meters and weigh over 423 kg; females are up to 2.4 meters in length and 168 kg. Male Indochinese tigers (P. t. corbetti), though smaller than Siberian tigers in body size at 2.85 meters in length and 195 kg, have the longest skull of all tiger subspecies, measuring 319 to 365 mm. Sumatran tigers (P. t. sumatrae) are the smallest living subspecies. Male Sumatran tigers measure 2.34 meters and weigh 136 kg; females measure 1.98 meters and weigh 91 kg. (Mazak, 1981; Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
Tigers are powerful animals; one is known to have dragged a gaur bull weighing 700 kg. Tigers have short, thick necks, broad shoulders, and massive forelimbs, ideal for grappling with prey while holding on with long retractable claws and broad forepaws. A tiger’s tongue is covered with hard papillae that scrape flesh off the bones of prey. (Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
All tigers have a dental formula of 3/3, 1/1, 3/2, 1/1. Bengal tigers (P. t. tigris) have the longest canines of any living large cat; from 7.5 to 10 cm in length. A tiger's skull is robust, short, and broad with wide zygomatic arches. The nasal bones are high, projecting little further than the maxillary, where the canines fit. Tigers have a well-developed sagittal crest and coronoid processes, providing muscle attachment for their strong bite. (Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
Tigers are solitary and do not associate with mates except for mating. Local males may compete for access to females in estrus. (Sunquist and Sunquist, 2002)
Female tigers come into estrus every 3 to 9 weeks and are receptive for 3 to 6 days. They have a gestation period of about 103 days (from 96 to 111 days), after which they give birth to 1 to 7 altricial cubs. Average litter sizes are 2 to 3 young. In Siberian tigers the average litter size is 2.65 (n=123); similar averages have been found in other tiger subspecies. Newborn cubs are blind and helpless, weighing from 780 to 1600 g. The eyes do not open until 6 to 14 days after birth and the ears until 9 to 11 days after birth. The mother spends most of her time nursing the young during this vulnerable stage. Weaning occurs at 90 to 100 days old. Cubs start following their mother at about 2 months old and begin to take some solid food at that time. From 5 to 6 months old the cubs begin to take part in hunting expeditions. Cubs stay with their mother until they are 18 months to 3 years old. Young tigers do not reach sexual maturity until around 3 to 4 years of age for females and 4 to 5 years of age for males. (Sunquist and Sunquist, 2002; Ullasa, 2001)
Like other mammals, females care for and nurse their dependent young. Weaning occurs at 3 to 6 months, but cubs are dependent on their mother until they become proficient hunters themselves, when they reach 18 months to 3 years old. Young tigers must learn to stalk, attack, and kill prey from their mother. A mother caring for cubs must increase her killing rate by 50% in order to get enough nutrition to satisfy herself and her offspring. Male tigers do not provide parental care. (Mazak, 1981; Sunquist and Sunquist, 2002)
Tigers usually live 8 to 10 years in the wild, although they can reach ages into their 20's. In captivity tigers have been known to live up to 26 years old, although a typical captive lifespan is 16 to 18 years. It is estimated that most adult tigers die as a result of human persecution and hunting, although their large prey can occasionally wound them fatally. Young tigers face numerous dangers when they disperse from their mother's home range, including being attacked and eaten by male tigers. Some researchers estimate a 50% survival rate for young tigers. (Mazak, 1981; Sunquist and Sunquist, 2002)
Tigers are solitary; the only long-term relationship is between a mother and her offspring. Tigers are most active at night, when their wild ungulate prey are most active, although they can be active at any time of the day. Tigers prefer to hunt in dense vegetation and along routes where they can move quietly. In snow, tigers select routes on frozen river beds, in paths made by ungulates, or anywhere else that has a reduced snow depth. Tigers have tremendous leaping ability, being able to leap from 8 to 10 meters, although leaps of half that distance are more typical. Tigers are excellent swimmers, and water doesn't usually act as a barrier to their movement. Tigers can easily cross rivers as wide as 6 to 8 km and have been known to cross a width of 29 km in the water. Tigers are also excellent climbers, using their retractable claws and powerful legs. (Mazak, 1981; Sunquist and Sunquist, 2002)
Home range sizes vary depending on the density of prey. Female Indian tigers (P. t. tigris) have home range sizes from 200 to 1000 square kilometers (reported extremes of 64 to 9252 km²); a male's home range is typically 2 to 15 times larger. Within their home range tigers maintain several dens, often among dense vegetation or in a cave, a cavity under a fallen tree, or a hollow tree. Tigers often defend exclusive home ranges, but they have also been known to peacefully share home ranges or to wander permanently, without any home range. Tigers may cover as much as 16 to 32 kilometers in a single night. (Mazak, 1981; Sunquist and Sunquist, 2002)
Communication among tigers is maintained by scent markings, visual signals, and vocalization. Scent markings are deposited in the form of an odorous musky liquid that is mixed with urine and sprayed on objects like grass, trees, or rocks. A facial expression called “flehmen” is often associated with scent detection. During flehmen, the tongue hangs over the incisors, the nose is wrinkled, and the upper canines are bared. Flehmen is commonly seen in males that have just sniffed urine, scent marks, an estrous tigress, or a cub of their own species. (Schaller, 1967; Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
Visual signals made by tigers include spots that have been sprayed, scrapes made by raking the ground, and claw marks left on trees or other objects. Schaller (1967) described a “defense threat” facial expression observed when a tiger is attacking. This involves pulling the corners of the open mouth back, exposing the canines, flattening the ears, and enlarging the pupils of the eyes. The spots on the back of their ears and their pattern of stripes may also be used in intraspecific communication. (Mazak, 1981; Schaller, 1967; Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
Tigers can also communicate vocally with roars, growls, snarls, grunts, moans, mews, and hisses. Each sound has its own purpose and appears to reflect the tiger's intent or mood. For example, a tiger’s roar is usually a signal of dominance; it tells other individuals how big it is and where it is. A moan communicates submission. The ability of tigers to roar comes from having a flexible hyoid apparatus and vocal folds with a thick fibro-elastic pad that allow sound to travel long distances. (Schaller, 1967; Sunquist and Sunquist, 2002; Thapar, 2005; Ullasa, 2001)
Tigers prefer to hunt at night, when their ungulate prey are most active. In a study done in India by Schaller (1967), tigers were most active before 0800 and after 1600 hours. Tigers are thought to locate their prey using hearing and sight more than olfaction (Schaller, 1967). They use a stealthy approach, taking advantage of every rock, tree, and bush as cover, and rarely chase prey far. Tigers are silent hunters, taking cautious steps and keeping low to the ground so they are not seen or heard by the prey. They typically kill by ambushing prey, throwing the prey off balance with their mass as they leap onto it. Although tigers are skilled predators, only 1 out of every 10 to 20 attacks results in a successful hunt. (Mazak, 1981; Schaller, 1967; Sunquist and Sunquist, 2002)
Tigers use one of two tactics when they get close enough to kill. Small animals, weighing less than half the body weight of the tiger, are killed by a bite to the back of the neck. The canines are inserted between the neck vertebrae forcing them apart and breaking the spinal cord. For larger animals, a bite to the throat is used to crush the animal’s trachea and suffocate it. The throat bite is the safer killing tactic because it minimizes any physical assault the tiger may receive while trying to kill its prey. After the prey is taken to cover, tigers feed first on the buttocks using the carnassials to rip open the carcass. As the tiger progresses it opens the body cavity and removes the stomach. Not all of the prey is eaten; some parts are rejected. Prey are usually dragged to cover and may be left there and revisited over several days. (Schaller, 1967; Sunquist and Sunquist, 2002)
The majority of the tiger diet consists of various large ungulate species, including sambar (Rusa unicolor), chital (Axis axis), hog deer (Axis porcinus), barasingha (Rucervus duvaucelii), barking deer (Muntiacus muntjak), elk (Cervus elaphus), sika deer (Cervus nippon), Eurasian elk (Alces alces), roe deer (Capreolus capreolus), musk deer (Moschus moschiferus), nilgai (Boselaphus tragocamelus), black buck (Antilope cervicapra), gaur (Bos frontalis), banteng (Bos javanicus), water buffalo (Bubalus bubalis), and wild pigs (Sus). Domestic ungulates are also taken, including cattle (Bos taurus), water buffalo (Bubalus bubalis), horses (Equus caballus), and goats (Capra hircus). In rare cases tigers attack Malayan tapirs (Tapirus indicus), Indian elephants (Elephas maximus), and young Indian rhinoceroses (Rhinoceros unicornis). Tigers regularly attack and eat brown bears (Ursus arctos), Asiatic black bears (Ursus thibetanus), and sloth bears (Melursus ursinus). Smaller animals are sometimes taken when larger prey is unavailable; these include large birds such as pheasants (Phasianinae), leopards (Panthera pardus), fish, crocodiles (Crocodylus), turtles, porcupines (Hystrix), rats, and frogs. A very few tigers begin to hunt humans (Homo sapiens). Tigers will eat between 18 and 40 kg of meat when they successfully take large prey; they do not typically eat every day. (Mazak, 1981; Schaller, 1967; Sunquist and Sunquist, 2002)
Tigers help regulate populations of their large herbivore prey, which put pressure on plant communities. Because of their role as top predators, they may be considered keystone species. (Sunquist and Sunquist, 2002)
Tiger parasites include the nematode, trematode, and cestode worms: Paragonimus westermani, Toxocara species, Uiteinarta species, Physaloptera praeputhostoma, Dirofilaria species, Gnathostoma spinigerum, Diphyllobothrium erinacei, Taenia bubesei, and Taenia pisiformis. Ticks known from tigers are Rhipicephalus annulatus, Dermacentor silvarum, Hyalomma truncatum, Hyalomma kumari, Hyalomma marginata, and Rhipicelphalus turanicus.
Live tigers are of economic importance in zoos, where they are displayed to the public, and in wildlife areas, where they may bring in tourism. Tigers are illegally killed for their fur to make rugs and wall hangings. In addition, for more than 3000 years traditional Chinese medicine has used tiger parts to treat sickness and injury. The humerus (upper leg bone), for example, has been prescribed to treat rheumatism even though there is no evidence that it has any effect on the disease. Some believe that tiger bones will help them become as strong and ferocious as the tiger. (Sunquist and Sunquist, 2002)
Normally tigers avoid human contact; very rarely, tigers may become “man eaters”. A man-eating tigress was rumored to have killed over 430 people, including 234 over the course of four years. It is thought that man-eating tigers are those that cannot effectively prey on large ungulates because they have become crippled, are old, or no longer have suitable native habitat and prey available. Because human populations are rapidly increasing, competition over natural resources is putting increasing pressure on tigers and their habitat and increasing the likelihood of negative human-tiger interactions. (Mazak, 1981; Sunquist and Sunquist, 2002)
Siberian (P. t. altaica), South China (P. t. amoyensis), and Sumatran tigers (P. t. sumatrae) are all critically endangered. Bengal (P. tigris tigris) and Indochinese tigers (P. tigris corbetti) are endangered. Bali (P. t. balica), Javan (P. t. sondaica), and Caspian tigers (P. tigris virgata) are extinct. The specific threats to tigers vary regionally, but human persecution, hunting, and human-induced habitat destruction are universal factors in threatening tiger populations. (Mazak, 1981)
Panthera tigris has 38 chromosomes. The karyotype has 16 pairs of metacentric and submetacentric autosomes and two pairs of acrocentric autosomes. The X chromosome is a medium-sized metacentric and the Y chromosome is a small metacentric.
Maltese tigers (sometimes referred to as P. t. melitensis, although they are not a true subspecies) are a variety of tiger that results from inbreeding. Maltese tigers have white fur with grey hues, making them look blue from a distance. So-called “white tigers” result when a cub is born with two recessive forms of a gene, also the result of inbreeding. White tigers suffer from many problems, including eye weakness, sway backs, and twisted necks. (Mazak, 1981; Sunquist and Sunquist, 2002; Ullasa, 2001)
Tanya Dewey (editor), Animal Diversity Web.
Kevin Dacres (author), Michigan State University, Barbara Lundrigan (editor, instructor), Michigan State University.
living in the northern part of the Old World. In other words, Europe, Asia, and northern Africa.
uses sound to communicate
young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
an animal that mainly eats meat
uses smells or other chemicals to communicate
to jointly display, usually with sounds, at the same time as two or more other individuals of the same or different species
active at dawn and dusk
having markings, coloration, shapes, or other features that cause an animal to be camouflaged in its natural environment; being difficult to see or otherwise detect.
humans benefit economically by promoting tourism that focuses on the appreciation of natural areas or animals. Ecotourism implies that there are existing programs that profit from the appreciation of natural areas or animals.
animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
A substance that provides both nutrients and energy to a living thing.
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
ovulation is stimulated by the act of copulation (does not occur spontaneously)
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
a species whose presence or absence strongly affects populations of other species in that area such that the extirpation of the keystone species in an area will result in the ultimate extirpation of many more species in that area (Example: sea otter).
marshes are wetland areas often dominated by grasses and reeds.
having the capacity to move from one place to another.
This terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation.
the area in which the animal is naturally found, the region in which it is endemic.
active during the night
generally wanders from place to place, usually within a well-defined range.
found in the oriental region of the world. In other words, India and southeast Asia.
the business of buying and selling animals for people to keep in their homes as pets.
chemicals released into air or water that are detected by and responded to by other animals of the same species
the kind of polygamy in which a female pairs with several males, each of which also pairs with several different females.
rainforests, both temperate and tropical, are dominated by trees often forming a closed canopy with little light reaching the ground. Epiphytes and climbing plants are also abundant. Precipitation is typically not limiting, but may be somewhat seasonal.
communicates by producing scents from special gland(s) and placing them on a surface where others can smell or taste them
scrub forests develop in areas that experience dry seasons.
remains in the same area
reproduction that includes combining the genetic contribution of two individuals, a male and a female
places a food item in a special place to be eaten later. Also called "hoarding"
uses touch to communicate
Coniferous or boreal forest, located in a band across northern North America, Europe, and Asia. This terrestrial biome also occurs at high elevations. Long, cold winters and short, wet summers. Few species of trees are present; these are primarily conifers that grow in dense stands with little undergrowth. Some deciduous trees also may be present.
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
defends an area within the home range, occupied by a single animal or group of animals of the same species and held through overt defense, display, or advertisement
The term is used in the 1994 IUCN Red List of Threatened Animals to refer collectively to species categorized as Endangered (E), Vulnerable (V), Rare (R), Indeterminate (I), or Insufficiently Known (K) and in the 1996 IUCN Red List of Threatened Animals to refer collectively to species categorized as Critically Endangered (CR), Endangered (EN), or Vulnerable (VU).
the region of the earth that surrounds the equator, from 23.5 degrees north to 23.5 degrees south.
A terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
A grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome.
A terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands.
A terrestrial biome with low, shrubby or mat-like vegetation found at extremely high latitudes or elevations, near the limit of plant growth. Soils usually subject to permafrost. Plant diversity is typically low and the growing season is short.
uses sight to communicate
reproduction in which fertilization and development take place within the female body and the developing embryo derives nourishment from the female.
breeding takes place throughout the year
2007. "Evolution, Ecology and Status of Global Tigers" (On-line pdf). World Wide Fund for Nature Hong Kong. Accessed April 03, 2007 at http://www.wwf.org.hk/eng/pdf/references/factsheets/factsheetii.PDF.
Mazak, V. 1981. Panthera tigris. Mammalian Species, 152: 1-8.
Schaller, G. 1967. The deer and the tiger. Chicago: University of Chicago Press.
Sunquist, M., F. Sunquist. 2002. Wild Cats of the World. Chicago: University of Chicago Press.
Thapar, V. 2005. Wild Tigers of Ranthambhore. New Delhi, NY: Oxford University Press.
Ullasa, K. 2001. The Way of the Tiger. Stillwater, MN: Voyageur Press. |
by Maggie R. Limbeck*1
The oceans of the Palaeozoic era (541 million to 252 million years ago) were full of animals that we are familiar with, such as fish, snails, and coral, but also included many organisms that look almost nothing like their living relatives. The further back in time we go, for instance to the Cambrian and Ordovician periods (541 million to 444 million years ago), the greater the difference in body plans, or morphologies, compared to modern species. Echinoderms are an excellent example of this — living members of the group, such as starfish and sea urchins, are easily recognizable, but many of their extinct, fossilized relatives from hundreds of millions of years ago look very different. Understanding these different body forms is important to palaeontologists because it helps us to learn about the history and complexity of life on Earth. A comprehensive study of these animals and where they are found can give us information on their evolutionary relationships, interactions with other organisms and responses to environmental changes in the past.
Paracrinoids are some of the most bizarre echinoderms ever to have existed. Palaeontologists have described them as ‘Frankenstein creatures’ because they combine features found in other echinoderm groups to create unusual and seemingly contradictory body shapes (Fig. 1). Paracrinoids were a short-lived group, known only from the Middle to Late Ordovician (470 million to 444 million years ago; for comparison, many other now-extinct fossil groups were present for more than 50 million or even 100 million years). Part of what makes this group so unusual is that for the short time paracrinoids were around, they showed an incredible diversity of body shapes. This raises a number of questions about their evolutionary history and how they lived and interacted with other organisms (their palaeoecology).
In addition to having unusual shapes, paracrinoid bodies are not well organized. This is very different from other echinoderms, such as blastoids and echinoids, which have an ordered body plan (that is, the number and position of the plates on the body are constant in each species). All paracrinoids have three ‘basal’ plates, to which a stem attached in life, and multiple ‘oral’ plates that surround the mouth (Fig. 1), but the rest of the plates that make up the body are added in a seemingly unpredictable way. It is possible to detect some patterns if you look at the fossils for long enough, but it is generally thought that as the organism grew, plates were added as needed, so their placement and size varies.
Although the shapes and sizes of paracrinoids can seem completely random (Fig. 2), their overall variation gives researchers much to consider when attempting to work out how different paracrinoids were related to each other. Questions could include: how many food grooves (used for transporting food; Fig. 2A) are there around the mouth? How many mouths does the organism have (Fig. 2B)? Are there structures for breathing (respiratory structures; Fig. 2C)? Are the plates ornamented (Fig. 2D)? This list of questions could continue for quite some time! The real challenge for understanding these different shapes is in trying to work out whether the features we are looking at are homologous (inherited from a common ancestor). If they are, they can be used to reconstruct an evolutionary tree, in a process called phylogenetic analysis. A common example of similar features that are not homologous is wings, which are present in both birds and insects but evolved independently; this is called a convergent feature. If we treated these features as homologous, our phylogenetic analysis would have to assume that birds and insects share a more recent common ancestor than they actually do, producing an inaccurate picture of their evolutionary history.
Understanding which features are homologous in paracrinoids is very difficult. Each subgroup has very few features that are shared among all paracrinoids, so researchers must take great care when assessing these characters. However, even if features are not homologous, they can still be informative: their presence in different species could mean that these features help organisms to live in a certain environment, explaining why they evolved independently multiple times.
Mode of life:
As adults, paracrinoids were sessile, meaning that they did not move around. Some species had long stems that attached to the sea floor, whereas others are thought to have had a short stem that anchored the paracrinoid into the sea floor (Fig. 3). One of the main challenges for understanding the mode of life of fossil echinoderms is that after an echinoderm dies, its skeleton breaks apart rapidly. This is problematic because individual echinoderms can be made up of anywhere from tens to millions of different pieces! Paracrinoids are no different. Typically, only the main body is found fossilized (Fig. 2), because the stem and brachioles (arm-like structures seen in Fig. 3) are the easiest to break apart and so are lost readily after death.
Very little is known about how paracrinoids lived in their environments or how their different shapes could have benefitted them. One of the questions that I and others are interested in is: how did these animals function in life? Paracrinoid bodies are asymmetrical, with the mouth and the stem both on the left side of the body (Fig. 4). By contrast, most other fossil echinoderms (and living crinoids, a type of echinoderm also known as a ‘sea lily’) have their mouth and stem aligned with the central axis of the body. Paracrinoids are also unusual in that their feeding appendages only grew from the left side of the food grooves (Fig. 1), as opposed to most other fossil echinoderms, whose feeding appendages grew along both sides of the food grooves. The general body shape of paracrinoids is extremely varied and can range from almost perfect spheres to flattened ovals and even crescent-moon shapes. The function of these varied body shapes and how they may have been beneficial in the Ordovician seas is still unclear.
To learn more about paracrinoid lifestyles, we must keep studying their anatomy. This can be achieved by examining fossils, or by using modern techniques to simulate on a computer how their varied body shapes responded to different conditions.
The evolutionary relationships of early echinoderms are important for researchers trying to assemble the echinoderm tree of life and understand how this diverse and successful group of animals evolved. To assess these relationships, researchers study features such as the number of oral plates, the shape and placement of the food grooves, and the types of respiratory structures to determine if they are homologous. Until recently, our poor understanding of paracrinoid features prevented researchers from completing a phylogenetic analysis of this puzzling group.
Previous studies had separated paracrinoids into two groups. However, a recent phylogenetic analysis indicates that there are actually four subgroups within Paracrinoidea (Fig. 5). These subgroups are largely defined by whether or not the paracrinoid has respiratory structures, and by the shape of the food grooves. This differs from previous studies, which suggested that only one feature defined the division of Paracrinoidea into two subgroups: either the presence or absence of respiratory structures or the shape of the food grooves, but not both.
Paracrinoids are an especially interesting group of echinoderms because of their unusual morphologies, which evolved in a relatively short amount of time. By contrast, blastoids, a Palaeozoic echinoderm group that lived for about 200 million years, had plenty of time for dramatic environmental changes, changing community interactions, and the evolution of new features. Many species of different groups (echinoderms and other animals) look very similar in the same time period, but as time goes on, changes in body shape happen, and over a period of hundreds of millions of years those animals look very different from when they first appeared. Paracrinoids are unusual because they lived for such a short time period but had a diversity of body plans that researchers would expect to see in a group that had been around for much longer. Some paracrinoids do have features that we can recognize from other echinoderms, and knowledge of these other groups can improve our understanding of paracrinoids. To learn more about paracrinoid features, then, researchers must undertake large-scale phylogenetic analyses with multiple echinoderm groups, including paracrinoids.
Paracrinoids, echinoderms that thrived for a fairly short period of time hundreds of millions of years ago, had highly unusual body shapes compared with other fossil echinoderms from the same time. This has often hindered our understanding of these organisms, but advances in our knowledge of echinoderm features and phylogenetic methods have allowed researchers to begin to examine relationships in this group. Future work aims to digitally reconstruct selected paracrinoid species to begin to learn how these animals lived. These reconstructions, along with the evolutionary analyses discussed here, will be used to investigate rates of evolutionary change and functional morphology, shedding light on the palaeobiology of this puzzling fossil group.
Suggestions for further reading:
Frest, T. J., Strimple, H. L. & Coney, C. C. Paracrinoids (Platycystitidae) from the Benbolt Formation (Blackriverian) of Virginia. Journal of Paleontology 53, 380–398 (1979).
Frest, T. J. & Strimple, H. L. A new comarocystitid (Echinodermata: Paracrinoidea) from the Kimmswick Limestone (Middle Ordovician), Missouri. Journal of Paleontology 56, 358–370 (1982).
Guensburg, T. E. The Stem and Holdfast of Amygdalocystites florealis Billings, 1854 (Paracrinoidea): Lifestyle Implications. Journal of Paleontology 65, 693-695 (1991). (DOI: 10.1017/S0022336000030791)
Kesling, R. V. Cystoidea. In Treatise on Invertebrate Paleontology, Part S, Echinodermata 1 (ed. Moore, R. C.) S85–S262 (Geological Society of America and University of Kansas, 1967).
Parsley, R. L. & Mintz, L. W. North American Paracrinoidea: (Ordovician: Paracrinozoa, New, Echinodermata). Bulletins of American Paleontology 68, 1–113 (1975).
Sumrall, C. D. & Deline, B. A new species of the dual-mouthed paracrinoid Bistomiacystis and a redescription of the Edrioasteroid Edrioaster priscus from the upper Ordovician Curdsville member of the Lexington limestone. Journal of Paleontology 83, 135–139 (2009). (DOI: 10.1017/S0022336000058194)
Sumrall, C. D. A model for elemental homology for the peristome and ambulacra in blastozoan echinoderms. In Echinoderms: Durham. (eds Harris, L. G., Böttger, S. A., Walker, C. W. & Lesser, M. P.) 269–276 (CRC Press, 2010).
Sumrall, C. D. & Waters, J. A. Universal elemental homology in glyptocystitoids, hemicosmitoids, coronoids and blastoids: Steps toward echinoderm phylogenetic reconstruction in derived Blastozoa. Journal of Paleontology 86, 956–972 (2012). (DOI: 10.1666/12-029R.1)
1The University of Tennessee, Knoxville, 602 Strong Hall, 1621 Cumberland Ave, Knoxville, TN 37916, USA. |
This is a lesson for intermediate students that I thought would be interesting to share, mainly because of the video support, which I selected only after wasting some time on very poor-quality videos, or on good-quality videos that, unfortunately, were not appropriate for this level.
Step 1. What is a stereotype?
A stereotype is a widely held but fixed and oversimplified image or idea of a particular type of person, related to their race, nationality, sexual orientation, and so on.
Step 2. Brainstorming Ideas.
Ask the class: What do you think of when you hear the word British? Give them one or two minutes to think, and then call on a few students to give you their answers. Play the video British Stereotypes.
Step 3. Brainstorming
In pairs, students try to answer the same question but, this time, about Spain and the Spaniards. Embedded below are some of my students’ answers. Do you agree?
Step 4. Speaking: National Stereotypes
Ask students whether they agree or disagree with the following National Stereotypes
1. The British are violent mad football freaks
2. The Italians are good lovers but bad workers
3. The Chinese eat everything that moves
4. The Germans are very punctual
5. The Swiss love clocks
Have you ever wondered how we sound to speakers of other languages when we speak our native language? Some languages are easy to imitate, such as Italian or German, but I would never have guessed how a Spanish-speaking native sounds to the rest of the world.
In this video the British sketch comedian Catherine Tate volunteers to translate into seven different languages. Hilarious! And I hope nobody takes offence!
Step 6. Speaking. Students in pairs answer the following questions about stereotypes:
♥ What do people think of when they think of Spain and the Spaniards? Do you think these stereotypes are true or false?
♥ Do you know of any stereotypes about British people?
♥ What are some stereotypes you know of about women?
♥ What are some stereotypes about men?
♥ What stereotypes exist about people who are blonde?
♥ Do you think some stereotypes are true?
♥ What stereotypes exist about religion?
Our Kspace Kimberley teacher resources include creative projects, assignment questions and an online quiz, all linked to the Australian Curriculum.
They were developed for use with our Kids learning space, where videos, photographs and background help provide answers and inspiration for students.
Setting the scene
Kspace takes children to the Kimberley in 1990. The Kimberley is a distinctive landscape of rugged beauty, with a diversity of wildlife and a rich Indigenous history including deep cultural connections that continue today.
Questions children can keep in mind if visiting this scene at the Museum are:
- What animals are seen, and what do they reveal about the location and climate?
- What are some of the features of the landscape?
- Which people do we encounter, and what are they doing? What are their roles and relationship to the region?
Narrative and gameplay
Our free, printable visitor access guide gives a sense of what happens in the game before you visit the Museum. It includes storyboards on the Kimberley narrative and gameplay and a detailed description of the Kspace experience.
This guide is also helpful for students with hearing impairment, learning difficulties or limited mobility, who may need to prepare before they visit.
Primary source study
A stone spearhead from the National Museum's collection helps demonstrate a long Aboriginal tradition and the impact of European settlement on the Kimberley region of Western Australia.
Ten multiple choice questions for students to demonstrate their knowledge of the Kimberley region. The quiz questions and answers are also available in a printable version (PDF 100kb).
Suggested projects for children to make and do.
- Draw a picture or create a diorama of a famous Kimberley feature, such as a gorge, mountain range or waterfall.
- Produce two pictures of the same place, one in the wet season and the other in the dry season.
- Draw a picture of people engaged in an economic or cultural activity in the Kimberley.
- Create a map of the Kimberley showing major rivers, roads, towns and tourist spots.
- Create a map of the Kimberley showing Aboriginal language groups.
- Write and illustrate a tourist brochure for the Kimberley region.
- Make a poster featuring animals found today in the Kimberley. Include at least one amphibian, reptile, mammal and bird, and separate them into introduced and native species.
- Pretend you are a journalist covering the story of Jandamarra. Write an article or record a news broadcast describing a key event or series of events.
- Create a PowerPoint presentation looking at some of the different social and economic activities people undertake in the Kimberley.
Suggested assignment questions encouraging children to think and write.
- What does the term ‘caring for country’ mean to Indigenous people in Australia?
- Identify and discuss some of the positive and negative effects of agriculture on Indigenous communities in the Kimberley.
- What relevance does the story of Jandamarra have to a wider Australian audience?
- Australia’s major cities are mainly in the south. What are some of the limitations to a major population centre in the Kimberley?
- Discuss some of the ideas about how the boab tree came to the Kimberley.
- The Kimberley is home to diverse wildlife but many species are endangered. What are some of the main threats facing these species?
Note: inspiration and answers for online activities, creative projects and written assignments can be found in the Kids learning space |
Teens experience many changes as they leave childhood and move toward adulthood. Not only are they becoming more independent, they are making serious life decisions such as who to date and which college to go to. Most parents want their teens to turn into moral people. Eighty-five percent of parents feel that schools should teach values in the classroom, reports the Harvard Education Letter, a publication of Harvard's Graduate School of Education. Teens' moral development often tracks their age.
Stage of Development
Lawrence Kohlberg, professor and author, developed several stages of moral development in the late 1960s and early 1970s. His work elaborates on the earlier theories of well-known psychologist Jean Piaget. According to Kohlberg, teens fall into the fourth stage of moral development, which is "responsibility to the system." During this stage, teens begin to connect the consequences of their actions, shoplifting for example, to the world around them. If a teen considers what would happen if everyone stole items from the store, she is probably well on her way to age-appropriate moral development.
During the teen years, parents might be surprised to see the dramatic rise in moral thinking. This doesn't mean that teens always do the moral thing, but they can see why it is right and how it benefits and affects others. Most teens feel that it is important to help others, respect authority, honor responsibilities and obligations, and follow local, state and federal laws. They understand that failing to do so could alter the social structure around them. During this stage of moral development, many teens don't feel that there is ever a good reason to break the law and are likely to side with the government if conflicts or disputes arise between the law and citizens.
While some aspects of a teen's development happen naturally, others need some assistance. Your child isn't born knowing what is moral and what isn't. She has to be taught and shown it. Your teen is faced with moral dilemmas every day, from entering into a sexual relationship to trying drugs or alcohol. Parents should instill their ideas of proper morals from an early age. Many schools use character development programs that help teach children moral behaviors.
When There is a Problem
Teens aren't always going to make the right choice and will inevitably do something that goes against their moral upbringing. Lacking sound morals, however, plays a big role in teen crime, according to a 2012 article in The Daily Mail, citing a Cambridge University study. Teens who aren't raised to become moral people are more likely to commit crimes, including burglary and car theft. If you worry that your teen might be struggling with her moral identity, talk to a mental health professional who can help her work through the issues. Teens who foster close relationships with parents are less likely to engage in immoral behaviors, according to PBS.org. Spending more time with your teen is an effective way to help her make better choices in the future.
Copper's stiffness, as measured by the Young's modulus, is 129.8 gigapascals (129.8 GPa). Stiffness is distinct from ductility, and copper is also a very ductile material: it can tolerate being pulled into very thin wire without breaking.
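As a rough illustration of what this modulus means in practice, the Python sketch below (not from the original article; the wire dimensions and load are assumed purely for illustration) estimates how far a copper wire stretches elastically under a hanging weight:

```python
import math

E = 129.8e9          # Young's modulus of copper, Pa (129.8 GPa)

# Assumed example: a 2 m copper wire, 1 mm in diameter, holding a 5 kg mass.
length = 2.0                     # m
diameter = 1.0e-3                # m
force = 5.0 * 9.81               # N (weight of the mass)

area = math.pi * (diameter / 2) ** 2          # cross-sectional area, m^2
stress = force / area                         # Pa
strain = stress / E                           # dimensionless (Hooke's law)
elongation = strain * length                  # m

print(f"stress  = {stress / 1e6:.1f} MPa")    # ~62.5 MPa
print(f"strain  = {strain:.2e}")              # ~4.8e-04
print(f"stretch = {elongation * 1000:.2f} mm")  # ~0.96 mm
```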
This high ductility is due to the electron arrangement of the copper atom, which has a filled d-shell with one s-orbital electron above it. This makes its metallic bonds relatively weak. Copper shares this property with certain other metals such as silver and gold.
Copper's ability to be pulled into thin wire makes it ideal for electrical wire and jewelry. Copper is soft, however, and is often alloyed with other metals such as zinc to provide strength. |
The exoplanet, called WASP-39 b, is categorized as a gas giant by NASA and orbits a sunlike star. WASP-39 b has a mass roughly the same as Saturn's and a diameter about 1.3 times that of Jupiter, according to NASA.
Unlike our solar system’s gas giants Jupiter and Saturn, which orbit our sun from a great distance, WASP-39 b orbits its sun at about one-eighth the distance between our sun and Mercury.
WASP-39 b was first discovered in 2011 and previous observations from NASA’s Hubble and Spitzer space telescopes found that its atmosphere contained water vapor, sodium, and potassium.
"Webb’s unmatched infrared sensitivity has now confirmed the presence of carbon dioxide on this planet as well," NASA said in a statement on Thursday.
NASA’s Webb research team used the space telescope’s Near-Infrared Spectrograph (NIRSpec) to observe the carbon dioxide surrounding WASP-39 b.
A transmission spectrum of the hot gas giant exoplanet WASP-39 b, captured by Webb’s Near-Infrared Spectrograph (NIRSpec) on July 10, 2022, reveals the first clear evidence for carbon dioxide in a planet outside the solar system.
"As soon as the data appeared on my screen, the whopping carbon dioxide feature grabbed me," said Zafar Rustamkulov, a graduate student at Johns Hopkins University and member of the JWST Transiting Exoplanet Community Early Release Science team, which undertook this investigation. "It was a special moment, crossing an important threshold in exoplanet sciences."
Why does this matter?
No telescope has been able to measure and capture the subtle differences in light and colors in a planet’s atmosphere outside our solar system like the James Webb Space Telescope. Being able to see this spectrum will allow researchers to measure the abundances of specific gases such as water and methane, NASA explained.
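To give a feel for the numbers involved, here is a toy Python sketch (an illustration only; the radii are assumed round values, not Webb measurements) of the transit-depth arithmetic that underlies a transmission spectrum:

```python
# Transit depth: the fraction of starlight blocked when the planet crosses
# its star, depth = (R_planet / R_star)^2. An absorbing atmosphere makes the
# planet look slightly larger at wavelengths where a gas (such as CO2 near
# 4.3 microns) absorbs, so the measured depth rises there.

R_JUP_PER_SUN = 0.10045   # Jupiter radii per solar radius (approx.)

r_star = 0.9              # assumed stellar radius, in solar radii
r_planet = 1.3            # planet radius, in Jupiter radii (~1.3 for WASP-39 b)

depth = (r_planet * R_JUP_PER_SUN / r_star) ** 2
print(f"baseline transit depth ~ {depth * 100:.2f}% of the star's light")

# A thin absorbing layer of height dh (in Jupiter radii) deepens the transit:
dh = 0.01
depth_with_layer = ((r_planet + dh) * R_JUP_PER_SUN / r_star) ** 2
print(f"extra dimming from the layer ~ {(depth_with_layer - depth) * 1e6:.0f} ppm")
```

Differences of a few hundred parts per million, wavelength by wavelength, are exactly the subtle signals the text describes.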
"Detecting such a clear signal of carbon dioxide on WASP-39 b bodes well for the detection of atmospheres on smaller, terrestrial-sized planets," said Natalie Batalha of the University of California at Santa Cruz, who leads the team.
If researchers are able to conclusively say what a planet’s atmosphere contains, it could help unlock the secrets to its origin.
"Carbon dioxide molecules are sensitive tracers of the story of planet formation," said Mike Line of Arizona State University, another member of this research team. "By measuring this carbon dioxide feature, we can determine how much solid versus how much gaseous material was used to form this gas giant planet. In the coming decade, JWST will make this measurement for a variety of planets, providing insight into the details of how planets form and the uniqueness of our own solar system."
The observations made of WASP-39 b are just the beginning, according to NASA.
The discovery of carbon dioxide on WASP-39 b is part of a larger investigation that will involve utilizing multiple instruments from Webb, as well as observations on two other exoplanets.
"The goal is to analyze the Early Release Science observations quickly and develop open-source tools for the science community to use," explained Vivien Parmentier, a co-investigator from Oxford University. "This enables contributions from all over the world and ensures that the best possible science will come out of the coming decades of observations."
James Webb Space Telescope is hard at work
The James Webb telescope is peering farther into the universe than ever before, and now, the celestial objects it’s finding along the way need names.
The International Astronomical Union, the organization tasked with naming celestial objects, is launching a contest to name 20 exoplanetary systems — or planets orbiting other stars — that the long-awaited telescope has discovered. Organizers say the contest, NameExoWorlds 2022, "invites communities across the globe to connect their own cultures to these distant worlds."
The James Webb — the world’s largest and most powerful telescope — launched in December 2021 and traveled a million miles to its final destination.
With Webb, scientists hope to glimpse light from the first stars and galaxies that formed 13.7 billion years ago, just 100 million years from the universe-creating Big Bang. The telescope also will scan the atmospheres of alien worlds for possible signs of life.
Heather Miller contributed to this report. This story was reported from Los Angeles. |
The Equation of a Straight Line
The graph of a first degree polynomial is always a straight line. The graph of a second degree polynomial is a curve known as a parabola; polynomials of higher degree produce more complicated curves. Skill in coördinate geometry consists in recognizing this relationship between equations and their graphs. Hence the student should know that the graph of any first degree polynomial y = ax + b is a straight line, and, conversely, any straight line has for its equation, y = ax + b.
Sketching the graph of a first degree equation should be a basic skill. See Lesson 33 of Algebra.
Example. Mark the x- and y-intercepts, and sketch the graph of
y = 2x + 6.
The x-intercept is the root. It is the solution to 2x + 6 = 0, which is x = −3.
The y-intercept is the constant term, 6.
Now, what does it mean to say that y = 2x + 6 is the "equation" of that line?
It means that every coördinate pair (x, y) that is on the graph solves that equation. (That's what it means for a coördinate pair to be on the graph of any equation.) Every coördinate pair (x, y) on that line is (x, 2x + 6).
That line, therefore, is called the graph of the equation y = 2x + 6. And y = 2x + 6 is called the equation of that line.
Every first degree equation has for its graph a straight line. (We will prove that below.) For that reason, functions or equations of the first degree -- where 1 is the highest exponent -- are called linear functions or linear equations.
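For readers who want to check such statements numerically, the small Python sketch below (an addition to the original lesson) computes the intercepts of y = ax + b and tests whether a coördinate pair solves the equation:

```python
def intercepts(a, b):
    """For the line y = a*x + b, return (x-intercept, y-intercept)."""
    # y = 0  =>  x = -b/a (assumes a != 0, i.e. the line is not horizontal)
    return -b / a, b

def on_line(a, b, x, y, tol=1e-9):
    """True if the coordinate pair (x, y) solves y = a*x + b."""
    return abs(y - (a * x + b)) < tol

# The example from the text: y = 2x + 6.
print(intercepts(2, 6))        # (-3.0, 6) -- matches the graph
print(on_line(2, 6, 1, 8))     # True: (1, 8) lies on the line
print(on_line(2, 6, 1, 5))     # False: (1, 5) does not
```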
Problem 1. Mark the x- and y-intercepts, and sketch the graph of
y = −3x − 3
The x-intercept is the solution to −3x − 3 = 0. It is x = −1. The y-intercept is the constant term, −3.
Problem 2. Sketch the graph of y = −4.
An equation of the form y = A number, is a horizontal line.
See Lesson 33 of Algebra, the section "Vertical and horizontal lines."
The slope-intercept form
This linear form
y = ax + b
is called the slope-intercept form of the equation of a straight line. Because, as we shall prove presently, a is the slope of the line (Topic 8), and b -- the constant term -- is the y-intercept.
This first degree form
Ax + By + C = 0
where A, B, C are integers, is called the general form of the equation of a straight line.
Theorem. The equation
y = ax + b
is the equation of a straight line with slope a and y-intercept b.
For, a straight line may be specified by giving its slope and the coördinates of one point on it. (Theorem 8.3.)
Therefore, let the slope of a line be a, and let the one point on it be its y-intercept, (0, b).
Then if (x, y) are the coördinates of any point on that line, its slope is

(y − b)/(x − 0) = a.
On solving for y,
y = ax + b.
Therefore, since the variables x and y are the coördinates of any point on that line, that equation is the equation of a straight line with slope a and y-intercept b. This is what we wanted to prove.
The slope of a straight line -- that number -- indicates the rate at which the value of y changes with respect to the value of x. (Topic 8.)
Problem 3. Name the slope of each line, and state the meaning of each slope.
a) y = 2x + 6
The slope is 2. This means that y increases 2 units for every 1 unit of x.
c) y = x
The slope is 1. This means that y increases 1 unit for every 1 unit of x. This is the identity function, Lesson 5.
d) 3x + 3y = 1
It is only when y = ax + b that the slope is a. Therefore, on solving for y: y = −x + 1/3. The slope is −1. This means that y decreases 1 unit for every unit that x increases.
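That last manipulation can be captured in a couple of lines. The sketch below (not from the original page) converts the general form Ax + By + C = 0 into slope-intercept form:

```python
def slope_intercept(A, B, C):
    """Convert Ax + By + C = 0 to y = ax + b. Assumes B != 0."""
    a = -A / B          # slope
    b = -C / B          # y-intercept
    return a, b

# Problem 3d rewritten in general form: 3x + 3y - 1 = 0.
a, b = slope_intercept(3, 3, -1)
print(a, b)   # -1.0 0.333... : slope -1, y-intercept 1/3, as found above
```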
Heat exchangers are devices used to transfer heat from one medium to another. The transfer of heat mainly takes place by the processes of conduction and convection. Depending on various parameters, heat exchangers can be classified into many types, and we at AIC Heat Exchanger offer all of these under one roof. The types of heat exchangers used most extensively are the following:
- Shell and tube type: This is the most commonly used heat exchanger. As the name suggests, it consists of a shell through which the cooling medium flows and a stack of tubes carrying the fluid to be cooled. It can be further classified by the direction of flow, viz. parallel flow or counter flow.
- Tubular type: This is almost the same as the shell and tube type, the difference being that the tubes used in this type of heat exchanger are concentric, i.e. one tube is placed inside the other. One tube carries the fluid to be cooled whereas the other carries the cooling medium.
- Plate heat exchanger: These heat exchangers have found their use in various industries. They consist of a stack of plates which provide more surface area for heat transfer, thereby making them very efficient. They can also be used for heating a medium in addition to cooling it.
- Fin heat exchanger: These are very similar to the plate type heat exchanger except that they have fins fitted between the plates. The fins serve two purposes: they support the plates, and they increase the surface area further.
Designing these heat exchangers involves many parameters and heat-transfer calculations; a few important ones to keep in mind are given below:
- Material: Material composition plays a very important role in the design of a heat exchanger. The tubes or plates inside the heat exchanger should have a high heat transfer coefficient to facilitate better heat transfer, whereas the body of the heat exchanger itself should have a low heat transfer coefficient to ensure that heat is not radiated outside.
- Fluid used: The material used in the heat exchanger should also be as resistant as possible to the fluid used. This ensures less chemical reaction between them, which helps maintain efficiency.
- Maintenance envelope: The heat exchanger should be designed to be maintenance friendly. It should have provisions for cleaning the tubes or repairing them in case of any damage.
- Efficiency: This is also an important factor in the design, but it should not be confused with the effectiveness of the heat exchanger. A heat exchanger might be very effective but not efficient, or the other way around. The efficiency of a heat exchanger is defined as the ratio of the temperature difference actually recovered to the maximum temperature difference that could be recovered (see the sketch after this list).
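Here is a minimal Python sketch of that definition (the temperatures are assumed illustrative values, not design data):

```python
def exchanger_efficiency(t_hot_in, t_hot_out, t_cold_in):
    """Ratio of the temperature difference actually recovered to the
    maximum temperature difference that could be recovered."""
    recovered = t_hot_in - t_hot_out
    recoverable = t_hot_in - t_cold_in
    return recovered / recoverable

# Assumed example: hot stream cooled from 90 C to 50 C against 30 C coolant.
eff = exchanger_efficiency(90.0, 50.0, 30.0)
print(f"efficiency = {eff:.0%}")   # 67% -- real-world losses keep this below 100%
```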
Considering the various losses due to friction, corrosion and other factors, obtaining an efficiency of 100% is impossible, and hence heat exchangers are designed with all of the above factors in mind. We at AIC Heat Exchanger offer a wide variety of this equipment for domestic and industrial applications. Visit us today!
There’s no denying the fact that we live in a time where technology has become less artificial and more intelligent. Whether we talk about AI applications or the applications of its subsets in particular (machine learning and deep learning), the scope is far beyond what humans could have or can imagine.
From asking Alexa to order our pizza to unlocking our phones through facial recognition, we have all explored AI applications in our daily lives. Given that, would it be strange to learn that AI applications have moved beyond our everyday routines and are now taking on space (the Indian moon mission, Chandrayaan-2, for instance)?
Space exploration is, of course, a vast topic, and human intelligence needs something to complement it in order to comprehend the intricacies of space. There can hardly be a better tool than AI for that.
AI Applications: Role of AI in Space Exploration
- Space exploration gives rise to humongous amounts of data that cannot be analyzed through human intelligence alone. That is where Artificial Intelligence applications score. By analyzing and deriving meaning from the data, AI can change the trajectory of space exploration. The data can help researchers find life on new planets, can help identify and map patterns that humans could not, and can reveal which planets have the right conditions to support life.
- The rovers (robots) currently roaming the surface of Mars are required to make decisions without specific commands from mission control. It is AI applications that make this possible. The NASA Curiosity rover, for example, can move on its own while avoiding obstacles on the way and determining the best route to travel (a toy route-finding sketch appears after this list).
- Much of the data that we receive from space comes in the form of images. The challenge, however, is to decode those images and extract the needed information. Machine learning can help here. The NASA Frontier Development Lab and tech giants such as IBM and Microsoft have come together to leverage machine learning as a solution for solar storm damage detection, atmosphere measurement, and determining the 'space weather' of a given planet through magnetosphere and atmosphere measurements. The same techniques can also be used for resource discovery in space and to identify suitable planet landing sites.
- Machine Learning, a subset of Artificial Intelligence, had a role to play in the successful landing of the SpaceX Falcon 9 at Cape Canaveral Air Force Station in 2015. It identified the best way to land the rocket, using real-time data to facilitate route prediction.
- Through AI applications, the geological makeup and history of a planet can be learned. Not only that, AI can also send, analyze, and classify images of the planet and decide on the next best action.
- Deep Learning, a subset of Artificial Intelligence, can be applied to automatic landing, intelligent decision-making, and fully automated systems.
- The new generation of spacecraft, courtesy of Artificial Intelligence applications, will be more independent, self-sufficient, and autonomous. AI will go beyond human limits to identify findings and send information back to Earth.
- AI applications can optimize planetary tracking systems, enable smart data transmission, and minimize the risk of human error (through predictive maintenance).
Achievements of AI – Past, Present, and Future
- Earth Observing-1 – The satellite EO-1 (Earth Observing-1) was successful at gathering images of natural calamities. Its onboard AI began taking pictures of calamities even before the ground crew knew that an incident had taken place. It was the first satellite –
- to map active lava flows from space;
- to measure a facility’s methane leak from space;
- to track re-growth in a partially logged Amazon forest from space.
- SKICAT – SKICAT (Sky Image Cataloging and Analysis Tool) classified what was beyond human capability, cataloging approximately a thousand objects in low-resolution imagery during the second Palomar Sky Survey.
- Kepler data – With NASA and Google working together, AI made 2017 the year of the discovery of two obscure planets:
- Kepler-90i, in the Kepler-90 system.
- Kepler-80g, in the Kepler-80 system.
- CIMON – The Crew Interactive Mobile Companion is a head-shaped robot used on the International Space Station. The device is an AI-based assistant for astronauts. It is capable of hearing and seeing, and it serves by searching for objects, managing inventory, documenting experiments, and handling videography and photography.
- GPS in Space – The NASA Frontier Development Lab has been working on an AI application that would do the job of a GPS in space and make it easier to explore Titan, Mars, or even the Moon. The use of GPS and the other GNSS systems in Medium Earth Orbit (MEO), Geostationary Orbit (GEO), and beyond, including cislunar space (the region between the Earth and the Moon), is "an emergent capability," according to Miller, the Positioning, Navigation and Timing (PNT) policy lead for the NASA Goddard Space Flight Center.
Now that we've discussed the past, present, and future of space exploration, it would be an injustice to miss India's recent achievement – the Indian moon mission, Chandrayaan-2.
AI in the Indian Moon Mission – Chandrayaan-2
India's second moon mission, Chandrayaan-2, has been a defining episode in the history of space exploration. But as we were busy noting the indelible mark it made, something else was happening: the integration of Artificial Intelligence with Chandrayaan-2's rover, Pragyan.
The Indian Space Research Organisation delivered Pragyan – a solar-powered robotic vehicle that was to explore the lunar surface on its six wheels.
Pragyan comprised –
- LIBS (Laser-Induced Breakdown Spectroscope) from LEOS (Laboratory for Electro-Optic Systems), Bengaluru. It was to identify the elements present near the landing site.
- APXS (Alpha Particle X-ray Spectroscope) from the Physical Research Laboratory (PRL), Ahmedabad. It was to inspect the composition of the elements identified by LIBS near the landing site.
Artificial Intelligence enabled Chandrayaan-2's rover in the following ways –
- The AI-powered rover Pragyan could communicate with the lander. It featured motion technology that was to help the rover move across and navigate the lunar surface.
- Not only that, the artificial intelligence algorithms could also help the rover detect traces of water and other minerals on the lunar surface.
- Through AI, the rover could send back images that would have been used for research and testing.
Concluding Notes – AI has infinite potential in space exploration. It is fair to say that Artificial Intelligence will prove to be a defining enabler of the space revolution. There is so much that we have already seen, and so much more that we cannot yet imagine.
What we can be sure of, however, is that the right time to leverage these opportunities and serve the field with futuristic solutions is right now.
And for that, here’s a resource for you –
Springboard's data science, data analytics, and Artificial Intelligence/Machine Learning online learning programs come with 1:1 mentoring, a project-based curriculum, and career coaching. These courses are the need of the hour (you know why!). The best part is that they are industry-focused and job-oriented, designed especially for technology enthusiasts like you, to equip them with a career that matters. |
Teaching English as a Second Language (ESL) in China is for the most part pleasant and exciting. Students are eager to learn. However, one of the frustrations ESL teachers often voice is that students seem to make the same mistakes repeatedly. Learners will often transfer the rules of their first language to express something in their second language. This transference happens when they have insufficient knowledge of the rules of the second language. In China, students fall back on the rules of their first language (Mandarin) when they do not know the rules of the second language (English). The result is a poor form of English, informally referred to as "Chinglish". The errors that occur are also called language interference errors. These errors affect students' academic performance in English. Foreign teachers with limited knowledge of Mandarin may not even know why the same kinds of errors are being made repeatedly. Teachers feel frustrated and discouraged. It is hard to find textbooks that provide information on common interference errors and ways to "teach" them. Knowing where these errors come from may guide teachers in dealing with them effectively. In this article we will identify some of the most common errors made by Chinese students in writing, as well as offer some strategies for teachers to use in the ESL classroom.
There are a number of causes leading to language interference errors. Errors are chiefly due to differences between the two languages, both structural and phonological. The greater the difference, the more acute the learning difficulties. The differences between English and Mandarin are many. These differences lead to confusion over the appropriate gender and number inflection for subject and object pronouns. For example, students confuse "he" with "she" and "him" with "her" and vice versa, because spoken Mandarin has no pronouns indicating the gender of the object or subject. Even an intermediate student can be heard saying, "I love my husband. She is so handsome." In Mandarin sentences, verbs frequently appear in the final position, as opposed to English verbs, which appear in the middle of sentences. Another big difference between the languages is that in Mandarin, nouns stay the same, and "counting words" are used to indicate plural. Students do not add the -s to plurals. It is common to hear sentences like "Monkeys like to eat banana," in which the first noun was pluralized but not the second. This is not only a grammatical error in writing, but happens frequently in speaking too. Mandarin speakers use a specific time phrase to mark the time. Typical sentences found in the writing and speaking of ESL learners are "I yesterday eat cake" and "She eat rice". The correct forms, "I ate cake yesterday" and "She eats rice", would be considered redundant in a Mandarin way of thinking! There is no lexical equivalent for the definite article "the". Students are confused about when to use it and when to omit it, and they often place the definite article in front of a proper name, producing sentences such as "I want to go to the Beijing for the weekend." Mandarin uses double transitions, which English speakers consider redundant; to Mandarin speakers it is logical to say, "Because Kate is English, therefore Kate can speak English." Multi-syllabic words cause confusion for ESL learners, since most words in Mandarin tend to have one morpheme and Mandarin sentences are shorter. Mandarin nouns, adjectives, and adverbs do not take suffixes as they do in English; the word "happy" can be a noun, adverb, and adjective in Mandarin. Many ESL teachers in China consider the incorrect use of adverbs and adjectives the most common interference error. Students produce English such as "You can sing beautiful" instead of "You can sing beautifully". These recurring errors hinder students' performance on English tests and assignments and may also be detrimental to their confidence in using their second language.

As stated earlier, insufficient knowledge of the second language's grammar rules forces students to fall back on the rules of their first language, and language interference errors occur. For example, students repeatedly ignore the agreement between the verb and subject. Another common mistake students make is the use of a comma instead of a period at the end of a sentence: in Mandarin, sentences are separated with a comma. Since many ESL schools put the main focus on teaching communication skills, grammar is often neglected. This poses a big problem for elementary school students. They enter elementary school with acceptable speaking skills, but they have tremendous difficulty writing English sentences and paragraphs. Many schools underestimate the value of teaching grammar at an earlier age, thinking grammar is too abstract.
A lack of age- and developmentally appropriate English grammar resources specifically designed for Chinese children adds to the problem. It is hard to address language interference errors in schools with a no-Mandarin-during-English-time policy. Children do not get the opportunity to make the necessary links and comparisons between English and Mandarin. Again, though knowledge of the students' first language is not compulsory, it may help teachers understand the interference errors made by students.
In general (according to one experienced teacher in China), it is important to reform the way English is taught in Chinese schools. The most crucial improvement needed lies in the adoption of methods aimed at enhancing students' communicative abilities. Instead of making students spend all or most of their time memorizing grammatical rules, English classes should focus on developing the ability to speak and write the language. Toward this goal, the textbooks used in schools for teaching English should be drastically revised or rewritten. Staff recruitment, as well as instruction for parents and auxiliary staff, may be needed if teachers want to make an impact beyond their own classrooms.
Date of post: 2007-04-18 |
The brown trout is an economically important species, particularly due to its popularity with anglers, and stocks are maintained in many areas by artificial introductions (2).
This fish feeds on invertebrates, insect larvae, aerial insects, and molluscs, as well as the occasional fish or frog (4). Spawning occurs between January and March, when females are accompanied by a number of males. The eggs, which are fertilised externally, are covered with gravel by the female. For the first few days after hatching, the young fish (fry) derive their nutrients from their large yolk sacs; they then feed on small arthropods, such as insect larvae (2). The maximum recorded life span of a brown trout is 5 years (4). |
CH30‐EQ3 Observe and analyze phenomena related to acid‐base reactions and equilibrium. [SI, DM]
d. Differentiate between strength (strong versus weak) and concentration (concentrated versus dilute) when referring to acids and bases. (S)
g. Solve problems involving pH, pOH, [H+]/[H3O+], [OH‐], Kw, Ka and Kb. (S)
Overview: Today we discussed what pH is and how to find it using [H+]. We also looked at pOH and how to convert between [OH-], pOH, pH, and [H+]. For the second half of class we went to the library for our final in-class time to work on the final projects (recording starts next week!). I also showed students how to use Adobe Audition to record their podcasts.
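For reference, here is a minimal sketch of the conversions we practiced, assuming 25 °C so that Kw = 1.0e-14 and pH + pOH = 14; the concentration is an invented example, not a value from the class handout:

```python
import math

KW = 1.0e-14  # ion-product constant of water at 25 C

def ph_from_h(h_conc):
    """pH = -log10[H+] (equivalently, [H3O+])."""
    return -math.log10(h_conc)

def poh_from_oh(oh_conc):
    """pOH = -log10[OH-]."""
    return -math.log10(oh_conc)

h = 2.5e-4                  # invented [H3O+] in mol/L
ph = ph_from_h(h)           # 3.60
poh = 14.0 - ph             # 10.40, since pH + pOH = 14 at 25 C
oh = 10 ** -poh             # back-converted [OH-], about 4.0e-11 mol/L
print(round(ph, 2), round(poh, 2), f"{oh:.1e}")
print(h * oh)               # ~1.0e-14: [H+][OH-] recovers Kw
```

The same base-10 logarithm pattern carries over to Ka and Kb problems (pKa = -log10 Ka). |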
A flower garden can bring beauty to the landscape as the flowers attract birds, bees, and butterflies. Flowers can also be cut for use in floral arrangements or dried for use in wreaths. Despite their beauty and useful purposes, there can be disadvantages to flowers.
Some flowers, like purple loosestrife or dame's rocket, self-seed to repopulate not only the area surrounding the plant; the seeds can also float on the wind or water, or be carried by animals or people to other areas where they drop and take root. Vines, like Asian or Japanese honeysuckle, grow quickly and drop seed. The base of a honeysuckle vine can become so dense that a chain saw is required to remove it. Ground covers, like periwinkle, can create dense surface stems and underground rooting systems, making them difficult to eradicate. Ground cover may also prevent nutrients from reaching surrounding plants. The rooting systems of plants like yucca extend underground to the surface, sending up new shoots several feet away from the main plant. Those new shoots can root and send out more underground threads that will subsequently sprout above ground, root, and continue the spreading cycle.
Nature has given some plants protection from animals through poisonous foliage. Lily of the valley is a pleasantly scented spring perennial, but digestive upset and an irregular heartbeat can occur if its foliage is eaten. Consuming the leaves of foxglove, a tall perennial flower, can cause the heart to beat so irregularly that it could be fatal. Seek immediate medical attention for anyone thought to have eaten any part of a poisonous plant.
Flowers produce pollen as part of their reproductive process. For individuals sensitive to flower pollen, blooming flowers like dahlia or sunflower can cause sneezing, itchy eyes, or a runny nose. In addition to pollen allergies that affect the body through inhalation, some plants can cause skin irritation when touched. The foliage of sweet alyssum, an annual ground cover, may irritate the skin, and handling chrysanthemum or the leaves of sunflower can cause a skin rash in some individuals.
Prickly flower stems, like those of roses and bougainvilleas, can poke through the skin if not handled carefully. The stiff, pointed, sword-like leaves of yucca can hurt if brushed against. |
The term "Aztec" is a startlingly imprecise term to describe the culture that dominated the Valley of Mexico in the fifteenth and sixteenth centuries. Properly speaking, all the Nahua-speaking peoples in the Valley of Mexico were Aztecs, while the culture that dominated the area was a tribe of the Mexica (pronounced "me-shee-ka") called the Tenochca ("te-noch-ka"). At the time of the European conquest, they called themselves either "Tenochca" or "Toltec," which was the name assumed by the bearers of the Classic Mesoamerican culture. The earliest we know about the Mexica is that they migrated from the north into the Valley of Mexico as early as the twelfth century AD, well after the close of the Classic Period in Mesoamerica. They were a subject and abject people, forced to live on the worst lands in the valley. They adopted the cultural patterns (called Mixteca-Pueblo) that originated in the culture of Teotihuacán, so the urban culture they built in the fifteenth and sixteenth centuries is essentially a continuation of Teotihuacán culture.
The history of the Tenochca is among the best preserved of the Mesoamericans. They date the beginning of their history to 1168 and their origins to an island in the middle of a lake north of the Valley of Mexico. Their god, Huitzilopochtli, commanded them on a journey to the south and they arrived in the Valley of Mexico in 1248. According to their history, the Tenochca were originally peaceful, but their Chichimec ways, especially their practice of human sacrifice, revolted other peoples who banded together and crushed their tribe. In 1300, the Tenochcas became vassals of the town of Culhuacan; some escaped to settle on an island in the middle of the lake. The town they founded was Tenochtitlan, or “place of the Tenochcas.”
Relations between the Tenochcas and Culhuacan became bitter after the Tenochcas sacrificed a daughter of the king of Culhuacan; so enraged were the Culhuacans that they drove all the Tenochcas from the mainland to the island. There, the Tenochcas who had lived in Culhuacan taught urban culture and architecture to the peoples on the island, and the Tenochcas began to build a city. The city of Tenochtitlan was founded, then, sometime between 1300 and 1375.
The Tenochcas slowly became more powerful and militarily more skilled, so much so that they became allies of choice in the constant conflicts between the various peoples of the area. The Tenochcas finally won their freedom under Itzacoatl (1428-1440), and they began to build their city, Tenochtitlan, with great fervor. Under Itzacoatl, they built temples, roads, and a causeway linking the city to the mainland, and they established their government and religious hierarchy. Itzacoatl and the chief who followed him, Mocteuzma I (1440-1469), undertook wars of conquest throughout the Valley of Mexico and the southern regions of Vera Cruz, Guerrero, and Puebla. As a result, Tenochtitlan grew dramatically: not only did the city increase in size, precipitating the need for an aqueduct system to bring water from the mainland, it also grew culturally as the Tenochcas assimilated the gods of the region into their religion.
A succession of kings followed Mocteuzma I until the accession of Mocteuzma II in 1502; despite a half century of successful growth and conquest, Tenochca culture and society began to suffer disasters under Mocteuzma II. First, tribute peoples began to revolt all over the conquered territories, and it is highly likely that Tenochca influence would eventually have declined by the middle of the sixteenth century. Most importantly, the reign of Mocteuzma II was interrupted by the invasion of the Spaniards under Cortez in 1519-1522.
The most dramatic achievement of the Olmecs was the building of massive stone heads. We aren't sure who is represented by these heads, but archaeologists believe that they may be Olmec kings. Around 300 BC, the Olmec vanished for reasons that vanished with them. We do know, however, that much of their culture and social structure was absorbed by other peoples. The Olmecs, as far as we can tell, are the first link in the chain of Mesoamerican cultural development.
Teotihuacan was, at its apogee in the first half of the 1st millennium CE, the largest city in the pre-Columbian Americas. At its zenith it may have had more than 100,000 inhabitants, placing it among the largest cities of the world in that period. The civilization and cultural complex associated with the site is also referred to as Teotihuacan or Teotihuacano. Although it is a subject of debate whether Teotihuacan was the center of an empire, its influence throughout Mesoamerica is well documented; evidence of Teotihuacano presence, if not outright political and economic control, can be seen at numerous sites in Veracruz and the Maya region. The ethnicity of the inhabitants of Teotihuacan is also a subject of debate; possible candidates are the Nahua, Otomi, or Totonac ethnic groups. It has often been suggested that Teotihuacan was in fact a multiethnic state.
The early history of Teotihuacan is quite mysterious, and the origin of its founders is debated. For many years, archaeologists believed it was built by the Toltec. This belief was based on colonial period texts such as the Florentine Codex which attributed the site to the Toltecs. However, the Nahuatl word “Toltec” generally means “craftsman of the highest level” and may not always refer to the archaeological Toltec civilization centered at Tula, Hidalgo. Since Toltec civilization flourished centuries after Teotihuacan, they cannot be understood as the city’s founders.
In the Late Formative period, a number of urban centers arose in central Mexico. The most prominent of these appears to have been Cuicuilco, on the southern shore of Lake Texcoco. Scholars have speculated that the eruption of the Xitle volcano may have prompted a mass emigration out of the central valley and into the Teotihuacan valley. These settlers may have founded and/or accelerated the growth of Teotihuacan.
Other scholars have put forth the Totonac people as the founders of Teotihuacan, and the debate continues to this day. There is evidence that at least some of the people living in Teotihuacan came from areas influenced by the Teotihuacano civilization, including the Zapotec, Mixtec and Maya peoples. The culture and architecture of Teotihuacan was influenced by the Olmec people, who are considered to be the “mother civilization” of Mesoamerica. The earliest buildings at Teotihuacan date to about 200 BCE, and the largest pyramid, the Pyramid of the Sun, was completed by 100 CE.
Teotihuacán was conquered by northern tribes in 700 AD and began to decline rapidly in its influence over the Mexican peoples. For the two hundred years following the decline of Teotihuacán, the region had no centralized culture or political control. Beginning around 950, a culture based at Tula in northern Mexico began to dominate Central America. These people were known as the Toltecs. They were a war-like people and expanded rapidly throughout Mexico, Guatemala, and the Yucatán peninsula. At the top of their society was a warrior aristocracy which attained mythical proportions in the eyes of Central Americans long after the demise of their power. Around 1200, their dominance over the region faded.
They were important as transmitters of the culture of Teotihuacán, including religion, architecture, and social structure. Their name, in fact, is not a tribal name (the original Toltec tribal names have been lost to us); the word toltecatl simply means "craftsman" in the Nahua languages. "Toltec" was simply the word used to distinguish the Mexican peoples who retained the culture and much of the urban character of Teotihuacán from other peoples; even the Aztecs primarily referred to themselves either by their tribal name (Tenochca) or as "Toltecs."
The Toltecs expanded the cult of Quetzalcoatl, the “Sovereign Plumed Serpent,” and created a mythology around the figure. In Toltec legend, Quetzalcoatl was the creator of humanity and a warrior-god that had been driven from Tula, but would return some day. The Toltecs also originated the Central American ball-game, which was played on a large stone court with a rubber ball. The game was primarily a religious ritual celebrating the victory of god-heroes over the gods of death; as a religious ritual, it involved the human sacrifice of the loser.
The Toltecs conquered large areas controlled by the Maya and settled in these areas; they migrated as far south as the Yucatán peninsula. The culture born of this fusion is called the Toltec-Maya, and its greatest center was Chichén Itzá, on the very tip of the Yucatán peninsula. Chichén Itzá was the last great center of Mayan civilization. The Toltec-Maya cultures greatly expanded the cultural diffusion of Mayan thought, religion, and art north into the Valley of Mexico.
Veneration of Huitzilopochtli, the personification of the sun and of war, was central to the religious, social and political practices of the Mexicas. Huitzilopochtli attained this central position after the founding of Tenochtitlan and the formation of the Mexica city-state society in the 14th century. Prior to this, Huitzilopochtli was associated primarily with hunting, presumably one of the important subsistence activities of the itinerant bands that would eventually become the Mexica.
According to myth, Huitzilopochtli directed the wanderers to found a city on the site where they would see an eagle devouring a snake perched on a fruit-bearing nopal cactus. (It was said that Huitzilopochtli killed his nephew, Copil, and threw his heart on the lake. Huitzilopochtli honoured Copil by causing a cactus to grow over Copil’s heart.) Legend has it that this is the site on which the Mexicas built their capital city of Tenochtitlan. This legendary vision is pictured on the Coat of Arms of Mexico.
According to their own history, when the Mexicas arrived in the Anahuac valley (Valley of Mexico) around Lake Texcoco, the groups living there considered them uncivilized. The Mexicas borrowed much of their culture from the ancient Toltec whom they seem to have at least partially confused with the more ancient civilization of Teotihuacan. To the Mexicas, the Toltecs were the originators of all culture; “Toltecayatl” was a synonym for culture. Mexica legends identify the Toltecs and the cult of Quetzalcoatl with the mythical city of Tollan, which they also identified with the more ancient Teotihuacan.
For most people today, and for the European Catholics who first met the Aztecs, human sacrifice was the most striking feature of Aztec civilization. While human sacrifice was practiced throughout Mesoamerica, the Aztecs, if their own accounts are to be believed, brought the practice to an unprecedented level. For example, for the reconsecration of the Great Pyramid of Tenochtitlan in 1487, the Aztecs reported that they sacrificed 84,400 prisoners over the course of four days, reportedly overseen by Ahuitzotl, the Great Speaker himself.
However, most experts consider these numbers to be overstated. The sheer logistics of sacrificing 84,400 victims would be overwhelming (it works out to roughly one sacrifice every four seconds, around the clock, for four days), and 2,000 is a more likely figure. A similar consensus has developed on reports of cannibalism among the Aztecs.
In the writings of Bernardino de Sahagún, Aztec “anonymous informants” defended the practice of human sacrifice by asserting that it was not very different from the European way of waging warfare: Europeans killed the warriors in battle, Aztecs killed the warriors after the battle.
Accounts by the Tlaxcaltecas, the primary enemy of the Aztecs at the time of the Spanish Conquest, show that at least some of them considered it an honor to be sacrificed. In one legend, the warrior Tlahuicole was freed by the Aztecs but eventually returned of his own volition to die in ritual sacrifice. Tlaxcala also practiced the human sacrifice of captured Aztec warriors.
Unlike the cultures of the Valley of Mexico, the Mayas found their urban centers important only during the Classic period, from 300 to 900 AD. Maya culture, however, has changed little from the Classic period to the modern period, for it was largely tribal and rural throughout the Classic period. What distinguishes Classic from post-Classic Maya culture is the importance of urban centers and their structures in the religious life of the Mayas and the extent of literate culture.
The Mayas were never a "true" urban culture; the urban centers were used almost entirely as religious centers for the rural population surrounding them. Therefore, the decline of the urban centers after 900 AD did not involve titanic social change so much as religious change; some scholars believe that the abandonment of the cities was primarily due to religious proselytizing from the north. Nevertheless, the Classic period saw an explosion of cultural creativity throughout the region populated by the tribes we call "Mayan." They derived many cultural forms from the north, but also devised many innovations that profoundly influenced all subsequent cultures throughout Mesoamerica. Much of Maya culture, particularly the religious reckoning of time, is still a vital aspect of Native American life in Guatemala and Honduras.
Classic Maya culture developed in three regions of Mesoamerica. By far the most important and most complete urban developments occurred in the lowlands of the "central region" of southern Guatemala. This region is a drainage basin about sixty miles long and twenty miles wide, covered by tropical rain forest; the Mayas, in fact, are one of only two peoples to have developed an urban culture in a tropical rainforest. The principal city in this region was Tikal, but the spread of urbanization extended south to Honduras; the southernmost Mayan city was Copan in northern Honduras. In the Guatemalan highlands to the south, Mayan culture developed less fully. The highlands are more temperate and seem to have been the main suppliers of raw materials to the central urban centers. The largest and most complete urban center was Palenque. The other major region of Mayan development was the Yucatán peninsula, making up the southern and eastern portions of modern-day Mexico. This is a dry region and, although urban centers were built in this region, including Chichén Itzá and Uxmal (pronounced "Oosh-mal"), most scholars believe it was a culturally marginal area. After the abandonment of the Classic Mayan cities, the Yucatán peninsula became the principal region of a new, synthetic culture called Toltec-Mayan, which was formed when Toltecs migrating from the north integrated with indigenous Maya peoples.
The Maya civilization shares many features with other Mesoamerican civilizations due to the high degree of interaction and cultural diffusion that characterized the region. Advances such as writing, epigraphy, and the calendar did not originate with the Maya; however, their civilization fully developed them. Maya influence can be detected as far as central Mexico, more than 1000 km (625 miles) from the Maya area. Many outside influences are found in Maya art and architecture, which are thought to result from trade and cultural exchange rather than direct external conquest. The Maya peoples never disappeared, neither at the time of the Classic period decline nor with the arrival of the Spanish conquistadores and the subsequent Spanish colonization of the Americas. Today, the Maya and their descendants form sizable populations throughout the Maya area and maintain a distinctive set of traditions and beliefs that are the result of the merger of pre-Columbian and post-Conquest ideologies.
Paper, generally known by the Nahuatl word amatl, was called huun by the Mayas. The folding books were the products of professional scribes working under the patronage of the Howler Monkey Gods. The Maya developed their huun paper around the 5th century, the same era in which the Romans developed theirs, but Maya paper was more durable and a better writing surface than papyrus. The codices have been named for the cities in which they eventually settled. The Dresden Codex is generally considered the most important of the few that survive.
The Maya had specific techniques to create, inscribe, paint, and design pottery. To begin creating a ceramic vessel, the Maya had to locate the proper resources for clay and temper. The present-day indigenous Maya, who live in Guatemala, Belize, and southern Mexico, still create wonderful ceramics. Prudence M. Rice provides a look at what the current Guatemalan Maya use for clay. Highland Guatemala has a rich geological history shaped mainly by its volcanic past. The metamorphic and igneous rock, as well as the sand and ash from the pumice areas, provide many types of temper. The area also has a range of clays that yield varied colors and strengths when fired. Today's Maya locate their clays in the exposed river systems of the highland valleys, and it is hypothesized that the ancient people obtained their clay by the same method. Given the climatic similarities over the last millennia, it is likely that these same deposits, or similar ones, were used in earlier times.
Once the clay and temper were collected, pottery creation began. The maker would mix the clay with the temper (rock pieces, ash, or sand), which served to strengthen the pottery. Once worked into a proper consistency, the clay was shaped into the desired piece.
A potter's wheel was not used in creating this pottery. Instead, the Maya used coil and slab techniques. The coil method most likely involved forming the clay into long coiled pieces that were wound into a vessel; the coils were then smoothed together to create walls. The slab method used square slabs of clay to create boxes or additions such as feet or lids for vessels. Once the pot was formed, it was set to dry until leather-hard, then painted, inscribed, or slipped. The last step was firing the vessel.
Like the ancient Greeks, the Maya created clay slips from a mixture of clays and minerals, which were then used to decorate the pottery. By the fourth century, a broad range of colors including yellow, purple, red, and orange was being made. Some Mayan painters, however, refrained from using many colors and used only black, red, and occasionally cream. This series of ceramics is termed the "Codex style," as it is similar to the style of the pre-Columbian books.
From the 5th century AD onwards, post-firing stucco was adopted from Teotihuacan. By preparing a thin quicklime wash, the Maya could add mineral pigments that would dissolve and create rich blues and greens, adding to their artistic culture. This post-fire stucco technique was often combined with painting and incising. Incising is carving deeply or lightly into partially dried clay to create fine, detailed designs; this technique was most popular during the Early Classic Period. |
Edema (also known as fluid retention) is swelling caused by the accumulation of abnormally large amounts of fluid in the spaces between the body's cells or in the circulatory system. It is most common in feet, ankles, and legs. It can also affect the eyes, face, brain, and hands. Pregnant women and older adults often get edema, but it can happen to anyone.
Edema is a symptom, not a disease or disorder. In fact, edema is a normal response to injury. Edema becomes a concern when it persists beyond the inflammatory phase. Widespread, long-term edema can indicate a serious underlying health problem.
Signs and Symptoms
These will vary and may include the following:
- Swollen limbs (possibly accompanied by pain, redness, heat)
- Facial puffiness
- Abdominal bloating
- Shortness of breath, extreme difficulty breathing, coughing up blood
- Sudden change in mental state or coma
- Muscle aches and pains
What Causes It?
Some of the following factors may cause edema:
- Sitting or standing for long periods
- Certain medications
- Hormonal changes during menstruation and pregnancy
- Infection or injury to a blood vessel, blood clots, or varicose veins
- Blocked lymph channels (lymphedema)
- Allergies to food or insect bites
- Kidney, heart, liver, or thyroid disease
- High or low blood pressure
- Eating salty foods
- Brain tumor or head injury
- Exposure to high altitudes or heat, especially when combined with heavy physical exertion
What to Expect at Your Doctor's Office
Your health care provider will look for varicose veins, blood clots, wounds, or infections. An x-ray, computed tomography (CT) scan, magnetic resonance imaging (MRI), urine test, or blood test may be necessary. Pulmonary edema, which occurs when fluid builds up in the lungs, can be caused by other diseases, such as cardiovascular disease or by climbing at high altitudes. It can be life threatening and may require hospitalization.
Treatment may involve using compression bandages and pressure sleeves tightened over swollen limbs to help force the body to reabsorb the fluid. Other options include a salt reduction diet, daily exercise, resting with legs elevated above the heart level, wearing support hose, taking a diuretic, and massage.
- Medication for your underlying disorder. Talk to your health care provider.
- Diuretics. For example, loop diuretics or potassium-sparing diuretics. These medicines reduce body fluid levels, but they also deplete important vitamins and minerals, which can result in loss of bone mass. Diuretics may have several other possibly serious side effects.
Surgery may be needed to remove fat and fluid deposits associated with a type of edema called lipedema, or to repair damaged veins or lymphatic glands to reestablish lymph and blood flow.
Complementary and Alternative Therapies
The following nutritional and herbal support guidelines may help relieve edema, but the underlying cause must be addressed. Tell your health care provider about any complementary or alternative therapies (CAM) you are considering. If you are pregnant, or thinking about becoming pregnant, do not use any CAM therapies unless directed to do so by your physician.
Nutrition and Supplements
Following these nutritional tips may help reduce symptoms:
- Eliminate suspected food allergens, such as dairy (milk, cheese, and ice cream), wheat (gluten), soy, corn, preservatives, and chemical food additives. Your provider may want to test you for food allergies.
- Reduce salt intake. If you are taking diuretics, your doctor should give you specific instructions about salt intake.
- Eat foods high in B vitamins and iron, such as whole grains (if no allergy), dark leafy greens (such as spinach and kale), and sea vegetables. If you are taking certain diuretics, your provider may give you specific instructions about getting different nutrients into your diet, such as adding potassium or following potassium restrictions. Potassium is in many vegetables. Follow your provider's instructions strictly.
- Eat natural diuretic vegetables, including asparagus, parsley, beets, grapes, green beans, leafy greens, pineapple, pumpkin, onion, leeks, and garlic. Some of these foods may interact with diuretic medications.
- Eat antioxidant foods, such as blueberries, cherries, tomatoes, squash, and bell peppers.
- Avoid refined foods, such as white breads, pastas, and sugar.
- Eat fewer red meats and more lean meats, cold-water fish, tofu (soy, if no allergy), or beans for protein.
- Use healthy cooking oils, such as olive oil.
- Reduce or eliminate trans fatty acids, found in commercially-baked goods, such as cookies, crackers, cakes, French fries, onion rings, donuts, processed foods, and margarine.
- Avoid alcohol, and tobacco.
- Exercise lightly 5 days a week if your health care provider says you can.
You may address nutritional deficiencies with the following supplements:
- A multivitamin daily, containing the antioxidant vitamins A, C, E, the B-complex vitamins, and trace minerals, such as magnesium, calcium, zinc, and selenium. Many multivitamins contain calcium and potassium, two minerals your doctor may want you to avoid in large quantities if you are taking certain types of medications. Talk to your provider.
- Vitamin C, as an antioxidant.
- If you use diuretics, your doctor may have you take potassium aspartate (20 mg per day), since diuretics flush out potassium from the body and cause a deficiency. DO NOT take extra potassium without informing your doctor. Some diuretics do the opposite and cause potassium to accumulate in the body.
Herbs are generally a safe way to strengthen and tone the body's systems, although they can interact with many medications and have certain side effects. As with any therapy, you should work with your doctor to determine the best and safest herbal therapies for your case before starting treatment, and always tell your provider about any herbs you may be taking. If you are pregnant or nursing, do not use herbs except under the supervision of a provider knowledgeable in herbal therapies. Your doctor may need to strictly monitor your potassium levels if you take certain types of diuretics, and some herbs may be naturally high in potassium. You should not use herbal remedies without first consulting your physician.

You may use herbs as dried extracts (capsules, powders, or teas), glycerites (glycerine extracts), or tinctures (alcohol extracts). Unless otherwise indicated, make teas with 1 tsp. herb per cup of hot water. Steep covered 5 to 10 minutes for leaf or flowers, and 10 to 20 minutes for roots. Drink 2 to 4 cups per day. You may use tinctures alone or in combination as noted.
- Bilberry (Vaccinium myrtillus) standardized extract, for antioxidant support. DO NOT use bilberry if you are on blood-thinning medications.
- Dandelion (Taraxacum officinale). Dandelion leaf is itself a diuretic, so it should not be used while taking diuretic medications. Speak with your doctor. DO NOT use dandelion if you have gall bladder disease, take blood-thinning medications, or have allergies to many plants. Dandelion can interact with many medications, including antibiotics and lithium. Talk to your provider.
- Grape seed extract (Vitis vinifera), standardized extract, for antioxidant support. Evidence suggests that using grape seed extract may improve chronic venous insufficiency, which causes swelling when blood pools in the legs. Grape seed can interact with some medicines, including blood-thinning medications such as warfarin (Coumadin).
- Dry skin brushing. Before bathing, briskly brush the surface of the skin with a rough washcloth, loofa, or soft brush. Begin at your feet and work up. Always stroke in the direction of your heart.
- Cold compresses made with yarrow tea.
- Contrast hydrotherapy involves alternating hot and cold applications. Alternate 3 minutes hot with 1 minute cold. Repeat 3 times to complete one set. Do 2 to 3 sets per day for a short term only. Check with your provider to make sure your heart is strong enough for this therapy.
- Put a pillow under your legs when you're lying down.
- Wear support stockings, which you can buy at most drugstores.
Acupuncture may improve fluid balance.
Therapeutic massage can help lymph nodes drain.
Excessive fluid retention during pregnancy (toxemia) is potentially dangerous to both you and your baby. |
Assessment of Hands-On Elementary Science Projects
This book, by Janet Harley Brown and Richard J. Shavelson, steers the reader towards understanding and selecting a particular measure, tailored to the teacher's practical needs.
- why use performance assessment?
- what do performance assessments measure?
- defining performance assessment
- scoring student performance
- use of performance assessments in the classroom
- examples of a holistic scoring system
- choosing an assessment you can trust, and
- which performance assessment is best for your purpose?
This book is highly readable and can be used in a worksheet format.
The LOC monograph
The LOC monograph (#90-062318), edited by George Hein and prepared at the Center for Teaching and Learning at the University of North Dakota in 1990 under NSF auspices, provides assessment strategies for telecommunications projects (in addition to other dimensions of science education for younger students).
For example, rather than asking about the formula required to change heat states (p. 198ff), you could ask why it takes frozen peas longer to come to a boil than fresh peas - actual thinking rather than regurgitation of a theory.
In the unit Acid Rain from the National Geographic Kids Network project, students are asked to examine data on a map that shows wind patterns and emission sources and to predict where acid rain will become the most serious problem. Rather than administering a general questionnaire about whether student attitudes improved after this project, these researchers interviewed students and elicited nicely specific examples of how scientists might address human problems.
Finally, they applied embedded assessment techniques by building specific writing assignments into particular places in the telecommunications curriculum. This approach provides guidance to the teacher as projects progress, as well as to policy-makers, that "reflect[s] the reality of what it means to do science." (p. 203)
Unfortunately, it is very difficult to obtain a hard copy of this book.
The Educational Testing Service site is dedicated primarily to their standardized tests; under teaching and learning is a list of current performance assessment studies and measures.
These nuggets of assessing understanding crop up rarely in curricula. The good news is that once you focus student inquiry in this vein, all sorts of intriguing extensions will come to mind, many from your own students. The authors have not jumped on the hands-on bandwagon thoughtlessly; they include protocols from their experience to illustrate the contexts in which this method has worked well and where it has not. |
Puffins Word Search
In this word search worksheet, students search for a list of ten words related to the topic of puffins. Words can be found backward, forward, up, down, and diagonally. |
Blind mole rats live in underground burrows that can be quite hypoxic, with oxygen levels getting as low as 7.2 percent (the air we breathe is 21 percent oxygen). A team of researchers in Israel and Germany has determined that Spalax have evolved to tolerate this environment by expressing more of their oxygen-carrying globins than other rats, which helps them survive at low ambient oxygen levels.
The globin family comprises proteins responsible for the delivery and storage of oxygen throughout the body. Hemoglobin (Hb), in red blood cells, transports oxygen from the lungs through the blood to the inner organs. Myoglobin (Mb) is expressed in striated and cardiac muscle; it acts as localized oxygen storage and helps disperse oxygen as needed throughout the relatively large muscle cells.
Ten years ago, two unique mammalian globins were discovered. Neuroglobin (Ngb) is found in neurons of both the central and peripheral nervous systems, and in endocrine organs. Ngb is also expressed in astrocytes in mammals tolerant of hypoxia, such as these mole rats and the deep-diving hooded seal, but not in hypoxia-sensitive animals like mice and rats. Ngb is highly colocalized with mitochondria, and thus with active oxidative metabolism, but its function is not yet clear; it may act in the brain as Mb acts in muscle, supplying oxygen to the mitochondrial respiratory chain, or it may scavenge reactive oxygen species. Cytoglobin (Cygb) is found in fibroblasts, and its role is even more obscure than Ngb's.
Spalax are known to have increased levels of Hb, and their Hb has a high affinity for oxygen. In recent work, the researchers compared the sequences and expression levels of the other globin genes among the different Spalax species with those of the hypoxia-sensitive rat, under both normoxic and hypoxic conditions.
They had access to four different allospecies of the blind mole rat Spalax from Israel. These animals cope with hypoxia to varying degrees, depending on the climates of the regions where they live. Spalax galili live in the Galilee, the northern part of the country, which is cool and moist; they have the most efficient hypoxic adaptation. Spalax judaei live in the Judean desert to the south, which is warm and dry.
These proteins are highly conserved among the Spalax species, so a difference in protein function did not seem to account for their different abilities to tolerate hypoxia. Spalax expressed two- to three-fold more Ngb RNA and protein than rats under normal conditions. Upon short-term severe hypoxic stress, however (five hours at six percent oxygen), normal rats and S. judaei downregulated their Ngb mRNA by almost twofold, but S. galili did not; twenty-two hours at ten percent oxygen were required to bring their Ngb mRNA levels down. Even after hypoxia reduced Ngb mRNA, Ngb protein levels stayed constant in the mole rats, but not in the other rats.
Spalax also expressed two- to three-fold more Cygb mRNA and protein than normal rats under normoxia, but only in the brain, not in the heart or liver. When the animals were placed under hypoxic conditions, Cygb mRNA stayed constant in rat brains but increased twofold in mole rat brains. It increased twofold in rat hearts, but twelvefold in mole rat hearts.
The mole rats also expressed more Mb than normal rats. In neck muscle, which the mole rats use for digging, S. judaei had 27-fold and S. galili 44-fold more Mb mRNA than other rats. Short-term severe hypoxia did nothing to Spalax Mb mRNA levels, but rat Mb mRNA levels increased threefold.
When humans are deprived of oxygen, we lose consciousness within minutes. Subterranean mole rats, in contrast, just continue digging their tunnels. Their higher levels of Hb, Mb, Ngb, and Cygb protein help them survive chronic hypoxia. Perhaps we can exploit this system to help human cells survive hypoxic stresses, such as those precipitated by a heart attack or stroke.
Listing image by NSF |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2016 May 23
Explanation: Why is there more matter than antimatter in the Universe? To better understand this facet of basic physics, energy departments in China and the USA led in the creation of the Daya Bay Reactor Neutrino Experiment. Located under thick rock about 50 kilometers northeast of Hong Kong, China, eight Daya Bay detectors monitor antineutrinos emitted by six nearby nuclear reactors. Featured here, a camera looks along one of the Daya Bay detectors, imaging photon sensors that pick up faint light emitted by antineutrinos interacting with fluids in the detector. Early results indicate an unexpectedly high rate of one type of antineutrino changing into another, a rate which, if confirmed, could imply the existence of a previously undetected type of neutrino as well as impact humanity's comprehension of fundamental particle reactions that occurred within the first few seconds of the Big Bang.
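Loosely speaking, such "disappearance" measurements compare the detected antineutrino rate with expectation. A standard two-flavor approximation for the survival probability is P = 1 - sin²(2θ) · sin²(1.267 Δm² L/E), with Δm² in eV², L in km, and E in GeV. Below is a minimal sketch using illustrative parameter values, not Daya Bay's published results:

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
    """Two-flavor antineutrino survival probability (standard approximation)."""
    phase = 1.267 * dm2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative values: a ~1.6 km baseline and ~4 MeV reactor antineutrinos.
print(survival_probability(0.09, 2.5e-3, 1.6, 0.004))  # ~0.92, an ~8% deficit
```

A disappearance rate that such two-flavor formulas cannot accommodate is one way an additional neutrino type could reveal itself.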
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. |
State governments control much of our everyday dealings. Under the U.S. Constitution's Tenth Amendment, states possess all powers not specifically granted to the federal government. Maryland adopted its first State Constitution in November 1776, its second in June 1851, its third in October 1864, and its fourth, our current State Constitution, in September 1867, each containing a declaration of rights - the State's bill of rights. The source of all power and authority for governing the State of Maryland lies with its citizens. In fact, Article I of the Constitution's Declaration of Rights states: "That all Government of right originates from the People, is founded in compact only, and is instituted solely for the good of the whole; and they have, at all times, the inalienable right to alter, reform or abolish their Form of Government in such manner as they may deem expedient."
The framers of the State Constitution of 1867 followed precedent established in earlier Maryland constitutions and separated the powers of government into three distinct branches. The Constitution explicitly grants each branch certain powers, and the branches exercise certain checks and balances on each other. Those branches are the executive, legislative, and judicial branches, each representing Marylanders' interests in our relations with other governments. |
Wafer-thin magnetic materials developed for future quantum technologies
Two-dimensional magnetic structures are regarded as promising materials for new types of data storage, since the magnetic properties of individual molecular building blocks can be investigated and modified. For the first time, researchers have now produced a wafer-thin ferrimagnet, in which molecules with different magnetic centers arrange themselves on a gold surface to form a checkerboard pattern. Scientists at the Swiss Nanoscience Institute at the University of Basel and the Paul Scherrer Institute published their findings in the journal Nature Communications.
Ferrimagnets are composed of two kinds of magnetic centers that are magnetized at different strengths and point in opposing directions. Two-dimensional, quasi-flat ferrimagnets would be suitable for use as sensors, as data storage devices, or in a quantum computer, since the two-dimensional arrangement allows the magnetization state of the individual atoms or molecules to be selected. For mathematical and geometrical reasons, however, it has so far not been possible to produce two-dimensional ferrimagnets.
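To see why unequal, opposing moments still yield a net magnetization, here is a toy sketch; the moment values and the 4x4 patch size are invented for illustration and are not taken from the study:

```python
import numpy as np

# Toy model, not the paper's calculation: a 4x4 checkerboard patch
# with invented moments for the two kinds of magnetic centers.
m_fe, m_mn = 2.0, -1.5   # opposing moments of unequal strength

n = 4
parity = np.add.outer(np.arange(n), np.arange(n)) % 2  # 0/1 checkerboard
lattice = np.where(parity == 0, m_fe, m_mn)            # alternate the centers

print(lattice.sum())  # 4.0: a nonzero net moment -> ferrimagnetic order
```

With equal strengths (say 2.0 and -2.0), the sum would vanish, which would be an antiferromagnet rather than a ferrimagnet.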
Choice of materials makes the impossible possible
The scientists in Professor Thomas Jung's research groups at the Paul Scherrer Institute (PSI) and the Department of Physics at the University of Basel have now found a method of making a two-dimensional ferrimagnet.
The researchers first produce "phthalocyanines" — hydrocarbon compounds with different magnetic centers composed of iron and manganese. When these phthalocyanines are applied to a gold surface, they arrange themselves into a checkerboard pattern in which molecules with iron and manganese centers alternate. The researchers were able to prove that the surface is magnetic, and that the magnetism of the iron and manganese is of different strengths and appears in opposing directions – all characteristics of a ferrimagnet.
"The decisive factor of this discovery is the electrically conductive gold substrate, which mediates the magnetic order," explains Dr. Jan Girovsky from the PSI, lead author of the study. "Without the gold substrate, the magnetic atoms would not sense each other and the material would not be magnetic."
The decisive effect of the conducting electrons in the gold substrate is shown by a physical effect detected at each magnetic atom using scanning tunneling spectroscopy. The experiments were conducted at various temperatures and thus provide evidence of the strength of the magnetic coupling in the new magnetic material. Model calculations confirmed the experimentally observed effect and indicated that electrons bound near the surface of the gold substrate are responsible for this type of magnetism.
Nanoarchitecture leads to new magnetic materials
"The work shows that a clever combination of materials and a particular nanoarchitecture can be used to produce new materials that otherwise would be impossible," says Professor Nirmalya Ballav of the Indian Institute of Science Education and Research in Pune (India), who has been studying the properties of molecular nano-checkerboard architectures for several years with Jung. The magnetic molecules have great potential for a number of applications, since their magnetism can be individually investigated and also modified using scanning tunnel spectroscopy.
|
Genetic resources are genetic material of current or potential use. In technical terms, "genetic material" means any material of plant, microbial or animal origin, including reproductive and vegetative propagating material, containing functional units of heredity.
In everyday terms genetic resources range from, for example, fully mature plants, animals and microbes to seeds, cuttings, conserved embryos, eggs, and semen.
Diverse genetic resources are important for maintaining an efficient and sustainable farming industry, as they allow the development of varieties and breeds to cope with new demands. Genetic selection also has important consequences for animal health and welfare, and has an important role to play in reducing environmental pollution from livestock.
Defra is responsible for policy on genetic resources for food and agriculture for England and Wales. The conservation and sustainable use of genetic resources contribute directly to Defra's objectives concerning a sustainable, competitive food supply chain, sustainable, diverse and adaptable farming and sustainable management of natural resources. Indigenous livestock breeds have a particular role to play in managing the rural environment and assisting in maintaining wild biological diversity.
The conservation and sustainable use of genetic resources for food and agriculture is a widely supported international objective as a contribution to efforts to achieve global poverty elimination and world food security. Defra, in collaboration with the Department for International Development, has a continuing UK role to play in this area particularly in view of the rich diversity of plant varieties and animal breeds that the UK possesses that are of international interest. |
Gamma (denoted as γ) of an eclipse describes how centrally the shadow of the Moon or Earth strikes the other body. It is the distance of closest approach of the axis of the shadow cone to the center of the Earth (for a solar eclipse) or of the Moon (for a lunar eclipse), stated as a fraction of the equatorial radius of the Earth. The sign of gamma defines, for a solar eclipse, whether the axis of the shadow passes north or south of the center of the Earth; a positive value means north. For a lunar eclipse it defines whether the axis of the Earth's shadow passes north or south of the Moon; a positive value means south. For solar eclipses, the reference is the half of the Earth exposed to the Sun: the "center" is the point where the Sun is directly overhead, which changes with the seasons and is not directly related to the Earth's poles or equator.
The adjoining diagram illustrates solar eclipse gamma: the red line shows the least distance of the shadow axis from the center of the Earth, in this case approximately 75% of the Earth's radius. Because the umbra passes north of the Earth's center, gamma in this example is +0.75.
- If gamma is 0, the axis of the shadow cone is exactly between the northern and southern halves of the sunlit side of the Earth (for solar eclipses) or of the Moon (for lunar eclipses) when it passes over the center.
- If the absolute value of gamma is lower than 0.9972, the eclipse is central: the axis of the shadow cone strikes the Earth, and there are locations on Earth where the Moon can be seen centrally in front of the Sun. Central eclipses can be total or annular; if the tip of the umbra only barely reaches the Earth's surface, the type can change from annular to total (and vice versa) during the eclipse, which is called a hybrid eclipse.
- If the absolute value of gamma is between 0.9972 and 1.0260, the axis of the shadow cone misses the Earth, but because the umbra or antumbra has a certain width, a part of it can, in some circumstances, still touch the Earth in the polar regions. The result is a non-central total or annular eclipse.
- If the absolute value of gamma is between 0.9972 and approximately 1.55 and the special circumstances mentioned above do not occur, the eclipse is partial: the Earth traverses only the penumbra. (These thresholds are summarized in the classification sketch below.)
The solar eclipse of April 29, 2014, with a gamma of 1.0001, constitutes the special case of an annular but non-central eclipse: the axis of the shadow cone barely missed the Earth near the south pole, so no central line can be specified for the zone of annular visibility.
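As a rough illustration of the thresholds above, the following sketch classifies a solar eclipse from its gamma value. It is not from any cited source; the constants are the ones quoted in this article, and the hybrid and polar special cases (which depend on the actual width of the umbra or antumbra) are only flagged, not computed.

```python
# A minimal sketch of the gamma thresholds listed above. The constants
# come from this article; the hybrid and polar special cases depend on
# the actual width of the umbra/antumbra and are only flagged here.

def classify_solar_eclipse(gamma: float) -> str:
    """Classify a solar eclipse by the absolute value of its gamma."""
    g = abs(gamma)
    if g < 0.9972:
        # The shadow axis strikes the Earth: a central eclipse.
        return "central (total or annular; hybrid if the umbra barely reaches Earth)"
    if g <= 1.0260:
        # The axis misses the Earth, but part of the umbra or antumbra
        # may still touch the polar regions.
        return "non-central total or annular possible; otherwise partial"
    if g <= 1.55:
        return "partial (Earth traverses only the penumbra)"
    return "no eclipse"

# The annular, non-central eclipse of 2014 Apr 29 had gamma = 1.0001:
print(classify_solar_eclipse(1.0001))
```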
- J. Meeus: Astronomical Algorithms. 2nd ed., Willmann-Bell, Richmond 2000, ISBN 0-943396-61-1, Chapter 54
- The radius of the penumbra of the Moon in the fundamental plane is about 0.53 to 0.57 of the Earth's radius; see J. Meeus: Mathematical Astronomy Morsels. Willmann-Bell, 2000, ISBN 0-943396-51-4, Fig. 10.c, and J. Meeus: Mathematical Astronomy Morsels III. Willmann-Bell, 2004, ISBN 0-943396-81-6, p. 46.
- J. Meeus: Mathematical Astronomy Morsels III. Willmann-Bell, Richmond 2004, ISBN 0-943396-81-6, Chapter 6
- Fred Espenak: Path of the Annular Solar Eclipse of 2014 Apr 29 |
Australian Government/The Commonwealth Legislature
The legislature of the Commonwealth is the Commonwealth Parliament or Parliament of Australia. The Parliament of Australia is a bicameral parliament consisting of the House of Representatives (the "lower house") and the Senate (the "upper house" or "house of review"). Section 1 of the Constitution of Australia provides that: "The legislative power of the Commonwealth shall be vested in a Federal Parliament, which shall consist of the Queen, a Senate, and a House of Representatives, and which is herein-after called 'The Parliament,' or 'The Parliament of the Commonwealth'."
Queen Elizabeth II is, in her capacity as Queen of Australia, Australia's head of state, but her constitutional functions in Australia are delegated to the Governor-General of Australia.
The House of Representatives consists of 150 members elected from single-member constituencies of approximately equal population. The Senate consists of 76 members: 12 Senators are elected from each of the six states and two from each of the two territories.
The principal function of the Parliament is to pass laws, or legislation. Any Member or Senator may introduce a proposed law (a bill), except for a money bill (a bill proposing an expenditure or levying a tax), which must be introduced in the House of Representatives. In practice, the great majority of bills are introduced by ministers. Bills introduced by other Members are called private members' bills. All bills must be passed by both Houses to become law. The Senate has the same legislative powers as the House, except that it may not amend money bills, only pass or reject them.
The Parliament performs other functions besides legislation. It can discuss urgency motions or matters of public importance: these provide a forum for debates on public policy matters. Members can move motions of censure against the government or against individual ministers. On most sitting days in both Houses there is a session called Question Time at which Members and Senators address questions to the Prime Minister and other ministers. Members and Senators can also present petitions from their constituents. Both Houses have an extensive system of committees in which draft bills are debated, evidence is taken and public servants are questioned.
Members of the Australian Parliament do not have legal immunity: they can be arrested and tried for any offence. They do, however, have Parliamentary privilege: they cannot be sued for anything they say about each other or about persons outside the Parliament. This privilege extends to reporting in the media of anything a Member or Senator says.
There is a legal offence called contempt of Parliament. A person who speaks or acts in a manner contemptuous of the Parliament or its members can be tried and, if convicted, imprisoned. The Parliament used to have the power of hearing such cases itself, and did so in the Browne-Fitzpatrick case of 1955. This power has now been delegated to the courts, but no-one has been prosecuted for this offence.
The House of Representatives
The 150 members of the House are elected from single-member geographic districts (popularly known as "seats" but officially known as "Commonwealth Electoral Divisions"), which are intended to represent reasonably contiguous regions of relatively equal population, each containing about 80 000 people. Voting is by the preferential system.
According to Australia's Constitution, the powers of both houses are nearly equal with the consent of both houses needed to pass legislation. In practice, however, the Lower House is far stronger in some ways, and far weaker in others.
By convention, the party or coalition with a majority in the lower house is invited by the Governor-General to form government; the leader of that party becomes the Prime Minister of Australia, and his or her senior colleagues become ministers responsible for the various government departments. Bills appropriating money can only be introduced or modified in the lower house, so only a party commanding the lower house can govern. In the rigid Australian party system, virtually all contentious votes fall along party lines, and the government always has a majority in those votes. The Opposition's only real role in the House is to present arguments why the government's policies and legislation are wrong, and to attempt to embarrass the government by asking difficult questions at Question Time.
In a reflection of the color scheme of the United Kingdom House of Commons, the House of Representatives is decorated in green.
The voting system for the Senate has changed twice since it was created. The original arrangement was a first past the post block voting mechanism. In 1919, it was changed to preferential block voting. Block voting tended to grant landslide majorities very easily. In 1946, the Australian Labor Party government won 30 out of the 33 Senate seats. In 1948, partially in response to this extreme situation, they introduced proportional representation in the Senate.
From a comparative governmental perspective, the Australian Senate is almost unique in that unlike the upper house in other Westminster system governments, the Senate is not a vestigial body with limited legislative power but rather plays and is intended to play an active role in legislation. Rather than being modelled after the House of Lords the Australian Senate was in part modelled after the United States Senate and was intended to give small rural states added voice in a Federal legislature, while also fulfilling the revising role of an upper house in the Westminster system.
Although the Prime Minister is answerable to, and selected from, the House of Representatives (the "lower house"), other ministers may be drawn from either house, and the two houses have almost equal legislative power. As with most upper chambers in bicameral parliaments, the Senate cannot introduce appropriation bills or impose taxation, that role being reserved for the popularly elected lower chamber. That degree of equality between the Australian Senate and House of Representatives is in part due to the age of the Australian constitution; it was enacted before the confrontation in 1909 between the British House of Commons and House of Lords over estate taxes, which ultimately resulted in the restriction of the powers of the House of Lords by the Parliament Act. The Senate thus reflects the pre-1911 Lords-Commons relationship: a house with theoretically wide powers that are by convention not widely used, but which in a crisis could be.
However, in one area the Senate possesses a highly sensitive power: the right to withdraw or block Supply, that is, government access to exchequer funding. In most democracies this power does not exist in the upper house. In the Australian case, probably because the constitution pre-dates the British Parliament Act 1911, that power, once possessed by the House of Lords, is still possessed by the Australian Senate. In strict constitutional terms this means the Government is also answerable to the Senate, since the loss of Supply in a parliamentary democracy requires the resignation of the government or ministry, or the calling of a general election; without access to exchequer funding a government cannot function and would face bankruptcy. However, as in the United Kingdom before the 1909 clash between the House of Commons and House of Lords over David Lloyd George's budget, a general convention developed that the upper house would not use its power to block exchequer funding, leaving that responsibility to the democratically representative lower house. Indeed, to break convention in this area is sometimes described as a parliamentary "nuclear option" because of its political, financial and governmental impact. In 1975, in a dispute strikingly similar to Britain's 1909 clash (an upper house breaking convention by withdrawing Supply, on the basis that it was reacting to a breach of another fundamental convention by the government, which the government denied), the Senate did refuse to pass a required financial measure. The resulting crisis produced a stand-off, a decision by the Prime Minister not to resign or seek a dissolution, and the eventual intervention of the Governor-General, who withdrew the commission of the Prime Minister (in effect dismissing him), appointed a minority government from the opposition in the House of Representatives, and called a general election.
In practice, however, most legislation (except for "Private Member's Bills") in the Australian Parliament is initiated by the Government, which has control over the lower house. It is then passed to the Senate, which may amend the bill or refuse to pass it. In the majority of cases, voting is along party lines. |
Fluorescence Excitation and Emission Fundamentals
Fluorescence is a member of the ubiquitous luminescence family of processes in which susceptible molecules emit light from electronically excited states created by either a physical (for example, absorption of light), mechanical (friction), or chemical mechanism. Generation of luminescence through excitation of a molecule by ultraviolet or visible light photons is a phenomenon termed photoluminescence, which is formally divided into two categories, fluorescence and phosphorescence, depending upon the electronic configuration of the excited state and the emission pathway. Fluorescence is the property of some atoms and molecules to absorb light at a particular wavelength and to subsequently emit light of longer wavelength after a brief interval, termed the fluorescence lifetime. The process of phosphorescence occurs in a manner similar to fluorescence, but with a much longer excited state lifetime.
The fluorescence process is governed by three important events, all of which occur on timescales that are separated by several orders of magnitude (see Table 1). Excitation of a susceptible molecule by an incoming photon happens in femtoseconds (10⁻¹⁵ seconds), while vibrational relaxation of excited state electrons to the lowest energy level is much slower and can be measured in picoseconds (10⁻¹² seconds). The final process, emission of a longer wavelength photon and return of the molecule to the ground state, occurs in the relatively long time period of nanoseconds (10⁻⁹ seconds). Although the entire molecular fluorescence lifetime, from excitation to emission, is measured in only billionths of a second, the phenomenon is a stunning manifestation of the interaction between light and matter that forms the basis for the expansive fields of steady state and time-resolved fluorescence spectroscopy and microscopy. Because of the tremendously sensitive emission profiles, spatial resolution, and high specificity of fluorescence investigations, the technique is rapidly becoming an important tool in genetics and cell biology.
Several investigators reported luminescence phenomena during the seventeenth and eighteenth centuries, but it was British scientist Sir George G. Stokes who first described fluorescence in 1852 and was responsible for coining the term in honor of the blue-white fluorescent mineral fluorite (fluorspar). Stokes also discovered the wavelength shift to longer values in emission spectra that bears his name. Fluorescence was first encountered in optical microscopy during the early part of the twentieth century by several notable scientists, including August Köhler and Carl Reichert, who initially reported that fluorescence was a nuisance in ultraviolet microscopy. The first fluorescence microscopes were developed between 1911 and 1913 by German physicists Otto Heimstädt and Heinrich Lehmann as a spin-off from the ultraviolet instrument. These microscopes were employed to observe autofluorescence in bacteria, animal, and plant tissues. Shortly thereafter, Stanislav Von Provazek launched a new era when he used fluorescence microscopy to study dye binding in fixed tissues and living cells. However, it wasn't until the early 1940s that Albert Coons developed a technique for labeling antibodies with fluorescent dyes, thus giving birth to the field of immunofluorescence. By the turn of the twenty-first century, the field of fluorescence microscopy was responsible for a revolution in cell biology, coupling the power of live cell imaging to highly specific multiple labeling of individual organelles and macromolecular complexes with synthetic and genetically encoded fluorescent probes.
Timescale Range for Fluorescence Processes (Table 1)
- Absorption (excitation): femtoseconds (10⁻¹⁵ seconds)
- Vibrational relaxation: picoseconds (10⁻¹² seconds)
- Emission and return to the ground state: nanoseconds (10⁻⁹ seconds)
Fluorescence is generally studied with highly conjugated polycyclic aromatic molecules that exist at any one of several energy levels in the ground state, each associated with a specific arrangement of electronic molecular orbitals. The electronic state of a molecule determines the distribution of negative charge and the overall molecular geometry. For any particular molecule, several different electronic states exist (illustrated as S(0), S(1), and S(2) in Figure 1), depending on the total electron energy and the symmetry of various electron spin states. Each electronic state is further subdivided into a number of vibrational and rotational energy levels associated with the atomic nuclei and bonding orbitals. The ground state for most organic molecules is an electronic singlet in which all electrons are spin-paired (have opposite spins). At room temperature, very few molecules have enough internal energy to exist in any state other than the lowest vibrational level of the ground state, and thus, excitation processes usually originate from this energy level.
The category of molecules capable of undergoing electronic transitions that ultimately result in fluorescence are known as fluorescent probes, fluorochromes, or simply dyes. Fluorochromes that are conjugated to a larger macromolecule (such as a nucleic acid, lipid, enzyme, or protein) through adsorption or covalent bonds are termed fluorophores. In general, fluorophores are divided into two broad classes, termed intrinsic and extrinsic. Intrinsic fluorophores, such as aromatic amino acids, neurotransmitters, porphyrins, and green fluorescent protein, are those that occur naturally. Extrinsic fluorophores are synthetic dyes or modified biochemicals that are added to a specimen to produce fluorescence with specific spectral properties.
Absorption, Excitation, and Emission
Absorption of energy by fluorochromes occurs between the closely spaced vibrational and rotational energy levels of the excited states in different molecular orbitals. The various energy levels involved in the absorption and emission of light by a fluorophore are classically presented by a Jablonski energy diagram (see Figure 1), named in honor of the Polish physicist Professor Alexander Jablonski. A typical Jablonski diagram illustrates the singlet ground (S(0)) state, as well as the first (S(1)) and second (S(2)) excited singlet states as a stack of horizontal lines. In Figure 1, the thicker lines represent electronic energy levels, while the thinner lines denote the various vibrational energy states (rotational energy states are ignored). Transitions between the states are illustrated as straight or wavy arrows, depending upon whether the transition is associated with absorption or emission of a photon (straight arrow) or results from a molecular internal conversion or non-radiative relaxation process (wavy arrows). Vertical upward arrows are utilized to indicate the instantaneous nature of excitation processes, while the wavy arrows are reserved for those events that occur on a much longer timescale.
Absorption of light occurs very quickly (approximately a femtosecond, the time necessary for the photon to travel a single wavelength) in discrete amounts termed quanta and corresponds to excitation of the fluorophore from the ground state to an excited state. Likewise, emission of a photon through fluorescence or phosphorescence is also measured in terms of quanta. The energy in a quantum (Planck's Law) is expressed by the equation:

E = hν = hc/λ

where E is the energy, h is Planck's constant, ν and λ are the frequency and wavelength of the incoming photon, and c is the speed of light. Planck's Law dictates that the radiation energy of an absorbed photon is directly proportional to the frequency and inversely proportional to the wavelength, meaning that shorter incident wavelengths possess a greater quantum of energy. The absorption of a photon of energy by a fluorophore, which occurs due to an interaction of the oscillating electric field vector of the light wave with charges (electrons) in the molecule, is an all or none phenomenon and can only occur with incident light of specific wavelengths known as absorption bands. If the absorbed photon contains more energy than is necessary for a simple electronic transition, the excess energy is usually converted into vibrational and rotational energy. However, if a collision occurs between a molecule and a photon having insufficient energy to promote a transition, no absorption occurs. The spectrally broad absorption band arises from the closely spaced vibrational energy levels plus thermal motion that enables a range of photon energies to match a particular transition. Because excitation of a molecule by absorption normally occurs without a change in electron spin-pairing, the excited state is also a singlet. In general, fluorescence investigations are conducted with radiation having wavelengths ranging from the ultraviolet to the visible regions of the electromagnetic spectrum (250 to 700 nanometers).
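As a quick numerical companion to Planck's Law (not part of the original article), the sketch below computes photon energies for a few wavelengths. The constants are standard CODATA values; the example wavelengths are arbitrary and chosen only to illustrate the inverse relationship between wavelength and quantum energy.

```python
# A quick numerical companion to Planck's Law, E = h*nu = h*c/lambda.
# Constants are standard CODATA values; the example wavelengths are
# arbitrary and chosen only to illustrate the inverse relationship.

PLANCK_H = 6.62607015e-34  # Planck's constant, J*s
LIGHT_C = 2.99792458e8     # speed of light, m/s
EV = 1.602176634e-19       # joules per electron-volt

def photon_energy(wavelength_nm: float) -> float:
    """Return the energy (in joules) of one photon of the given wavelength."""
    return PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)

energy = photon_energy(490.0)  # blue-green light
print(f"{energy:.3e} J = {energy / EV:.2f} eV")

# Shorter wavelengths carry proportionally larger quanta:
print(photon_energy(250.0) / photon_energy(700.0))  # 2.8x the energy
```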
With ultraviolet or visible light, common fluorophores are usually excited to higher vibrational levels of the first (S(1)) or second (S(2)) singlet energy state. One of the absorption (or excitation) transitions presented in Figure 1 (left-hand green arrow) occurs from the lowest vibrational energy level of the ground state to a higher vibrational level in the second excited state (a transition denoted as S(0) = 0 to S(2) = 3). A second excitation transition is depicted from the second vibrational level of the ground state to the highest vibrational level in the first excited state (denoted as S(0) = 1 to S(1) = 5). In a typical fluorophore, irradiation with a wide spectrum of wavelengths will generate an entire range of allowed transitions that populate the various vibrational energy levels of the excited states. Some of these transitions will have a much higher degree of probability than others, and when combined, will constitute the absorption spectrum of the molecule. Note that for most fluorophores, the absorption and excitation spectra are distinct, but often overlap and can sometimes become indistinguishable. In other cases (fluorescein, for example) the absorption and excitation spectra are clearly separated.
Immediately following absorption of a photon, several processes will occur with varying probabilities, but the most likely will be relaxation to the lowest vibrational energy level of the first excited state (S(1) = 0; Figure 1). This process is known as internal conversion or vibrational relaxation (loss of energy in the absence of light emission) and generally occurs in a picosecond or less. Because a significant number of vibration cycles transpire during the lifetime of excited states, molecules virtually always undergo complete vibrational relaxation during their excited lifetimes. The excess vibrational energy is converted into heat, which is absorbed by neighboring solvent molecules upon colliding with the excited state fluorophore.
An excited molecule exists in the lowest excited singlet state (S(1)) for periods on the order of nanoseconds (the longest time period in the fluorescence process by several orders of magnitude) before finally relaxing to the ground state. If relaxation from this long-lived state is accompanied by emission of a photon, the process is formally known as fluorescence. The closely spaced vibrational energy levels of the ground state, when coupled with normal thermal motion, produce a wide range of photon energies during emission. As a result, fluorescence is normally observed as emission intensity over a band of wavelengths rather than a sharp line. Most fluorophores can repeat the excitation and emission cycle many hundreds to thousands of times before the highly reactive excited state molecule is photobleached, resulting in the destruction of fluorescence. For example, the well-studied probe fluorescein isothiocyanate (FITC) can undergo excitation and relaxation for approximately 30,000 cycles before the molecule no longer responds to incident illumination.
Several other relaxation pathways that have varying degrees of probability compete with the fluorescence emission process. The excited state energy can be dissipated non-radiatively as heat (illustrated by the cyan wavy arrow in Figure 1), the excited fluorophore can collide with another molecule to transfer energy in a second type of non-radiative process (for example, quenching, as indicated by the purple wavy arrow in Figure 1), or a phenomenon known as intersystem crossing to the lowest excited triplet state can occur (the blue wavy arrow in Figure 1). The latter event is relatively rare, but ultimately results either in emission of a photon through phosphorescence or a transition back to the excited singlet state that yields delayed fluorescence. Transitions from the triplet excited state to the singlet ground state are forbidden, which results in rate constants for triplet emission that are several orders of magnitude lower than those for fluorescence.
Both of the triplet state transitions are diagrammed on the right-hand side of the Jablonski energy profile illustrated in Figure 1. The low probability of intersystem crossing arises from the fact that molecules must first undergo spin conversion to produce unpaired electrons, an unfavorable process. The primary importance of the triplet state is the high degree of chemical reactivity exhibited by molecules in this state, which often results in photobleaching and the production of damaging free radicals. In biological specimens, dissolved oxygen is a very effective quenching agent for fluorophores in the triplet state. The ground state oxygen molecule, which is normally a triplet, can be excited to a reactive singlet state, leading to reactions that bleach the fluorophore or exhibit a phototoxic effect on living cells. Fluorophores in the triplet state can also react directly with other biological molecules, often resulting in deactivation of both species. Molecules containing heavy atoms, such as the halogens and many transition metals, often facilitate intersystem crossing and are frequently phosphorescent.
The probability of a transition occurring from the ground state (S(0)) to the excited singlet state (S(1)) depends on the degree of similarity between the vibrational and rotational energy states when an electron resides in the ground state versus those present in the excited state, as outlined in Figure 2. The Franck-Condon energy diagram illustrated in Figure 2 presents the vibrational energy probability distribution among the various levels in the ground (S(0)) and first excited (S(1)) states for a hypothetical molecule. Excitation transitions (red lines) from the ground to the excited state occur in such a short timeframe (femtoseconds) that the internuclear distance associated with the bonding orbitals does not have sufficient time to change, and thus the transitions are represented as vertical lines. This concept is referred to as the Franck-Condon Principle. The wavelength of maximum absorption (red line in the center) represents the most probable internuclear separation in the ground state to an allowed vibrational level in the excited state.
At room temperature, thermal energy is not adequate to significantly populate excited energy states and the most likely state for an electron is the ground state (S(0)), which contains a number of distinct vibrational energy states, each with differing energy levels. The most favored transitions will be the ones where the rotational and vibrational electron density probabilities maximally overlap in both the ground and excited states (see Figure 2). However, incident photons of varying wavelength (and quanta) may have sufficient energy to be absorbed and often produce transitions from other internuclear separation distances and vibrational energy levels. This effect gives rise to an absorption spectrum containing multiple peaks (Figure 3). The wide range of photon energies associated with absorption transitions in fluorophores causes the resulting spectra to appear as broad bands rather than discrete lines.
The hypothetical absorption spectrum illustrated in Figure 3 (blue band) results from several favored electronic transitions from the ground state to the lowest excited energy state (labeled S(0) and S(1), respectively). Superimposed over the absorption spectrum are vertical lines (yellow) representing the transitions from the lowest vibrational level in the ground state to higher vibrational energy levels in the excited state. Note that transitions to the highest excited vibrational levels are those occurring at higher photon energies (lower wavelength or higher wavenumber). The approximate energies associated with the transitions are denoted in electron-volts (eV) along the upper abscissa of Figure 3. Vibrational levels associated with the ground and excited states are also included along the right-hand ordinate.
Scanning through the absorption spectrum of a fluorophore while recording the emission intensity at a single wavelength (usually the wavelength of maximum emission intensity) will generate the excitation spectrum. Likewise, exciting the fluorophore at a single wavelength (again, preferably the wavelength of maximum absorption) while scanning through the emission wavelengths will reveal the emission spectral profile. The excitation and emission spectra may be considered as probability distribution functions that a photon of given quantum energy will be absorbed and ultimately enable the fluorophore to emit a second photon in the form of fluorescence radiation.
Stokes Shift and the Mirror Image Rule
If the fluorescence emission spectrum of a fluorophore is carefully scrutinized, several important features become readily apparent. The emission spectrum is independent of the excitation energy (wavelength) as a consequence of rapid internal conversion from higher initial excited states to the lowest vibrational energy level of the S(1) excited state. For many of the common fluorophores, the vibrational energy level spacing is similar for the ground and excited states, which results in a fluorescence spectrum that strongly resembles the mirror image of the absorption spectrum. This is due to the fact that the same transitions are most favorable for both absorption and emission. Finally, in solution (where fluorophores are generally studied) the detailed vibrational structure is generally lost and the emission spectrum appears as a broad band.
As previously discussed, following photon absorption, an excited fluorophore will quickly undergo relaxation to the lowest vibrational energy level of the excited state. An important consequence of this rapid internal conversion is that all subsequent relaxation pathways (fluorescence, non-radiative relaxation, intersystem crossing, etc.) proceed from the lowest vibrational level of the excited state (S(1)). As with absorption, the probability that an electron in the excited state will return to a particular vibrational energy level in the ground state is proportional to the overlap between the energy levels in the respective states (Figure 2). Return transitions to the ground state (S(0)) usually occur to a higher vibrational level (see Figure 3), which subsequently reaches thermal equilibrium (vibrational relaxation). Because emission of a photon often leaves the fluorophore in a higher vibrational ground state, the emission spectrum is typically a mirror image of the absorption spectrum resulting from the ground to first excited state transition. In effect, the probability of an electron returning to a particular vibrational energy level in the ground state is similar to the probability of that electron's position in the ground state before excitation. This concept, known as the Mirror Image Rule, is illustrated in Figure 3 for the emission transitions (blue lines) from the lowest vibrational energy level of the excited state back to various vibrational levels in ground state. The resulting emission spectrum (red band) is a mirror image of the absorption spectrum displayed by the hypothetical chromophore.
In many cases, excitation by high energy photons leads to the population of higher electronic and vibrational levels (S(2), S(3), etc.), which quickly lose excess energy as the fluorophore relaxes to the lowest vibrational level of the first excited state (see Figure 1). Because of this rapid relaxation process, emission spectra are generally independent of the excitation wavelength (some fluorophores emit from higher energy states, but such activity is rare). For this reason, emission is the mirror image of the ground state to lowest excited state transitions, but not of the entire absorption spectrum, which may include transitions to higher energy levels. An excellent test of the mirror image rule is to examine absorption and emission spectra in a linear plot of the wavenumber (the reciprocal of wavelength or the number of waves per centimeter), which is directly proportional to the frequency and quantum energy. When presented in this manner (see Figure 3), symmetry between extinction coefficients and intensity of the excitation and emission spectra as a function of energy yield mirrored spectra when reciprocal transitions are involved.
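To make the wavenumber test concrete, here is a minimal helper (an illustration added here, not from the original article) that converts wavelengths in nanometers to wavenumbers in cm⁻¹ using the identity 1 cm = 10⁷ nm; the sample wavelengths are hypothetical.

```python
# A small helper for the wavenumber test described above. Because
# 1 cm = 1e7 nm, a wavelength in nanometers converts to a wavenumber
# in cm^-1 as shown; the sample wavelengths are hypothetical.

def to_wavenumber(wavelength_nm: float) -> float:
    """Convert a wavelength in nanometers to a wavenumber in cm^-1."""
    return 1e7 / wavelength_nm

# On a wavenumber (energy-proportional) axis, mirrored absorption and
# emission transitions sit symmetrically about the 0-0 transition:
for nm in (450, 490, 510, 550):
    print(f"{nm} nm -> {to_wavenumber(nm):.0f} cm^-1")
```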
Presented in Figure 4 are the absorption and emission spectra for quinine, the naturally occurring antimalarial agent (and first known fluorophore) whose fluorescent properties were originally described by Sir John Frederick William Herschel in 1845. Quinine does not adhere to the mirror image rule, as is evident from the single peak in the emission spectrum (at 460 nanometers), which does not mirror the two peaks at 310 and 350 nanometers featured in the bimodal absorption spectrum. The shorter wavelength ultraviolet absorption peak (310 nanometers) is due to an excitation transition to the second excited state (from S(0) to S(2)) that quickly relaxes to the lowest excited state (S(1)). As a consequence, fluorescence emission occurs exclusively from the lowest excited singlet state (S(1)), resulting in a spectrum that mirrors the ground to first excited state transition (350 nanometer peak) in quinine and not the entire absorption spectrum.
Because the energy associated with fluorescence emission transitions (see Figures 1-4) is typically less than that of absorption, the resulting emitted photons have less energy and are shifted to longer wavelengths. This phenomenon is generally known as Stokes Shift and occurs for virtually all fluorophores commonly employed in solution investigations. The primary origin of the Stokes shift is the rapid decay of excited electrons to the lowest vibrational energy level of the S(1) excited state. In addition, fluorescence emission is usually accompanied by transitions to higher vibrational energy levels of the ground state, resulting in further loss of excitation energy to thermal equilibration of the excess vibrational energy. Other events, such as solvent orientation effects, excited-state reactions, complex formation, and resonance energy transfer can also contribute to longer emission wavelengths.
In practice, the Stokes shift is measured as the difference between the maximum wavelengths in the excitation and emission spectra of a particular fluorochrome or fluorophore. The size of the shift varies with molecular structure, but can range from just a few nanometers to over several hundred nanometers. For example, the Stokes shift for fluorescein is approximately 20 nanometers, while the shift for quinine is 110 nanometers (see Figure 4) and that for the porphyrins is over 200 nanometers. The existence of Stokes shift is critical to the extremely high sensitivity of fluorescence imaging measurements. The red emission shift enables the use of precision bandwidth optical filters to effectively block excitation light from reaching the detector so the relatively faint fluorescence signal (having a low number of emitted photons) can be observed against a low-noise background.
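The measurement described in this paragraph is easy to express in code. The sketch below uses hypothetical (wavelength, intensity) pairs loosely modeled on fluorescein; the data points are invented for illustration only.

```python
# A minimal sketch of the measurement described above: the Stokes shift
# as the difference between emission and excitation maxima. The spectra
# are hypothetical (wavelength_nm, relative_intensity) pairs loosely
# modeled on fluorescein; they are invented for illustration only.

def peak_wavelength(spectrum):
    """Return the wavelength at which the recorded intensity peaks."""
    return max(spectrum, key=lambda point: point[1])[0]

excitation = [(450, 0.2), (470, 0.6), (490, 1.0), (510, 0.4)]
emission = [(500, 0.3), (510, 1.0), (530, 0.7), (550, 0.2)]

shift = peak_wavelength(emission) - peak_wavelength(excitation)
print(f"Stokes shift: {shift} nm")  # 20 nm, as quoted for fluorescein
```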
Brian Herman - Department of Cellular and Structural Biology, University of Texas Health Science Center, 7703 Floyd Curl Drive, San Antonio, Texas 78229.
Joseph R. Lakowicz - Center for Fluorescence Spectroscopy, Department of Biochemistry and Molecular Biology, University of Maryland and University of Maryland Biotechnology Institute (UMBI), 725 West Lombard Street, Baltimore, Maryland 21201.
Douglas B. Murphy - Department of Cell Biology and Anatomy and Microscope Facility, Johns Hopkins University School of Medicine, 725 N. Wolfe Street, 107 WBSB, Baltimore, Maryland 21205.
Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310. |
King Tutankhaten

Born circa 1341 B.C.E., King Tut was the 12th king of the 18th Egyptian dynasty, in power from 1361 B.C.E. to 1352 B.C.E. During his reign, powerful advisers restored the traditional Egyptian religion which had been set aside by his predecessor Akhenaten, who had led the "Amarna Revolution." After his death at age 18, he disappeared from history until the discovery of his tomb in 1922. Since then, studies of his tomb and remains have revealed much information about his life and times.
Probably one of the best-known pharaohs of ancient Egypt, Tutankhamun was nevertheless a minor figure in ancient Egyptian history. The boy king of the 18th Egyptian dynasty was the son of the powerful Akhenaten, also known as Amenhotep IV, and most likely one of Akhenaten's sisters. His short reign of eight to nine years accomplished little, but the discovery of his nearly intact tomb in 1922 has allowed researchers to unravel many of the mysteries of his life and death.
Early Life
Tutankhamun was born circa 1341 B.C.E. and given the name Tutankhaten, meaning "the living image of Aten." At this time, ancient Egypt was going through great social and political upheaval. Tutankhaten's father had forbidden the worship of many gods in favor of worshiping one, Aten, the sun disc. For this, he is known as the "heretic king." Historians differ on how extensive the change from polytheism to monotheism was, or whether Akhenaten was only attempting to elevate Aten above the other gods. It does seem, however, that his intent was to reduce the power of the priests and shift the traditional temple-based economy to a new regime run by local government administrators and military commanders.
As the populace was forced to honor Aten, the religious conversion threw ancient Egyptian society into chaos. The capital was moved from Thebes to the new capital of Amarna. Akhenaten put all of his efforts into the religious transition, neglecting domestic and foreign affairs. As the power struggle between old and new intensified, Akhenaten became more autocratic and the regime more corrupt. Following a 17-year reign, he was gone, probably forced to abdicate, and he died soon after. His 9-year-old son, Tutankhaten, took over around 1332 B.C.E.
Boy to Pharaoh
The same year that Tutankhaten took power, he married Ankhesenamun, his half sister and the daughter of Akhenaten and Nefertiti. It is known that the young couple had two daughters, both stillborn. Due to Tutankhaten's young age when he assumed power, the first years of his reign were probably controlled by an elder known as Ay, who bore the title of Vizier. Ay was assisted by Horemheb, ancient Egypt's top military commander at the time. Both men reversed Akhenaten's decree to worship Aten, in favor of the traditional polytheistic beliefs. Tutankhaten changed his name to Tutankhamun, which means "the living image of Amun," and had the royal court moved back to Thebes.
Foreign policy had also been neglected during Akhenaten's reign, and Tutankhamun sought to restore better relations with ancient Egypt's neighbors.
While there is some evidence to suggest that Tutankhamun's diplomacy was successful, during his reign, battles took place between Egypt and the Nubians and Asiatics over territory and control of trade routes. Tutankhamun was trained in the military, and there is some evidence that he was good at archery. However, it is unlikely that he saw any military action.
Tutankhamun sought to restore the old order in hopes that the gods would once again look favorably on Egypt. He ordered the repair of the holy sites and continued construction at the temple of Karnack. He also oversaw the completion of the red granite lions at Soleb.
Death and Burial
Because Tutankhamun and his wife had no children, upon his death Ankhesenamun contacted the king of the Hittites, asking for one of his sons as a husband. The Hittite king sent a candidate, but he died during the journey, most likely assassinated before he reached the royal palace. This attempt to forge an alliance with a foreign power was most likely prevented by Ay and Horemheb, who were still in control behind the scenes. Evidence shows that Ankhesenamun later married Ay, before disappearing from history.
Tutankhamun was buried in a tomb in the Valley of the Kings. It is believed that his early death necessitated a hasty burial in a smaller tomb, most likely built for a lesser noble. The body was preserved in the traditional fashion of mummification. Seventy days after his death, Tutankhamun's body was laid to rest and the tomb was sealed. There are no known records of Tutankhamun after his death, and, as a result, he remained virtually unknown until the 1920s. Even the location of his tomb was lost, as its entrance had been covered by the debris from a later-built tomb building.
King Tut's Tomb Discovered
Much of what is known about Tutankhamun, better known today as King Tut, derives from the discovery of his tomb in 1922. British archaeologist Howard Carter had begun excavating in Egypt in 1891, and after World War I, he began an intensive search for Tutankhamun's tomb in the Valley of the Kings. On November 26, 1922, Carter and fellow archaeologist George Herbert, the Earl of Carnarvon, entered the interior chambers of the tomb. To their amazement, they found much of its contents and structure miraculously intact. Inside one of the chambers, murals were painted on the walls that told the story of Tutankhamun's funeral and his journey to the afterworld. Also in the room were various artifacts for his journey—oils, perfumes, toys from his childhood, precious jewelry, and statues of gold and ebony.
The most fascinating item found was the stone sarcophagus containing three coffins, one inside the other, with a final coffin made of gold. When the lid of the third coffin was raised, King Tut's royal mummy was revealed, preserved for more than 3,000 years. As archaeologists examined the mummy, they found other artifacts, including bracelets, rings and collars. Over the next 17 years, Carter and his associates carefully excavated the four-room tomb, uncovering an incredible collection of thousands of priceless objects. |
Type: Alphabet with inherent vowel /a/
Languages: Meroitic and possibly Old Nubian
Time period: 300 BC to 600 AD
The Meroitic script is an alphabetic script, used to write the Meroitic language of the Kingdom of Meroë in Sudan. It was developed in the Napatan Period (about 700–300 BCE), and first appears in the 2nd century BCE. For a time, it was also possibly used to write the Nubian language of the successor Nubian kingdoms. Its use was described by the Greek historian Diodorus Siculus (c. 50 BCE).
Although the Meroitic alphabet continued to be used by the Nubian kingdoms that succeeded the Kingdom of Meroë, it was replaced by the Greek alphabet with the Christianization of Nubia, in the sixth century. The Nubian form of the Greek alphabet retained three Meroitic letters.
The script was deciphered in 1909 by Francis Llewellyn Griffith, a British Egyptologist, based on the Meroitic spellings of Egyptian names. However, the Meroitic language itself has yet to be translated. In late 2008, the first complete royal dedication was found, which may help confirm or refute some of the current hypotheses.
The longest inscription found is in the Museum of Fine Arts, Boston.
Form and values
There were two graphic forms of the Meroitic alphabet: monumental hieroglyphs, and a cursive. The majority of texts are cursive. Unlike Egyptian writing, there was a simple one-to-one correspondence between the two forms of Meroitic, except that in the cursive form, consonants are joined in ligatures to a following vowel i.
The direction of cursive writing was from right to left, top to bottom, while the monumental form was written top to bottom in columns going right to left. Monumental letters were oriented to face the beginning of the text, a feature inherited from their hieroglyphic origin.
Being primarily alphabetic, the Meroitic script worked differently than Egyptian hieroglyphs. Some scholars, such as Harald Haarmann, believe that the vowel letters of Meroitic are evidence for an influence of the Greek alphabet in its development.
There were 23 letters in the Meroitic alphabet, including four vowels. In the transcription established by Griffith and later Hintze, they are:
- a appears only at the beginning of a word
- e was used principally in foreign names
- i and o were used like vowels in the Latin or Greek alphabets.
The fourteen or so consonants are conventionally transcribed:
- ya, wa, ba, pa, ma, na, ra, la, cha, kha, ka, qa, sa, da.
These values were established from evidence such as Egyptian names borrowed into Meroitic. That is, the Meroitic letter which looks like an owl in monumental inscriptions, or like a numeral three in cursive Meroitic, we transcribe as m, and it is believed to have been pronounced as [m]. However, this is a historical reconstruction, and while m is not in much doubt, the pronunciations of some of the other letters are much less certain.
The three vowels i a o were presumably pronounced /i a u/. Kh is thought to have been a velar fricative, as the ch in Scottish loch or German Bach. Ch was a similar sound, perhaps uvular as g in Dutch dag or palatal as in German ich. Q was perhaps a uvular stop, as in Arabic Qatar. S may have been like s in sun. An /n/ was omitted in writing when it occurred before any of several other consonants within a word. D is uncertain. Griffith first transcribed it as r, and Rowan believes that was closer to its actual value. It corresponds to Egyptian and Greek /d/ when initial or after an /n/ (unwritten in Meroitic), but to /r/ between vowels, and does not seem to have affected the vowel a the way the other alveolar obstruents t n s did.
Comparing late documents with early ones, it is apparent that the sequences sel- and nel-, which Rowan takes to be /sl/ and /nl/ and which commonly occurred with the determiner -l-, assimilated over time to t and l (perhaps /t/ and /ll/).
Meroitic was a type of alphabet called an abugida: The vowel /a/ was not normally written; rather it was assumed whenever a consonant was written alone. That is, the single letter m was read /ma/. All other vowels were overtly written: the letters mi, for example, stood for the syllable /mi/, just as in the Latin alphabet. This system is broadly similar to the Indian abugidas that arose around the same time as Meroitic.
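The reading rule just described can be illustrated with a toy expansion function. This is an assumption-laden sketch, not a decipherment tool: it handles only the inherent-/a/ rule and the explicit vowel letters, and it ignores word-initial a and the special ne/se/te/to signs discussed in the next section.

```python
# A toy expansion of the abugida reading rule: a consonant alone carries
# the inherent vowel /a/; an explicit vowel letter (e, i, o) overrides it.
# Illustrative only -- it ignores word-initial a and the special
# ne/se/te/to signs discussed below.

VOWELS = {"a", "e", "i", "o"}

def read_abugida(letters):
    """Expand transliterated Meroitic letters into syllables."""
    syllables = []
    for i, letter in enumerate(letters):
        if letter in VOWELS:
            continue  # vowel already attached to the preceding consonant
        following = letters[i + 1] if i + 1 < len(letters) else None
        vowel = following if following in VOWELS else "a"
        syllables.append(letter + vowel)
    return syllables

print(read_abugida(["m"]))       # ['ma'] -- m alone is read /ma/
print(read_abugida(["m", "i"]))  # ['mi'] -- explicit vowel overrides /a/
```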
Griffith and Hintze
Griffith identified the essential abugida nature of Meroitic when he deciphered the script in 1911. He noted in 1916 that certain consonant letters were never followed by a vowel letter, and varied with other consonant letters. He interpreted them as syllabic, with the values ne, se, te, and to. Ne, for example, varied with na. Na could be followed by the vowels i and o to write the syllables ni and no, but was never followed by the vowel e.
He also noted that the vowel e was often omitted. It often occurred at the ends of Egyptian loanwords that had no final vowel in Coptic. He believed that e functioned both as a schwa [ə] and a "killer" mark that marked the absence of a vowel. That is, the letter m by itself was read [ma], while the sequence me was read [mə] or [m]. This is how Ethiopic works today. Later scholars such as Hintze and Rilly accepted this argument, or modified it so that e could represent either [e] or schwa–zero.
It has long been puzzling to epigraphers why the syllabic principles that underlie the script, where every consonant is assumed to be followed by a vowel a, should have special letters for consonants followed by e. Such a mixed abugida–syllabary is not found among the abugidas of India, nor in Ethiopic. Old Persian cuneiform script is somewhat similar, with more than one inherent vowel, but is not an abugida because the non-inherent vowels are written with full letters, and are often redundantly written after an inherent vowel other than /a/.
Millet and Rowan
Millet (1970) proposed that Meroitic e was in fact an epenthetic vowel used to break up Egyptian consonant clusters that could not be pronounced in the Meroitic language, or appeared after final Egyptian consonants such as m and k which could not occur finally in Meroitic. Rowan (2006) takes this further and proposes that the glyphs se, ne, and te were not syllabic at all, but stood for consonants /s/, /n/, and /t/ at the end of a word or morpheme (as when followed by the determiner -l); she proposes that Meroitic finals were restricted to alveolar consonants such as these. An example is the Coptic word ⲡⲣⲏⲧ prit "the agent", which in Meroitic was transliterated perite (pa-e-ra-i-te). If Rowan is right and this was pronounced /pᵊrit/, then Meroitic would have been a fairly typical abugida. She proposes that Meroitic had three vowels, /a i u/, and that /a/ was raised to something like [e] or [ə] after the alveolar consonants /t s n/, explaining the lack of orthographic t, s, n followed by the vowel letter e.
Very rarely does one find the sequence CVC, where the C's are both labials or both velars. This is similar to consonant restrictions found throughout the Afro-Asiatic language family, suggesting to Rowan that there is a good chance Meroitic was an Afro-Asiatic language like Egyptian.
Rowan is not convinced that the system was completely alphabetic, and suggests that the glyph te also may have functioned as a determinative for place names, as it frequently occurs at the end of place names that are known not to have a /t/ in them. Similarly, ne may have marked royal or divine names.
Meroitic scripts, both Hieroglyphic and Cursive, were added to the Unicode Standard in January 2012 with the release of version 6.1.
The Unicode block for Meroitic Hieroglyphs is U+10980–U+1099F. The Unicode block for Meroitic Cursive is U+109A0–U+109FF.
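A minimal sketch showing the two blocks programmatically; note that whether the glyphs actually render depends on an installed font such as Aegyptus (mentioned below), and that some code points inside the blocks are unassigned.

```python
# The two Meroitic blocks quoted above, expressed as code-point ranges.
# chr() returns the character for any code point; whether a glyph
# actually renders depends on an installed font (e.g. Aegyptus), and
# some code points inside the blocks are unassigned.

MEROITIC_HIEROGLYPHS = range(0x10980, 0x109A0)  # U+10980..U+1099F
MEROITIC_CURSIVE = range(0x109A0, 0x10A00)      # U+109A0..U+109FF

for block_name, block in [("Hieroglyphs", MEROITIC_HIEROGLYPHS),
                          ("Cursive", MEROITIC_CURSIVE)]:
    first = block[0]
    print(f"Meroitic {block_name}: starts at U+{first:04X} -> {chr(first)}")
```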
Meroitic Hieroglyphs: Official Unicode Consortium code chart (PDF)
Meroitic Cursive: Official Unicode Consortium code chart (PDF)
As a Meroitic Unicode font, you may use Aegyptus, which can be downloaded from Unicode Fonts for Ancient Scripts.
- The constructed language Nuwaubic, sometimes called Meroitic.
- "Sudan statues show ancient script" (BBC 16 December 2008)
- Everson, Michael (2009-07-29). "N3665: Proposal for encoding the Meroitic Hieroglyphic and the Meroitic Cursive scripts in the SMP of the UCS" (PDF). Working Group Document, ISO/IEC JTC1/SC2/WG2.
- Rowan, Kirsty (2006). "A phonological investigation into the Meroitic 'syllable' signs ne and se and their implications on the e sign". SOAS Working Papers in Linguistics, Volume 14. pp. 131–167. Retrieved 2013-10-24.
Török, László (1998). The Kingdom of Kush: Handbook of the Napatan-Meróitic Civilization (Handbook of Oriental Studies/Handbuch Der Orientalistik). New York: Brill Academic Publishers. ISBN 90-04-10448-8.
- Meroitic - AncientScripts
- Meroitic Writing
- Meroitic at Omniglot
- Meroitic font
- Examples of Meroitic script |
The name rotifer means "wheel-bearer," from the Latin rota ("wheel") and fera ("to bear"). These animals gained their name from the ciliated region around the head, called the corona, which is used for locomotion and food acquisition. The cilia are arranged in two circles and, when beating, resemble two spinning wheels.
Rotifers are cylindrical and unsegmented. They range in size from 0.1 to 1 mm in length. Rotifers are easily distinguished from other zooplankton by the corona (a ciliated region around the head used for locomotion and food acquisition) and the mastax (a muscular pharynx with a set of hard jaws). A well-developed, transparent cuticle covers their body, so most rotifers appear transparent, but some may appear green, orange, red or brown depending on the contents of the digestive tract. The cuticle may be quite thick, but is thinner around the corona and the foot to permit some flexibility. Many rotifers have projections or spines that provide protection against predators. The posterior end of the body is referred to as the foot. The foot usually has toes and adhesive glands that are used to anchor the rotifer in place temporarily. Sessile forms secrete a cement from the foot to anchor themselves permanently in place.
Rotifers may be sessile or free-swimming, and some form colonies. All rotifers swim during some portion of their lives. Swimming is especially important for the dispersal of the larval forms of sessile rotifers. They swim by beating the cilia of the corona which drives them forward in a helical pattern. Some rotifers creep along the substrate by temporarily attaching their foot to the substrate, elongating the body forward and attaching their anterior end to the substrate, then releasing the foot and drawing it in towards the head.
The sensory system in rotifers is very simple, consisting of a cerebral ganglion (brain) and a few ganglia in the mastax and foot. They have three types of sense organs: mechanoreceptors, chemoreceptors and photoreceptors. Eyespots near the brain act as photoreceptors, but sessile rotifers often lose them during metamorphosis because they no longer need them. Pores on the corona act as chemoreceptors, and bristles and antennae are mechanoreceptors.
Rotifers exchange gases across their integument by diffusion. Osmoregulation is accomplished with a protonephridial system containing flame cells and tubules.
Excretion uses the same protonephridial system as osmoregulation. The tubules drain wastes into the urinary bladder, from which they move into the cloaca to be expelled.
Rotifers reproduce both sexually and parthenogenetically. If they reproduce sexually, the males either insert sperm into the cloaca of a female, or inject sperm directly into the pseudocoelom (hypodermic impregnation). The eggs that develop in this way produce encapsulating membranes that allow them to survive adverse conditions. Parthenogenesis is common in rotifers that live in freshwater habitats that undergo severe seasonal changes. In favorable conditions, amictic females produce diploid eggs by mitosis which do not need to be fertilized. If these eggs are exposed to changing day lengths, temperature changes, decreasing food resources, or increasing population density, they develop into mictic females that produce haploid eggs. These haploid eggs develop into haploid males, or can be fertilized by haploid males to produce zygotes encased in thick walls that are resistant to low temperature, desiccation, and other adverse conditions. These zygotes will themselves hatch into amictic females when favorable environmental conditions return. Most rotifers are oviparous, but some are ovoviviparous.
Most rotifers are found in freshwater, but they also occur in marine and moist terrestrial habitats. In freshwater they are often found at densities of over 5000 individuals per litre. Most rotifers are planktonic, but sessile forms are not uncommon.
Rotifers mostly feed on the small particles and organisms that are brought into the mouth by the beating of the cilia in the corona. Some rotifers are raptorial and capture prey with the jaws in their mastax. After food is captured, the mastax grinds the food into smaller pieces before it passes into the rest of the digestive system.
Rotifers are an important food source for other rotifers, copepods, malacostracans, insect larvae and fish. Some rotifers have various spines that help protect them from predators.
Rotifers are born with a set number of cells (a condition known as eutely) and cannot develop any more. Most females have only about 900 cells!
Rotifers exhibit developmental polymorphism which means that under different ecological conditions they develop into different forms. |
We will publish here the portions one cannot afford to miss while preparing for the CBSE Class 12 Physics examination.
The chapters will be posted below. Clicking the name of a chapter will open a new page containing a set of collected questions; clicking a question will open its complete solution.
Please note that this system is under preparation. We hope to complete it within a month. Each link will be activated only after the project is complete.
- CURRENT ELECTRICITY
- MAGNETIC EFFECTS OF CURRENT
- ELECTROMAGNETIC INDUCTION
- ALTERNATING CURRENT
- ELECTROMAGNETIC WAVES
- DUAL NATURE OF MATTER AND RADIATION
- ATOMS AND NUCLEI
- ELECTRONIC DEVICES
- COMMUNICATION SYSTEMS
ELECTRIC CHARGES AND FIELD
Electric Charges and Field Test Paper
Max. Marks: 30 Time: 75 minutes
(Please take care to complete the test in the prescribed time limit)
One Mark Questions
- Define electric flux.
- How is the force between two charges affected when dielectric constant of the medium in which they are placed increases?
- Name the physical quantity whose SI unit is NC⁻¹.
- Why do two electric lines of force never intersect each other?
- Define dielectric constant of a medium.
- What is an ideal dipole?
- Write the SI unit of electric dipole moment.
- Calculate the number of electrons making a charge of -0.1 mC.
2 Marks Questions
- Vehicles carrying inflammable materials have metallic ropes (or chains) touching the ground while in motion. Explain why this is necessary.
- Derive an expression for the electric field intensity at a distance ‘r’ from a point charge ‘q’.
- Two point charges 4 μC and 2 μC are separated by a distance of 1 m in air. At what point on the line joining the two charges is the electric field intensity zero?
- Derive an expression for the work done in rotating a dipole through an angle θ in a uniform electric field.
3 Marks Questions
- Show that, in a uniform electric field, a dipole experiences only a torque, but no net force. Derive an expression for the torque experienced by a dipole in a uniform electric field.
- Write three points of difference between mass and charge.
- An electric dipole of length 2 cm is placed with its axis making an angle of 60 degrees with a uniform electric field of 10⁵ N/C. If it experiences a torque of 8√3 N m, calculate the
(i) magnitude of charge on the dipole, and
(ii) potential energy of the dipole.
5 Marks Questions
- Using Gauss’ theorem, derive an expression for the electric field intensity at a point due to a uniformly charged spherical conducting shell when the point is
(a) outside the sphere
(b) on the sphere
(c) inside the sphere |
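The numerical questions above can be sanity-checked with a short script. The sketch below is not part of the test paper; it assumes SI units and the standard results τ = pE sin θ, U = −pE cos θ, and the Gauss's-law field of a charged conducting shell.

```python
import math

EPS0 = 8.854e-12                 # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)     # Coulomb constant, ~8.99e9 N m^2/C^2

# One-mark Q8: number of electrons in a charge of -0.1 mC.
print(0.1e-3 / 1.602e-19)        # ~6.24e14 electrons

# Two-marks Q3: zero-field point between +4 uC and +2 uC charges 1 m apart.
# Solving 4/x**2 = 2/(1-x)**2 gives x = sqrt(2)/(1 + sqrt(2)) from the 4 uC charge.
print(math.sqrt(2) / (1 + math.sqrt(2)))   # ~0.586 m

# Three-marks Q3: dipole of length 2 cm at 60 degrees in E = 1e5 N/C,
# experiencing a torque of 8*sqrt(3) N m.
tau, E, length, theta = 8 * math.sqrt(3), 1e5, 0.02, math.radians(60)
p = tau / (E * math.sin(theta))  # dipole moment, from tau = p E sin(theta)
print(p / length)                # charge magnitude: 8e-3 C = 8 mC
print(-p * E * math.cos(theta))  # potential energy: -8 J

# Five-marks Q1: field of a uniformly charged conducting shell (Gauss's law).
def shell_field(q, R, r):
    """Zero inside the shell; K*q/r**2 on and outside the surface."""
    return 0.0 if r < R else K * q / r**2
```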
Coral for Studying Past Climate
Climate scientists use "proxy data" to study climates of the past, before humans with thermometers began keeping temperature records. These "proxies" include tree rings, layers within ice cores pulled from glaciers and ice sheets, growth layers in coral, and layers of sediments from the bottoms of lakes and oceans.
Each year, coral colonies add a new layer of growth onto existing coral "skeletons". Climate scientists can deduce data about past climates from these annual growth rings in much the same way they look at tree rings. The proportions of oxygen isotopes in the coral tell us about ocean temperatures when that coral was formed. Climate data from coral is the only major source of paleoclimate information from Earth's tropical regions. Oceanographers use special tools to extract cores from coral skeletons, which they study in their labs.
Hypertension is the term used to describe high blood pressure. Blood pressure measurements are the result of the force of the blood produced by the heart and the size and condition of the arteries.
Blood pressure readings are measured in millimeters of mercury (mmHg) and usually given as two numbers. For example, 120 over 80 (written as 120/80 mmHg).
Either or both of these numbers may be too high.
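To make the "either or both" rule concrete, here is a minimal sketch that classifies a reading using the JNC 7 thresholds — one common scheme, assumed here purely for illustration, since guidelines differ.

```python
def classify_bp(systolic, diastolic):
    """Classify a blood pressure reading (mmHg) per the JNC 7 categories."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"

print(classify_bp(118, 76))   # -> normal
print(classify_bp(120, 80))   # -> prehypertension (120/80 sits on the boundary)
print(classify_bp(150, 85))   # -> stage 1: either number alone can be too high
```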
What causes hypertension?
There are two types of high blood pressure:
1. Primary (essential) hypertension
In 90 to 95 percent of high blood pressure cases in adults, there's no identifiable cause. This type of high blood pressure, called essential hypertension or primary hypertension, tends to develop gradually over many years.
2. Secondary hypertension
The other 5 to 10 percent of high blood pressure cases are caused by an underlying condition. This type of high blood pressure, called secondary hypertension, tends to appear suddenly and cause higher blood pressure than does primary hypertension. Various conditions and medications can lead to secondary hypertension.
Many factors can affect blood pressure. High blood pressure can affect all types of people, but you have a higher risk if you have a family history of the disease. High blood pressure is more common in African Americans than Caucasians. Smoking, obesity, and diabetes are all risk factors for hypertension.
Most of the time, no cause is identified. This is called essential hypertension.
High blood pressure that results from a specific condition, habit, or medication is called secondary hypertension. Too much salt in your diet can lead to high blood pressure, and secondary hypertension may also be due to various underlying conditions and medications.
Leaning Tower of Pisa, Italian Torre Pendente di Pisa, medieval structure in Pisa, Italy, that is famous for the settling of its foundations, which caused it to lean 5.5 degrees (about 15 feet [4.5 metres]) from the perpendicular in the late 20th century. Extensive work was subsequently done to straighten the tower, and its lean was ultimately reduced to less than 4.0 degrees.
The bell tower, begun in 1173 as the third and final structure of the city’s cathedral complex, was designed to stand 185 feet (56 metres) high and was constructed of white marble. Three of its eight stories had been completed when the uneven settling of the building’s foundations in the soft ground became noticeable. At that time, war broke out between the Italian city-states, and construction was halted for almost a century. This pause allowed the tower’s foundation to settle and likely prevented its early collapse.
Giovanni di Simone, the engineer in charge when construction resumed, sought to compensate for the lean by making the new stories slightly taller on the short side, but the extra masonry caused the structure to sink still further. The project was plagued with interruptions, as engineers sought solutions to the leaning problem, but the tower was ultimately topped out in the 14th century. Twin spiral staircases lined the tower’s interior, with 294 steps leading from the ground to the bell chamber (one staircase incorporates two additional steps to compensate for the tower’s lean). Over the next four centuries the tower’s seven bells were installed; the largest weighed more than 3,600 kg (nearly 8,000 pounds). By the early 20th century, however, the heavier bells were silenced, as it was believed that their movement could potentially worsen the tower’s lean.
The foundations have been strengthened by the injection of cement grout and various types of bracing and reinforcement, but in the late 20th century the structure was still subsiding, at the rate of 0.05 inch (1.2 mm) per year, and was in danger of collapse. In 1990 the tower was closed and all the bells silenced as engineers undertook a major straightening project. Earth was siphoned from underneath the foundations, decreasing the lean by 17 inches (44 cm) to 13.5 feet (4.1 metres); the work was completed in May 2001, and the structure was reopened to visitors. The tower continued to straighten without further excavation, until in May 2008 sensors showed that the motion had finally stopped, at a total improvement of 19 inches (48 cm). Engineers expected the tower to remain stable for at least 200 years. |
On this day in 1939, months of brown-nosing Hitler’s Nazis by the winners of World War One – Britain, France and America, AKA the Western Powers – resulted not in a safer Europe but in the Invasion of Poland. At 4.45 am, 1.5 million German troops, ably abetted by hundreds of screaming Stuka dive-bombers, stormed across the Polish frontiers from the north, south and west, destroying from the air much of the country’s considerable air force before its pilots even had time to get airborne. In Polish towns and villages, the Nazis ensured maximum civic disruption by rounding up and removing all local intellectuals and authority figures – the mayor, the teachers, librarians and so on – who were later shot without trial. Within weeks, the equally ruthless Joseph Stalin would send his own Soviet tanks and troops into Eastern Poland, like a belligerent carrion crow determined to wrest away his own portion of the lion’s victim.
Nazi Germany’s expansion had begun in 1938 with the annexation of Austria and continued with the occupation of the Sudetenland and then all of Czechoslovakia in 1939. Despite various allied pacts and agreements, these invasions were allowed to occur without resistance or challenge from the world’s major powers. Hitler therefore proceeded with his plan to conquer Poland – which was key to his vision of Lebensraum or “living space” for the German people. According to his plan, the “racially superior” Germans would colonise the territory and the natives would be enslaved.
Hitler gambled that his invasion of Poland would, like the occupation of Czechoslovakia, be accomplished without igniting hostilities with the major powers. Two days later, Britain and France could no longer maintain their policy of appeasement and acquiescence and declared war on Germany.
World War II had begun. |
The major steps involved in the process of cable sizing are:
1. Gather data about the cable, its installation conditions, and the load that it will carry.
Number of phases - single phase or three phase.
Voltage - rated voltage, allowed voltage.
Full load current (A) or power.
Full load power factor.
Length of line - from source to load. This length should be as close as possible to the actual route of the cable and include enough contingency for vertical drops/rises and termination of the cable tails.
Basic Cable Data
Type of conductor material - Cu or Al
Insulation of the cable - PVC, XLPE, EPR (IEC cables);
TW, THHW, XHH (NEC cables)
Number of cores - 1X, 2X, ... or 3G, 4G, ...
Installation method - cable tray/ladder, in conduit/raceways, on a wall, in air, buried directly, etc.
Environmental conditions - ambient or soil temperature at the installation site.
Cable grouping - number of other cables bunched together or installed in the same area.
Cable spacing - whether cables are installed touching or spaced.
Soil thermal resistivity - for underground cables.
2. Determine the minimum cable size based on current rating.
3. Determine the minimum cable size based on voltage drop.
4. Determine the minimum cable size based on inrush current.
5. Determine the minimum cable size based on short-circuit temperature rise.
6. Select the cable based on the largest of the sizes calculated in the steps above (a simplified sketch of steps 3 and 5 follows).
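As an illustration of steps 3 and 5, here is a minimal Python sketch. It is a simplified single-phase model, not a substitute for the applicable standard: the resistance table uses approximate IEC 60228 DC values for copper at 20 °C, and the adiabatic constant k ≈ 115 assumes PVC-insulated copper — both are stated assumptions.

```python
import math

# Approximate DC resistance of copper conductors at 20 C (ohm/km, per IEC 60228).
RESISTANCE_OHM_PER_KM = {
    1.5: 12.1, 2.5: 7.41, 4: 4.61, 6: 3.08, 10: 1.83,
    16: 1.15, 25: 0.727, 35: 0.524, 50: 0.387,
}

def min_size_for_voltage_drop(current_a, length_m, v_nominal, max_drop_pct=5.0):
    """Step 3: smallest cross-section (mm^2) meeting the voltage-drop limit,
    using the simple single-phase estimate Vdrop = 2 * I * L * r."""
    for size in sorted(RESISTANCE_OHM_PER_KM):
        r_per_m = RESISTANCE_OHM_PER_KM[size] / 1000.0
        v_drop = 2 * current_a * length_m * r_per_m  # out-and-return conductor path
        if 100.0 * v_drop / v_nominal <= max_drop_pct:
            return size
    raise ValueError("no listed size satisfies the voltage-drop limit")

def min_size_for_short_circuit(i_fault_a, t_s, k=115.0):
    """Step 5: adiabatic check S >= I*sqrt(t)/k (k ~ 115 for PVC/copper)."""
    return i_fault_a * math.sqrt(t_s) / k

# Example: 32 A load over 45 m at 230 V; 6 kA fault cleared in 0.1 s.
print(min_size_for_voltage_drop(32, 45, 230))            # -> 6 (mm^2)
print(round(min_size_for_short_circuit(6000, 0.1), 1))   # -> 16.5 mm^2 minimum
```

Per step 6, the final selection is the largest of the computed minima rounded up to a standard size; in this example 16.5 mm² exceeds 16 mm², so a 25 mm² cable would be chosen.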
What is rickets?
Rickets is an abnormal bone formation in children resulting from inadequate calcium in their bones. This lack of calcium can result from inadequate dietary calcium, inadequate exposure to sunshine (needed to make vitamin D), or from not eating enough vitamin D - a nutrient needed for calcium absorption. Vitamin D is found in animal foods, such as egg yolks and dairy products.
Vitamin D is made by the body when it is exposed to ultraviolet light (sunlight). Vitamin D is also added to milk, milk products, and multi-vitamin pills. Some people who do not get enough sun exposure, milk products, or green vegetables may also develop the disease, but that rarely happens anymore. Hereditary rickets is caused by an inherited disease that interferes with the reabsorption of phosphate in the renal tubules of the kidney. Rickets can also be caused by certain liver diseases. A similar disorder, called osteomalacia, can occur in adults; there, it is caused by the inability of bone cells to calcify, or harden. Less frequently, nutritional shortage of calcium or phosphorus may produce rickets.
Rickets is a failure to mineralize bone. This softens bone (producing osteomalacia) and permits marked bending and distortion of bones. Up through the first third of the 20th century, rickets was largely due to lack of direct exposure to sunlight or lack of vitamin D. Sunlight provides the necessary ultraviolet rays. These rays do not pass through ordinary window glass. Once the role of vitamin D in rickets was discovered, cod liver oil (which is rich in vitamin D) became a favored, if not too tasty, remedy. Thanks to such supplements of vitamin D, nutritional rickets has become relatively rare in industrialized nations. It still occurs, for example, in breast-fed babies whose mothers are underexposed to sunlight and in dark-skinned babies who are not given vitamin D supplements. And in unindustrialized countries, vitamin D deficiency rickets continues to be a problem.
Rickets most commonly affects children, who may have low vitamin D levels due to poor diet or a condition (such as celiac disease) that makes it difficult for the body to absorb vitamin D and calcium. Rickets is most likely to occur during periods of rapid growth, when the body demands high levels of calcium and phosphate. |
Human, common name given to any individual of the species Homo sapiens and, by extension, to the entire species. The term is also applied to certain species that were the evolutionary forerunners of Homo sapiens (see Human Evolution). Scientists consider all living people members of a single species.
Homo sapiens is identified, for purposes of classification, as an animal (kingdom Animalia) with a backbone (phylum Chordata) and segmented spinal cord (subphylum Vertebrata) that suckles its young (class Mammalia); that gestates its young with the aid of a placenta (subclass Eutheria); that is equipped with five-digited extremities, a collarbone, and a single pair of mammary glands on the chest (order Primates); and that has eyes at the front of the head, stereoscopic vision, and a proportionately large brain (suborder Anthropoidea). The species belongs to the family Hominidae, the general characteristics of which are discussed below.
III. Structure and Physiology
The details of skeletal structure distinguishing Homo sapiens from the nearest primate relatives - the gorilla, chimpanzee, and orangutan - stem largely from a very early adaptation to a completely erect posture and a two-footed striding walk (bipedalism). The uniquely S-shaped spinal column places the center of gravity of the human body directly over the area of support provided by the feet, thus giving stability and balance in the upright position. Other mechanical modifications for bipedalism include a broad pelvis, a locking knee joint, an elongated heel bone, and a lengthened and aligned big toe. Although varying degrees of bipedalism are seen in other anthropoids, all have straight or bowed spines, bent knees, and grasping (prehensile) feet, and all use the hands to bear part of the body weight when moving about.
Complete bipedalism in the human freed the hand to become a supremely sensitive instrument for precise manipulation and grasping. The most important structural detail in this refinement is the elongated human thumb, which can rotate freely and is fully opposable to the other fingers. The physiological requirements for speech were secondarily established by erect posture, which positions the vocal cords for controlled breathing, and by the skilled use of the hands. The latter development occurs in association with the enlargement and specialization of a brain area (Broca's convolution) that is a prerequisite for refined control of the lips and tongue.
The large (averaging 1400 cc/85.4 cu in) brain of Homo sapiens is approximately double that of early human toolmakers. This great increase in size in only 2 million years was achieved by a process called neoteny, which is the prolongation of retention of immature characteristics. The juvenile stage of brain and skull development is prolonged so that they grow for a longer period of time in relation to the time required to reach sexual maturity. Unlike the early human adult skull, with its sloping forehead and prominent jaw, the modern human skull - with biologically insignificant variations - retains into maturity a proportionately large size, in relation to the rest of the body, a high-rounded dome, straight-planed face, and reduced jaw size, all closely resembling the characteristics of the skull in the juvenile chimpanzee. Its enlarged dimensions required adaptations for passage through the birth canal; consequently, the human female pelvis widens at maturity (with some sacrifice in swiftness of locomotion), and the human infant is born prematurely. Chimpanzees are born with 65 percent of their adult brain capacity; Australopithecine, an erect, tool-using near-human of 3 million years ago, was born with about 50 percent; modern human newborns have only 25 percent of adult brain capacity, resulting in an extended period of helplessness. The many neurological pathways to the rapidly growing brain must be organized and coordinated during a prolonged period of dependency on and stimulation by adults; lacking this close external bond in the early years of life, development of the modern brain remains incomplete.
The physiological adaptations that made humans more flexible than other primates allowed for the development of a wide range of abilities and an unparalleled versatility in behavior. The brain's great size, complexity, and slow maturation, with neural connections being added through at least the first 12 years of life, meant that learned behavior could largely modify stereotyped, instinctive responses. New environmental demands could be met by rapid adjustments rather than by slow genetic selection; thus, survival in a wide range of habitats and under extreme conditions eventually became possible without further species differentiation. Each new infant, however, with relatively few innate traits yet with a vast number of potential behaviors, must be taught to achieve its biological potential as a human.
V. Cultural Attributes
The human species has a unique capability for culture in the sense of conscious thinking and planning, transmission of skills and systems of social relationships, and creative modification of the environment. The integrated patterns of behavior required for planning and fashioning tools were accomplished at least 2.5 million years ago, and some form of advanced code for vocal communication may also have existed at this time. By 350,000 years ago planned hunting, firemaking, and the wearing of clothing were well established, as was possibly ritualized disposal of the dead. Evidence of religion, recorded events, and art date from 30,000 to 40,000 years ago and imply advanced language and ethics for the complex ordering of social groups required for such activities. From about that time the genus Homo began to stabilize into the one generalized species of Homo sapiens.
VI. Other Definitions
The preceding description rests on anatomical observation (see Anatomy) and current scientific theory on the origin of the Homo species. Humankind itself and the essence of being human are also defined in many other ways - religious, social, moral, and legal.
John Tyler Bonner, M.A., Ph.D., D.Sc.
George M. Moffett Professor of Biology, Princeton University. Author of Cells and Societies, On Development: The Biology of Form, and other books.
"Human," Microsoft® Encarta® Online Encyclopedia 2003
http://encarta.msn.com © 1997-2003 Microsoft Corporation. All Rights Reserved.
Afrocentrism or Afrocentricity is a world view that emphasizes the importance of African people in culture, philosophy, and history. Fundamental to Afrocentrism is the assumption that approaching knowledge from a Eurocentrist perspective, as well as certain mainstream assumptions in the application of information in the West, has led to injustices and also to inadequacies in meeting the needs of Black Africans and the peoples of the African diaspora.
As an ideology and political movement, Afrocentrism has its beginnings in activism among Black intellectuals, political figures, and historians in the context of the American civil rights movement. Molefi Kete Asante describes Afrocentricity as a "systematic nationalism." According to its critics, Afrocentrism is grounded in identity politics and myth rather than scholarship, and functions not as a coherent political ideology but as a set of tactics in the "culture wars".
Afrocentrists commonly contend that Eurocentrism has led to the neglect or denial of the contributions of African people and focused instead on a generally European-centered model of world civilization and history. Therefore, Afrocentrism is a paradigm shift from a European-centered history to an African-centered history. More broadly, Afrocentrism is concerned with distinguishing African achievements apart from the influence of European peoples. Some Western mainstream scholars have assessed some Afrocentric ideas as pseudohistorical, especially claims regarding Ancient Egypt as contributing directly to the development of Greek and Western culture. Contemporary Afrocentrists may view the movement as multicultural rather than ethnocentric. The leader of this category is Francis Ohanyido, an African philosopher and poet whose concerns are more with quality leadership and developmental issues. According to US professor Victor Oguejiofor Okafor, concepts of Afrocentricity lie at the core of disciplines such as African American studies.
Modern afrocentricity has its origins in the work of African and African diaspora intellectuals in the late nineteenth and early twentieth centuries. Afrocentricity has changed over time. Aspects have been hotly debated both outside and within Afrocentric circles.
Afrocentrism developed first as an argument among leaders and intellectuals in the Western Hemisphere. It arose following social changes in the United States and Africa due both to the end of slavery and expansion of British colonialism. Wanting to further establish their own identities in freedom, African Americans left white-dominated churches to establish their own. They pulled together in communities and often migrated to restore their families. African Americans eagerly sought education. They withdrew women and children from fieldwork as much as possible, the men received the right to vote and participate in public office, and their leaders took more active public roles despite severe discrimination and segregation.
By the late 19th century, the United Kingdom had become a superpower. Through the century, British and French governments, travelers, scholars, artists and writers increasingly turned their attentions to Africa and the Near East as places of exploration (both physical and intellectual), settlement, exploitation of new resources, and the playing out of their longstanding rivalries. The Suez Canal was completed in 1869, simplifying ship passage between Europe and the Far East. Based on their self-appraisal of the value of technology, industrialization, Western infrastructure, and culture, these European nations assumed their superiority to the peoples and cultures they encountered in Africa.
Blyden used that standpoint to show how the traditional social, industrial, and economic life of Africans untouched by "either European or Asiatic influence", was different and complete in itself, with its own organic wholeness. In a letter responding to Blyden's original series of articles, Fante journalist and politician J.E. Casely Hayford commented, "It is easy to see the men and women who walked the banks of the Nile" passing him on the streets of Kumasi. Hayford suggested building a University to preserve African identity and instincts. In that university, the history chair would teach
Universal history, with particular reference to the part Ethiopia has played in the affairs of the world. I would lay stress upon the fact that while Ramses II was dedicating temples to 'the God of gods, and secondly to his own glory,' the God of the Hebrews had not yet appeared unto Moses in the burning bush; that Africa was the cradle of the world's systems and philosophies, and the nursing mother of its religions. In short, that Africa has nothing to be ashamed of in its place among the nations of the earth. I would make it possible for this seat of learning to be the means of revising erroneous current ideas regarding the African; of raising him in self-respect; and of making him an efficient co-worker in the uplifting of man to nobler effort.
The exchange of ideas between Blyden and Hayford embodied the fundamental concepts of Afrocentrism.
In the United States, writers and editors of publications such as The Crisis and The Journal of Negro History sought to counter the prevailing view that Sub-Saharan Africa had contributed nothing of value to human history that was not the result of incursions by Europeans and Arabs. Authors in these journals theorized that Ancient Egyptian civilization was the culmination of events arising from the origin of the human race in Africa. They investigated the history of Africa from that perspective.
Afrocentrists claimed The Mis-Education of the Negro (1933) by Carter G. Woodson, an African-American historian, as one of their foundational texts. Woodson critiqued education of African Americans as "mis-education" because he held that it denigrated the black while glorifying the white. For these early Afrocentrists, the goal was to break what they saw as a vicious cycle of the reproduction of black self-abnegation. In the words of The Crisis editor W.E.B. Du Bois, the world left African Americans with a "double consciousness," and a sense of "always looking at one's self through the eyes of others, of measuring one's soul by the tape of a world that looks on in amused contempt and pity."
In his early years, W.E.B. Du Bois researched West African cultures and attempted to construct a pan-Africanist value system based on West African traditions. In the 1950s Du Bois envisioned, and received funding from Ghanaian president Kwame Nkrumah to produce, an Encyclopedia Africana to chronicle the history and cultures of Africa. Du Bois died before being able to complete his work. Some aspects of Du Bois's approach are evident in work by Cheikh Anta Diop in the 1950s and 1960s. Diop identified a pan-African protolanguage and presented evidence that ancient Egyptians were, indeed, black Africans.
Du Bois inspired a number of authors, including Drusilla Dunjee Houston. After reading his work The Negro (1915), Houston embarked upon writing her Wonderful Ethiopians of the Ancient Cushite Empire (1926). The book was a compilation of evidence related to the historic origins of Cush and Ethiopia, and assessed their influences on Greece.
The work of Cheikh Anta Diop became very influential. In the following decades, histories related to Africa and the diaspora gradually would incorporate a more African perspective. Since that time, Afrocentrists have increasingly seen African peoples as the makers and shapers of their own histories.
You have all heard of the African Personality; of African democracy, of the African way to socialism, of negritude, and so on. They are all props we have fashioned at different times to help us get on our feet again. Once we are up we shan't need any of them any more. But for the moment it is in the nature of things that we may need to counter racism with what Jean-Paul Sartre has called an anti-racist racism, to announce not just that we are as good as the next man but that we are much better.
—Chinua Achebe, 1965
Tejumola Olaniyan writes that Chinua Achebe easily might have included Afrocentrism in his list of "props." In this context, ethnocentric Afrocentrism was not intended to be essential or permanent. It was a consciously fashioned strategy of resistance to the Eurocentrism of the time. Afrocentric scholars adopted two approaches: a deconstructive rebuttal of what they called "the whole archive of European ideological racism" and a reconstructive act of writing new self-constructed histories.
At a 1974 UNESCO symposium in Cairo titled "The Peopling of Ancient Egypt and the Decipherment of Meroitic Script", Cheikh Anta Diop brought together scholars of Egypt from around the world.
A number of key texts date from this period.
Some Afrocentric writers focused on study of indigenous African civilizations and peoples, to emphasize African history separate from European or Arab influence. Primary among them was Chancellor Williams, whose book The Destruction of Black Civilization: Great Issues of a Race from 4500 B.C. to 2000 A.D. set out to determine a "purely African body of principles, value systems (and) philosophy of life".
In the 1970s, several scholars advanced theories that the complex civilizations of the Americas were the result of trans-oceanic influence from the Egyptians or other African civilizations. Such a claim is the primary thesis of Ivan van Sertima's book They Came Before Columbus, published in 1978. These hyper-diffusionist writers seek to establish that the Olmec people, who built the first highly complex civilization in Mesoamerica and are considered by some to be the mother civilization for all other civilizations of Mesoamerica, were deeply influenced by Africans. Van Sertima himself contended that the Olmec civilization was a hybrid one of Africans and Native Americans. His book, published by a major publishing house, received broad exposure. While Van Sertima rejected the notion that his findings were driven by Afrocentrism, the book received a friendly reception among Afrocentrist proponents. His theory of pre-Columbian American-African contact has met with opposition in academia, with some Mesoamericanists charging Van Sertima with "doctoring" and twisting data to fit his conclusions, and with inventing evidence. However, archaeological finds over the last two decades in South America of rock art and human skeletal remains suggest to some scholars and academicians an ancient, pre-Columbian presence of "Australoid" or "Negroid" peoples in the New World who came from Australia and Melanesia earlier than the Asian ancestors of current Native American populations.
In his (1992) article "Eurocentrism vs. Afrocentrism", US anthropologist Linus A. Hoskins wrote:
The vital necessity for African people to use the weapons of education and history to extricate themselves from this psychological dependency complex/syndrome as a necessary precondition for liberation. [...] If African peoples (the global majority) were to become Afrocentric (Afrocentrized), ... that would spell the ineluctable end of European global power and dominance. This is indeed the fear of Europeans. ... Afrocentrism is a state of mind, a particular subconscious mind-set that is rooted in the ancestral heritage and communal value system.
Although Afrocentricity is often associated with liberal or left-wing politics, the movement is not homogeneous. During the 1980s and 1990s, sociological research became increasingly preoccupied with the problem of the "black underclass". Some Afrocentric scholars began to frame Afrocentric values as a remedy for what they perceived to be the social ills of poor African Americans. American educator Jawanza Kunjufu made the case that hip hop culture, rather than being creative expression of the culture, was the root of many social ills. For some Afrocentrists, the contemporary problems of the ghetto stemmed not from race and class inequality, but rather from a failure to inculcate Black youth with Afrocentric values.
Afrocentric ideas also received a considerable boost from the cultural shift known as postmodernism and its privileging of difference, micro-struggles, and the politics of identity. Postmodernism's general assault on the authority and universalist claims of Western "culture" is also a mainstay in many Afrocentric agendas. In turn, postmodern pluralism has begun to permeate Afrocentric thought.
In the West and elsewhere, the European, in the midst of other peoples, has often propounded an exclusive view of reality; the exclusivity of this view creates a fundamental human crisis. In some cases, it has created cultures arrayed against each other or even against themselves. Afrocentricity’s response certainly is not to impose its own particularity as a universal, as Eurocentricity has often done. But hearing the voice of African American culture with all of its attendant parts is one way of creating a more sane society and one model for a more humane world. -Asante, M. K. (1988)
By the end of the 1990s, the ethnocentric Afrocentrism of the '50s, '60s and '70s had largely fallen out of favor. In 1997, US cultural historian Nathan Glazer described Afrocentricity as a form of multiculturalism. He wrote that its influence ranged from sensible proposals about inclusion of more African material in school curricula to what he called senseless claims about African primacy in all major technological achievements. Glazer argued that Afrocentricity had become more important due to the failure of mainstream society to assimilate all African Americans. Anger and frustration at their continuing separation gave black Americans the impetus to reject traditions that excluded them.
Afrocentrists argue that Afrocentricity is important for people of all ethnicities who want to understand African history and the African diaspora. For example, the Afrocentric method can be used to research African indigenous culture. Queeneth Mkabela writes in 2005 that the Afrocentric perspective provides new insights for understanding African indigenous culture, in a multicultural context. According to Mkabela and others, the Afrocentric method is a necessary part of complete scholarship and without it, the picture is incomplete, less accurate, and less objective.
Contemporary Afrocentrists may view the movement as multicultural rather than ethnocentric. They see Afrocentricity as one part of a larger multicultural movement that has begun to shift the focus of historical and cultural studies away from Eurocentrism. Studies of African and African-diaspora cultures have shifted understanding and created a more positive acceptance of influence by African religious, linguistic and other traditions, both among scholars and the general public. For example, Lorenzo Dow Turner's seminal 1949 study of the Gullah language, a dialect spoken by black communities in Georgia and South Carolina, demonstrated that its idiosyncrasies were not simply incompetent command of English, but incorporated West African linguistic characteristics in vocabulary, grammar, sentence structure, and semantic system. Likewise, religious movements such as Vodou are now less likely to be characterized as "mere superstition" and more likely to be understood in terms of links to African traditions. Scholars who adopt such approaches may or may not see their work as Afrocentrist in orientation.
In recent years Africana Studies or Africology departments at many major universities have grown out of the Afrocentric "Black Studies" departments formed in the 1970s. Rather than focusing on black topics in the African diaspora (often exclusively African American topics), these reformed departments aim to expand the field to encompass all of the African diaspora. They also seek to better align themselves with other University departments and find continuity and compromise between the radical Afrocentricity of the past decades and the multicultural scholarship found in many fields today.
I am apt to suspect the Negroes to be naturally inferior to the Whites. There scarcely ever was a civilized nation of that complexion, nor even any individual, eminent either in action or speculation. No ingenious manufactures amongst them, no arts, no sciences. ...[In] our colonies, there are Negro slaves dispersed all over Europe, of whom none ever discovered the symptoms of ingenuity; though low people, without education, will start up amongst us, and distinguish themselves in every profession. In Jamaica, indeed, they talk of one Negro as a man of parts and learning; but it is likely he is admired for slender accomplishments, like a parrot who speaks a few words plainly. - David Hume 18th century Scottish historian, philosopher and essayist.
By the mid-20th century many such overtly derogatory ideas had been rejected, but Afrocentrists contended that the denial, denigration and appropriation of black historical and cultural achievements made it important to study world history from a new perspective. Thus, Afrocentric scholars have worked to engage the biased methods and approaches used by some European scholars and the European-dominated intellectual community, in relation to all the people of Africa and the diaspora.
Because of bias due to Eurocentrism, scholars sometimes overlooked or denied Africans' agency in the creation of their own histories. For example, until recently Western scholars believed cities such as Dakar, Banjul (Bathurst), Abidjan, Conakry and others were created by Western colonizers. Although the cities were transformed by colonization (in both negative and positive ways), each of them predated colonization. Similarly, many of the existing economic and institutional patterns in Africa had origins well before colonialism.
Lynn Meskell writes that archaeologists working in Egypt have rarely considered the local and global ramifications of their interpretations of ancient history. According to Meskell, many continue to operate under the residual effects of colonialism. In 1991 Wyatt MacGaffey wrote that the bulk of scholarly work about Africa took for granted a Eurocentric distinction between "savage" and "civilized" peoples calculated to flatter the European and white audience for which it was intended. MacGaffey writes that it has only been since the 1960s that the possibility of writing any history for Africa has been generally admitted.
Nathan Glazer acknowledges that Afrocentricity and multiculturalism have played a role in shaping trends in the teaching of history and the social sciences, but he also stresses that they are not the only cultural movements responsible for the move away from now increasingly obsolete forms of Eurocentrism.
Some Afrocentric writers also include in the African diaspora the "Negritos" of Southeast Asia (Thailand, the Philippines and Malaysia) and the Africoid, aboriginal peoples of Melanesia, Micronesia, and Polynesia.
Afrocentrists who adopt this approach contend that such peoples are African in a racial sense, just as the white inhabitants of modern Australia may be said to be European. In doing so, they ignore the drastically different time frames for migration of whites from Europe to Australia within the last 200 years, and ancient peoples from the African continent to India or Polynesia tens of thousands of years ago.
In 2003, geneticist Spencer Wells' findings confirmed a clear DNA link between indigenous Africans and the Australoid peoples of India, Australia and Southeast Asia, tracing the DNA of San bushmen from southeast Africa to India and on to Australia. Earlier studies showed that some of these darker-skinned ethnic groups cluster genetically more closely with neighboring East Asians than with indigenous Africans, due to millennia of intermingling with one another in relative isolation.
Afrocentrists have adopted a pan-Africanist perspective that such people of color are all "African people" or "diasporic Africans," citing physical characteristics they exhibit in common with Black Africans. Afrocentric scholar Runoko Rashidi writes that they are all part of the "global African community."
Critics of Afrocentrism note that the Southeast Asian and Melanesian peoples did not emigrate out of Africa within any time span that relates them closely to ancient African civilizations. Wells' work indicates that the ancestors of Southeast Asian and Melanesian peoples migrated out of Africa before the ancestors of modern Europeans did. The Afrocentric designation of Southeast Asians and Melanesians as "African diaspora" is also made without reference to the self-identities of the peoples in question, who may not generally consider themselves African.
Afrocentricity contends that race exists primarily as a social and political construct. That is, that race is important because of its cultural rather than its biological significance. Many Afrocentrists seek to challenge concepts such as white privilege, so-called color-blind perspectives, and race-neutral pedagogies. There are strong ties between Afrocentricity and Critical race theory.
Afrocentrists hold that Africans exhibit a range of types and physical characteristics, and that such elements as wavy hair or aquiline facial features are part of a continuum of African types that do not depend on admixture with Caucasian groups. They cite work by Hiernaux and Hassan which they believe demonstrates that populations could vary based on microevolutionary principles (climate adaptation, drift, selection), and that such variations existed in both living and fossil Africans.
Afrocentrists have condemned what they consider to be attempts at dividing African peoples into racial clusters as new versions of what they deem older, discredited theories, such as the "Hamitic Hypothesis" and the Dynastic Race Theory. These theories, they contend, attempted to identify certain African ethnicities, such as Nubians, Ethiopians and Somalis, as "Caucasoid" groups that entered Africa to bring civilization to the natives.
Afrocentrists have also charged that a double standard exists and that Western academics have made limited attempts at defining a "true white". They believe that Western academics have traditionally limited the peoples they defined as "Black" Africans, but used broader "Caucasoid" or related categories to classify peoples of Egypt or certain other African ethnicities.
Afrocentric writer C.A. Diop expressed this belief in a double standard as follows in 1964:
"But it is only the most gratuitous theory which considers the Dinka, the Nouer and the Masai, among others, to be Caucasoids. What if an African ethnologist were to persist in recognising as white only the blond, blue-eyed Scandinavians, and systematically refused membership to the remaining Europeans, and Mediterraneans in particular--the French, Italians, Greek, Spanish, and Portuguese? Just as the inhabitants of Scandinavia and the Mediterranean countries must be considered as two extreme poles of the same anthropological reality, so should the Negroes of East and West Africa be considered as the two extremes in the reality of the Negro world. To say that a Shillouk, a Dinka, or a Nouer is a Caucasoid is for an African as devoid of sense and scientific interest as would be, to a European, an attitude which maintained that a Greek or a Latin were not of the same race.
Afrocentrists believe that European scholars define Black people as narrowly as possible, labeling as the extreme "true Negro" only those peoples living south of the Sahara. They add that all Africans who do not meet the definition of this extreme are allocated to "Caucasoid" groupings, including Ethiopians, Somalis, Egyptians and Nubians (C. G. Seligman's Races of Africa, 1966). Afrocentrists also believe strongly in the work of certain anthropologists who have suggested that there is little evidence to support that these populations are closely related to "Caucasoids" of Europe and western Asia.
For example, French historian Jean Vercoutter has claimed that selective grouping was common among scholars assessing the ethnicity of the ancient Egyptians. He has said that workers routinely classified Negroid remains as "Mediterranean", even though archaeological workers found such remains in substantial numbers with ancient artifacts. (Vercoutter 1978- The Peopling of ancient Egypt)
More recent work by geneticists, however, provides evidence that Eurasians are likely descended from populations who migrated north and east out of the Horn of Africa; hence certain genetic and phenotypical characteristics are shared among Eurasians and Northeast African groups such as Ethiopians and Somalis. Some phenotypical similarities among Somalis and Eurasians exist at a higher structural level, such as orthognathism, tooth size, keen facial features, and skull shape and size. According to anthropologist Loring Brace:
When the nonadaptive aspects of craniofacial configuration are the basis for assessment, the Somalis cluster with Europeans before showing a tie with the people of West Africa or the Congo Basin.
Genetic analyses of male DNA in the 21st century have also indicated that Somalis carry considerable E1b1b, a Y chromosome haplogroup characteristic of Northeast African, Berber, Arab, Jewish, Mediterranean and Balkan populations.
Afrocentrists argue against the classification of people they deem indigenous, "Black" Africans as Caucasoid and instead advocate use of the term Africoid to encompass the varying phenotypes of both Negroid and proto-Caucasoid African populations, as well as phenotypically Negroid Australasian populations. They contend that it is more appropriate to name Africans in a manner which reflects their geographical origin, as are Asians, as Mongoloids, and Europeans, as Caucasians.
Mainstream archaeologists and Egyptologists such as Frank J. Yurco and Fekri Hassan have stated that ancient Egyptian peoples comprised a mix of North and sub-Saharan African peoples that have typified Egyptians ever since. They said that the Egyptian people were generally coextensive with other Africans in the Nile valley.
Early Afrocentrists pointed to the work in the 1960s of Czech anthropologist Eugene Strouhal, which described physical, cultural and material links of ancient Egypt with the peoples of Nubia and the Sahara (Strouhal 1968, 1971, 'Evidence of the early penetration of Negroes into prehistoric Egypt'), and to the analyses of Falkenburger (1947), which show a clear Negroid element, especially in the southern population, sometimes predominating in the predynastic period. In 1993 C. Loring Brace et al. wrote: "The attempt to force the Egyptians into either a 'black' or a 'white' category has no biological justification. Our data show only that Egypt clearly had biological ties to the north and to the south, but that it was intermediate between populations to the east and the west, and that Egypt was basically Egyptian from the Neolithic right on up to historic times."
Research by archaeologist Bruce Williams argued for Nubian cultural influence on formation of the Egyptian kingships.
Egyptians themselves called for the inclusion of Egypt in Du Bois's early drafts of the Encyclopedia Africana. The director of the Egyptian Cultural Center in Accra wrote to praise Du Bois for having "maintained faith in the African character of Egypt's achievement," and urging that the Encyclopedia Africana keep Egypt within its Afrocentric focus.
Afrocentrists have claimed a growing scholarly acceptance of Egypt as an African culture with its own unique elements. They cite mainstream scholars like Bruce Trigger, who in 1978 decried that approaches of the past were 'marred by a confusion of race, language, and culture and by an accompanying racism'. and Egyptologist Frank Yurco, who in the late 1990s viewed the Egyptians, Nubians, Ethiopians, Somalians, and others as one localized Nile valley population, that need not be artificially clustered into racial percentages. Afrocentrists have cited 1990s mainstream studies that confirmed the varied physical character of the Egyptian people, and influence on them from other peoples of the Nile (Nilotic influence).
Afrocentrists also claimed that the ancient Egyptians made significant contributions to ancient Greece and Rome during their formative periods. They also claimed that Egyptians were black, as discussed above.
This early Afrocentric view is at odds with the conclusions of mid-20th-century Eurocentric scholars such as British historian Arnold Toynbee, and hearkens back to the findings of earlier historians. Toynbee believed the ancient Egyptian cultural sphere had died out without leaving a successor. He regarded as "myth" the idea that Egypt was the "origin of Western civilization."
There are accounts in the historical record dating back several centuries, in which writers noted Egypt's contributions to Mediterranean civilizations.
Other critics of the Afrocentric approach in the study of history include the late Egyptologist Frank Yurco, and African-American history professor Clarence E. Walker who has stated that Afrocentrism is: "a mythology that is racist, reactionary, essentially therapeutic and is eurocentrism in black face."
Cain Hope Felder, a Professor of New Testament Language and Literature at Howard University and a supporter of Afrocentric ideas, has warned Afrocentrists to avoid certain pitfalls.
Nathan Glazer writes that although Afrocentricity can mean many things, the popular press has generally given most attention to its most outlandish theories. Glazer supports many of the findings in Mary Lefkowitz's book Not Out of Africa but also recognizes that Afrocentricity may, at times, take the form of legitimate and relevant scholarship.
Often, the work that critics of Afrocentricity call "bad scholarship" is also rejected by Afrocentrists. Adisa A. Alkebulan writes that critics have used claims of what she calls "a few non-Afrocentrists" as "an indictment against Afrocentricity."
Robert Todd Carroll in The Skeptic's Dictionary refers to Afrocentrism as "pseudohistorical", and argues that the prime goal of Afrocentrism is to encourage black nationalism as well as ethnic pride in order to effectively combat the destructive consequences of cultural and universal racism. |
Threats to the Bottlenose Dolphin and Other Marine Mammals
Bottlenose dolphins and other marine mammals face a number of conservation threats due to anthropogenic, or human-induced, impacts on the marine environment. Marine mammals adapted to an aquatic environment that was free from boats, pollution, noise, and human competitors for fish resources. As human beings have created boats that can travel to any part of the ocean, new challenges have developed that threaten the well-being and even the existence of many marine mammal species. Some of the conservation threats to marine mammals include:
- Habitat Degradation
- Boat Traffic
- Fishing Interactions
- Yellowfin Tuna Fishery in the Eastern Tropical Pacific
- Pollution, and
- Direct Takes.
Laws must be created and strictly enforced to protect and conserve a diversity of marine mammal species. Two such laws are the Marine Mammal Protection Act of 1972 (amended in 1994) and the Endangered Species Act of 1973.
Human beings have exploited the resources of near shore and offshore ecosystems. Marine mammals utilize both of these environments for a variety of behaviors, including resting, foraging for prey, traveling, and socializing. Human use of these areas affects marine mammal behavior, distribution, and energetics and may cause short- (temporary) or long-term consequences for these individuals and species. Some examples of habitat degradation include areas that are disturbed by traffic from a large number of commercial and recreational vessels; pollution from sewage, toxins, and oil spills; and noise from boats, construction, dredging, oil and gas drilling, and explosions. Bottlenose dolphins and other marine mammals may avoid these areas of habitat degradation, possibly settling for less hospitable areas with fewer food resources.
As boat traffic in the oceans increases to keep up with today's society, so do the threats to marine mammals. Whale and dolphin watching vessels, along with commercial and recreational fishing boats, have the potential to present dangerous consequences to marine mammals. Poorly operated dolphin watching boats and irresponsible recreational boaters may approach dolphins too closely and too quickly in order to induce dramatic behaviors, such as bow riding and breaching, for paying customers. These boat activities can disrupt the behaviors of marine mammals and can scatter a group, which is especially harmful to females with young calves. According to the Marine Mammal Protection Act of 1972 (MMPA), these actions constitute harassment and are illegal; however, these provisions are rarely enforced. Although feeding is also forbidden under the MMPA, many boaters (including whale- and dolphin-watching vessels) feed wild marine mammals to entice them closer to boats. Marine mammals change their normal behavior patterns if fed by humans and may come to depend on people for food instead of foraging for it themselves. Dolphins that approach boats are more susceptible to harm from fishing gear, engine propellers, and poison, and to disease from humans and pets. Dolphins can also bite and injure human beings that do not give them the food that they expect! The harmful effects of commercial and recreational fishing vessels are discussed in the next section.
Both commercial and recreational fisheries threaten marine mammals. By-catch in commercial fishing nets occurs throughout the world's oceans. By-catch is the incidental capture of a species, such as the bottlenose dolphin, that is not the target of the fishermen. The MMPA and its amendments of 1994 were passed in large part to reduce dolphin by-catch in the fishing industries. However, by-catch still occurs both in legal fishing industries, such as gill netting and trawling, and in illegal drift nets that capture any marine life that swims or floats into a huge net towed behind a vessel! Fisheries also threaten marine mammals by fishing for the same food resources on which the animals depend. In many parts of the world, marine mammals are seen as competitors for dwindling fish resources and are poisoned and killed to maintain the fish resources for human consumption.
Yellowfin Tuna Fishery in the Eastern Tropical Pacific
In the late 1950s, San Diego tuna fishermen developed a new technology based upon purse-seine nets and a method known as "dolphin fishing." This fishing method relies on the phenomenon that some species of dolphins tend to school above yellowfin tuna in the Eastern Tropical Pacific (ETP), one of the most productive tuna fishing areas in the world; almost one quarter of the world's tuna catch comes from the ETP. Because of the close association between tuna and dolphins in the ETP, tuna boats can simply set nets around schools of dolphins, knowing tuna will be caught as well. Dolphins must frequently surface to breathe, making dolphin schools easy to spot on the surface. Tuna boats then use speedboats, helicopters and small explosives known as seal bombs to herd dolphins into purse seine nets, which can be up to one mile in circumference. The dolphins become entangled in the nets along with the tuna, and die. Though data on dolphin mortality in purse seine nets were very poor prior to the passage of the MMPA, it has been estimated that mortality rates in the 1960s were as high as 250,000 per year, and that over 7 million dolphins have been killed through this fishery.
Why dolphins, primarily spotted (Stenella attenuata), spinner (Stenella longirostris) and common dolphins (Delphinus delphis), and less frequently striped (Stenella coeruleoalba), rough-toothed (Steno bredanensis), bottlenose (Tursiops truncatus) and Fraser's dolphins (Lagenodelphis hosei), school with tuna is not precisely known. It seems that the tuna follow the dolphins, not vice versa; dolphin fishing works because the air-breathing dolphins, which must stay near the surface, can be corralled into the net.
One of the main predicaments in this issue is the fact that purse seine dolphin fishing is by far the easiest and most productive way to catch tuna. When dolphin fishing was first devised in the 1950s, catches of up to 250 tons of tuna per set were not uncommon. Even by the 1980s, tuna boats were still bringing in an average of 18 tons per set. Dolphin fishing has larger average catches than other methods of purse seine fishing and tends to catch larger and more sexually mature tuna. Economically, dolphin fishing is not only the most productive method, but also the least harmful to the tuna population, as it catches sexually mature fish.
Dolphin bycatch in the yellowfin tuna fishing industry was already an issue when the MMPA was passed in 1972; in fact, the MMPA specifically ordered incidental dolphin kills associated with tuna fishing to be reduced to "insignificant levels approaching zero." The tuna industry was granted a two-year grace period to develop new techniques safe for dolphins, but none were forthcoming. Federal courts enforced a decreasing quota system to reduce kills throughout the 1970s and early 1980s. A 1981 amendment to the MMPA asserted that "[the goal of zero mortality] shall be satisfied in the case of the incidental taking of marine mammals in the course of purse-seine fishing for yellowfin tuna by a continuation of the application of the best marine mammal safety techniques and equipment that are economically and technologically practicable."
In 1984, further MMPA amendments were passed to address the dolphin/tuna issue. The phrase "insignificant levels approaching zero" was redefined as a quota of 20,500. The Department of Commerce was required to ban imports of purse-seine-caught tuna from foreign fishing fleets without dolphin kill rates comparable to the US fleet's by 1991, and from countries where a governmental dolphin protection program had not yet been instituted.
By 1988, the incidental dolphin kill rate of the US tuna fleet had dropped, but foreign tuna boats still killed about four times as many dolphins as US tuna fishermen. It was at this time that the reauthorization hearings of the MMPA were held. The testimony of an American named Sam LaBudde was crucial to these proceedings: LaBudde had worked as a cook on a Panamanian tuna boat and provided graphic footage of dolphins being killed. These proceedings resulted not only in the reauthorization of the MMPA itself, but new 1988 amendments required US boats to have a special panel made of fine mesh netting, called a Medina panel, in the rear of the net to help facilitate the release of dolphins from the nets. The 1984 amendments were also clarified, directing foreign fleets to prove kill rates of no more than twice the US rate (a quota of 20,500) for the year 1989, and no more than 1.25 times the US rate in 1990.
Also in 1988, environmental groups, frustrated by the fact that dolphins were still being killed in tuna nets at all, launched a nationwide consumer boycott of the three major tuna processors in the US: Heinz's Starkist Tuna, Ralston Purina's Chicken of the Sea and Pillsbury's Bumble Bee Tuna. Together, these three companies controlled 70% of the US tuna market. In 1990, after two years of concerted efforts by environmental groups, all three tuna processors voluntarily agreed to accept only "dolphin-safe" tuna, meaning tuna that was not caught by purse seine dolphin fishing or drift nets.
Also in 1990, the Dolphin Protection Consumer Information Act was passed. This act mandated standards for labeling tuna "dolphin-safe," following the same guidelines the tuna companies had: no tuna caught by setting a purse seine net around dolphins could be labeled "dolphin safe." Finally, the International Dolphin Conservation Act was passed in 1992. This provided for a five year moratorium on purse seine net "dolphin" fishing beginning in 1994. The US fishing fleets were prohibited from chasing, capturing and setting of nets on dolphins at all.
With the US tuna market now "dolphin-safe," the rest of the world began to take notice. At an international meeting of the Interamerican Tropical Tuna Commission (IATTC) in 1992, the "La Jolla Agreement" established the International Dolphin Conservation Program (IDCP). This represented the first time a fisheries organization recognized the need to deal with the issue of marine mammal deaths in a fishery. The IDCP called for a decrease in dolphin deaths, with a goal of under 5000 deaths by the year 2000. The program includes 100% observer coverage, captain and crew training in dolphin release techniques, data collection on dolphin biology and bycatch, and payment of funds by the tuna fishermen to support the observer program. Only those vessels which comply with these regulations are allocated portions of the annual limit, and are the only vessels allowed to set nets on dolphins.
In October of 1995, meetings were held between US government officials, officials from Belize, Colombia, Costa Rica, Ecuador, France, Honduras, Mexico, Panama, Spain, Vanuatu and Venezuela, and representatives from a number of environmental groups such as Greenpeace, the World Wildlife Fund, the Center for Marine Conservation, the National Wildlife Federation and the Environmental Defense Fund. The result of these meetings was the Panama Declaration, which would make the La Jolla Agreement binding under international law. Furthermore, the US would lift the embargoes on foreign tuna caught in purse seine nets and would change the definition of "dolphin safe" as stipulated in the Dolphin Protection Consumer Information Act of 1990 to mean that no dolphin mortality was observed when the tuna was caught. The IDCP would become legally binding in the US and Mexico and would set the elimination of dolphin mortality in the fishing program as a goal.
In the summer of 1997, the International Dolphin Conservation Program Act, legislation (known as HR 408 and S 39) implementing the Panama Declaration into law, was passed by both the House and Senate and was signed by President Clinton on August 18th, 1997, becoming Public Law No. 105-42. The consequence of this new law is that US tuna fishermen will again be able to catch tuna using the purse seine dolphin fishing method, and the tuna embargoes on nations using this method will be lifted. A quota of 5,000 dolphins killed per year was set, with no provisions in place for future quota reduction.
Before the new "dolphin-safe" label (meaning there was no observed dolphin mortality) could go into effect, studies were conducted by the National Marine Fisheries Service to investigate the impact of purse seine fishing on dolphins. These studies found that the impacted species of dolphin (including two species listed as "depleted" under the MMPA, the Eastern spinner dolphin and the Northeastern offshore spotted dolphin) were not recovering as expected. Additionally, it was shown that the stress induced by the dolphin fishing method was likely to have a population-level effect (via stress-induced changes in immune system and reproductive system function, as well as severe muscle damage, or capture myopathy, from the chase itself). However, in April of 1999, Secretary of Commerce William Daley issued a rule implementing the change in the dolphin-safe label, claiming that NMFS was unable to prove that the fishery was causing a significant adverse impact on the depleted dolphin populations. This ruling was challenged in a California court in Brower vs. Daley, where Judge Thelton Henderson blocked the labeling change from going into effect on April 11, 2000. This ruling is currently being appealed; however, for now, the former and stricter standards of "dolphin-safe" remain in effect.
Environmental groups have divided sharply over this new law and its effects on dolphin mortality. Supporters include Greenpeace and the Center for Marine Conservation; these groups feel that only international cooperation can eliminate dolphin deaths in the tuna fishing industry worldwide. When the US tuna fishing fleet was prohibited from setting nets on dolphins in 1994, many US boats re-registered in countries that allowed dolphin fishing of tuna. Furthermore, the primary motivation behind the La Jolla Agreement for the signing nations was to re-enter the lucrative US tuna market. The environmental groups backing the new law feared that if the US did not accept the La Jolla Agreement, the nations involved would no longer have an incentive to pursue dolphin-safe tuna fishing at all. The La Jolla Agreement has been successful in lowering dolphin deaths in the ETP, reducing that figure by over 90% since 1990, to 2,657 deaths in 1996. If the US market had remained closed, the La Jolla Agreement may have been abandoned completely, and worldwide, more dolphins would be killed.
However, environmental groups including the Earth Island Institute, the Sierra Club and Friends of the Earth were staunchly opposed to the new law. The dolphin fishing technique causes severe stress upon the surrounded dolphins and can have serious consequences; many dolphins may still die unobserved later. Furthermore, nighttime sets, murky water or unreliable observers may also result in a falsely labeled "dolphin-safe" catch. Finally, the currently lowered number of annual dolphin deaths may not be due to the La Jolla Agreement, but to the fact that fewer fishermen are pursuing tuna at all. As foreign fishermen are currently unable to sell their tuna in the US tuna market, fewer boats may be fishing for tuna in the ETP since they have nowhere to sell it.
The practical result of this legislation is that tuna caught by encircling dolphins may soon be appearing on US shelves; Mexico and Ecuador currently have "affirmative findings," meaning the embargoes on incoming tuna from those countries have been lifted. Tuna that does not bear the "dolphin-safe" label was caught during a set where dolphins were observed dead or mortally injured. For now, tuna bearing the "dolphin-safe" label still means it was not caught in conjunction with dolphins. However, pending appeals, it could soon mean that the tuna could still have been caught by setting nets on dolphins, but that no dolphins were seen killed or mortally injured during the set. Despite these issues, the US Tuna Federation, which represents the interests of the US tuna industry, including Bumblebee, Starkist and Chicken of the Sea, has reported that all US canned tuna processors intend to keep their previous policy of only purchasing tuna that was not caught by setting nets on dolphins.
Pollution occurs throughout the oceans due to human sewage, oil spills, toxic leaks and dumping, and noise. Noise pollution is a rising concern among marine mammalogists today. Marine mammals adapted to oceans that were devoid of human-produced sounds. Today, the oceans are being bombarded with sounds from transportation vessels; dredging and construction; oil and gas drilling; seismic exploration; explosions; ocean and geologic studies such as ATOC, or Acoustic Thermometry of Ocean Climate; and sonar such as LFA, or Low-Frequency Active, sonar. Loud sounds have the potential to cause ear damage or destruction, while low-frequency sounds can affect communication, prey and predator detection, and navigation. In fact, marine mammals depend on sound for almost every daily behavior! Although most of the consequences of the sounds mentioned above are still unknown, the consequences of some of these noises on marine mammals may be devastating. Research is necessary and is now being conducted by a number of researchers to determine the effects of high- and low-frequency as well as loud sounds on marine mammals within the marine environment. Further information on LFA sonar can be found at: http://www.publicaffairs.noaa.gov/releases2001/mar01/noaa01037.html or http://www.surtass-lfa-eis.com/
Although it has been illegal to take marine mammals in the United States since the MMPA of 1972, marine mammals are still killed in some parts of the world for meat, oil, and leather. In several countries around the world such as Turkey, Peru, Sri Lanka, and Japan, dolphins are being killed for human consumption and to decrease the competition for fish resources.
Marine Mammal Protection Act of 1972 (amendments of 1994) (MMPA)
The MMPA, the primary federal legislation designed to protect marine mammals, was originally passed in 1972 due in part to concern over dolphin and other marine mammal by-catch in commercial fisheries. The MMPA prohibits a "take" of a marine mammal, which is anything that may harm, harass, or kill a marine mammal. In the amendments of 1994, two categories of fishery threats to marine mammals were identified: Category I fisheries, which frequently kill or seriously injure marine mammals, and Category II fisheries, which occasionally kill or injure them. In the Atlantic Ocean, Gulf of Mexico, and United States' Caribbean waters, bottlenose dolphins were among the species listed as potentially injured or killed in three of four Category I fisheries and three of six Category II fisheries. The MMPA is enforced by the National Marine Fisheries Service (NMFS), which is part of the Department of Commerce and the National Oceanic and Atmospheric Administration (NOAA). Information regarding the MMPA regulations can be found at: http://www4.law.cornell.edu/uscode/16/ch31.html or at NOAA's web page: http://www.nmfs.noaa.gov/
Endangered Species Act of 1973 (ESA)
The ESA is designed to protect species that are in danger of extinction. The National Marine Fisheries Service (NMFS) is responsible for marine species that are protected by the ESA. All decisions that determine which animals should be considered threatened or endangered are based on scientific and commercial data, rather than on the economic needs of human beings. The species that are considered endangered are those that are in imminent danger of extinction in all of their significant habitats. A threatened species is one that is likely to become endangered in the foreseeable future. A recovery plan is designed for all endangered and threatened species to aid in their conservation and recovery. Marine mammal species within the United States' waters that are endangered include the blue, bowhead, fin, humpback, Northern right, sei, and sperm whales, along with the Caribbean and Hawaiian monk seals. The only domestic marine mammal currently listed as threatened is the Steller sea lion. Other protected species in international waters include the Chinese and Indus River dolphins, the gray whales of the Western North Pacific population, the Gulf of California harbor porpoise, the Southern right whale, the Mediterranean monk seal, and the ringed seal. More information on these species can be found at: http://www.nmfs.noaa.gov/prot_res/species/ESA_species.html
More information on the ESA can be found on NMFS's web page at: http://www.nmfs.noaa.gov/prot_res/laws/ESA/ESA_Home.html or a full version of the act can be seen at: http://www.nmfs.noaa.gov/prot_res/laws/ESA/esatext/esacont.html
Today’s machine tools routinely operate on a 24/7 basis with little, if any, downtime allowed for routine maintenance. They’re also working faster, with increased metal-removal capabilities, and are more versatile in terms of machining capabilities. But machining faster means higher accelerations and metal-cutting speeds, as well as fast traversing motions. Higher speeds place greater demands on machine tools’ linear drives. At the same time, high-speed cutting and dry-machining hardened and exotic materials generate lots of fine particles, which can accelerate wear on unprotected drives.
Wear in ball screws
Precision-ground ball screws continue to be the preferred drive for machine tools. Their compact design, economical cost, and efficiency are key advantages, especially for high-speed machines. However, the life of a drive screw no longer depends solely on its rated load capacities in relation to the mean applied load. Two other factors must now be considered when designing modern high-speed machines — lubrication and contamination-induced wear.
For steel balls rolling between ground-steel surfaces, there are generally three types of friction. The first is direct contact, or so-called solid friction, which occurs when there is no lubricant between them. The second is elastohydrodynamic (EHD) friction, which arises when there is always an oil film between balls and race. The third, mixed friction, means there is some oil film acting as a lubricant but also some direct contact between balls and race.
Compared to other machine components, such as deep-groove or angular-contact ball bearings (or even plain bearings), ball screws exhibit more sliding and twisting. Sliding stems from the lack of retainers holding balls in place. It can be reduced by adding ball chains or plastic spacers to reduce friction between adjacent rolling balls, but both are subject to design limitations. Ball chains, for example, have to fit through ball returns. Twisting is due to the inclined contact lines similar to those found in angular-contact ball bearings. There is also more mixed friction due to lower relative speeds. Ball returns in which balls do not roll uniformly cause additional friction.
Ball screws are usually sized based on the Hertzian pressure of an applied load and the number of load cycles (the classic L10 life equations). The applied load causes material fatigue over time. This is reflected in the equation commonly used to calculate life expectancy:
L10 = (Ca / Fm)³ × 10⁶,
where Fm = mean equivalent load, Ca = dynamic load rating, and L10 = the number of hours or revolutions that 90% of bearings will survive.
From this equation, an engineer might conclude that either reducing Fm or increasing the load rating by using a larger unit should increase a bearing’s useful life. This would be a valid conclusion for conventional, slower-turning bearings. Modern machinery moves much faster, a regime where abrasion and adhesion, factors left out of the classic L10 equation, become more important.
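To make that sensitivity concrete, here is a minimal sketch of the classic L10 calculation; the load rating and mean load below are made-up example values, not figures from any manufacturer's catalog:

```python
def l10_revolutions(ca_newtons: float, fm_newtons: float) -> float:
    """Classic fatigue life: L10 = (Ca / Fm)^3 * 10^6 revolutions."""
    return (ca_newtons / fm_newtons) ** 3 * 1e6

# Assumed example values (N), purely for illustration:
ca, fm = 30_000.0, 6_000.0
print(f"L10 = {l10_revolutions(ca, fm):,.0f} revolutions")   # 125,000,000

# Doubling Ca (or halving Fm) multiplies predicted life by 8, which is
# why the classic equation alone tempts designers to simply oversize.
```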
Abrasive wear is caused by contamination inside the ball-screw nut. Adhesive wear stems from microwelding — a result of the lubricant-film breakdown. The effect of these wear factors is premature failure from loss of preload. Because advanced servodrives are sensitive to changes in friction, avoiding preload loss becomes critical.
Therefore, to extend the life of ball screws, cleanliness and lubrication to reduce abrasion and adhesion must be considered, as well as dynamic-load capacity.
Lubrication and sealing
Cutting tools can achieve extended lives if they are made of ceramic or carry hard coatings. Likewise, ball-screw life can be improved with better sealing and lubrication.
It is normally not possible to cover or house ball screws tightly enough to completely keep out contaminants such as dust and finer particles. In many cases, wipers seal nuts from abrasive particles. Regular, noncontacting wipers (labyrinth seals) will not keep out small particles because they inherently have a gap of several tenths of a millimeter between the seal and shaft. Brush wipers are also ineffective in keeping out contaminants.
Lip seals made of elastic materials are substantially better. Like shaft seals, they provide a tight fit. But because these seals typically include a lip that must slide against the screw with considerable pressure to ensure a tight seal, they cause high friction and wear. And in large-pitch ball screws, the lip seals may flex axially, compromising their effectiveness.
Currently, the best solution is segmented plastic wipers with several edges oriented at approximately right angles to the direction of motion. The wipers mount on both ends of the nut and must fit closely because their effectiveness depends largely on their line of contact with the screw. Segments must be stiff enough to remove tough dirt from the screw but remain flexible enough to stay in contact despite the spring preload.
When specifying segmented wipers, ensure that wipers with protruding “fingers” do not contact the elastic end-of-travel bumpers. In such cases, flush-mounted wipers are necessary. Wedge-shaped segments or beveled edges are best when large amounts of dirt are present.
Combining seals and lubrication
Wipers with the added ability to store and distribute lubricant are becoming increasingly popular, especially as environmental regulations on oil disposal become more restrictive. This type of wiper substantially reduces lubricant consumption yet still properly lubricates the ball nuts. Even though the amount of lubricant lost using these wipers is less than that from conventional auto-lubrication devices, regular replenishment of the reservoir is still necessary — and close attention must be paid to cleanliness. As a general rule, when lubricant flow is reduced, sealing must be improved, as the ball nut is no longer flushed with fresh lubricant.
Segmented plastic wipers, combined with felt rings, have proven to be an especially effective combination. The first “stage” in combination wipers can be made from plastics with the mechanical properties to create the right amount of contact with the shaft and guide contaminants away from the screw. The second “stage” is a felt ring that stores lubricant and distributes it as a thin, even film. Felt rings can store up to 75% of their volume in lubricant and are easily refilled.
Another advantage of combination wipers, besides reducing oil consumption, is that they can be used with any liquid lubricant — liquid grease or oil. Liquid grease is especially well suited for applications with short-stroke oscillating moves or low speeds.
As an added bonus, the felt absorbs tiny particles that may get past the plastic wiper, letting it work as a second wiper. This is especially beneficial in applications with fine, abrasive dust or sludge, such as grinding machines. Note, however, that felt cannot be used alone as a wiper, except in relatively clean conditions, as it absorbs coolant and particles. Thus, felt ensures trouble-free operation and enables a 90% reduction in lubricant consumption, but only when used with a good plastic wiper.
Other products on the market combine sealing and lubricant dispensing within a polymer ring. But unlike felt, the polymer needs heat to operate properly, which adds undesirable friction. The ring, impregnated with oil, forms a seal around the thread, often aided by a spring preload. Friction created by the plastic ring rubbing on the rotating thread warms the ring, which lets oil flow out of the polymer. The polymer can reabsorb some of the oil as it passes over the thread, but there is no reservoir to refill. Therefore, these polymer wipers are often used in multiple sets, greatly increasing ball-nut length, to provide enough lubricant. And there is no choice between grease and oil; you must use whatever oil is embedded in the plastic.
Substantial improvement in ball-screw life is possible, even in the most demanding applications, if you provide adequate lubrication and protection from abrasive contamination. A combination wiper that can store and replenish lubricant, together with an effective finger type seal, also improves ball-screw reliability. The cost for this option is minor compared to the downtime from unexpected and premature failures. And, in some cases, long-term lubrication or even “for-life” lubrication may be feasible if the proper lubricant is selected.
Edited by Stephen J. Mraz |
WHO Report on Global Surveillance of Epidemic-prone Infectious Diseases - Yellow fever
Immunization is the single most important measure for preventing yellow fever. In populations where vaccination coverage is low, vigilant surveillance is critical for prompt recognition and rapid control of outbreaks. Mosquito control measures can be used to prevent virus transmission until vaccination has taken effect.
Yellow fever vaccine is safe and highly effective. The protective effect (immunity) occurs within one week in 95% of people vaccinated. A single dose of vaccine provides protection for 10 years and probably for life. Immunization with yellow fever vaccine can and should be part of the routine immunization system (administered during the same visit as measles vaccine). In addition, preventive immunization can be done in mass "catch-up" campaigns to increase immunization coverage in areas where it is low. This is often done on an emergency basis after the beginning of an outbreak. WHO strongly recommends routine childhood vaccination, which includes yellow fever. This is more cost effective and prevents more cases (and deaths) than emergency immunization campaigns to control an epidemic. Mosquito control measures can also play a role in reducing the risk of yellow fever, but are not as effective as immunization. |
What is really driving our need for food, besides the pleasure of taste?
We all know that this need is related, in some way, with sustaining life, but just how, exactly? The task of converting food into energy, and back again into organic compounds, is carried out by our metabolism, which, for instance, allows us to do sports using food as fuel. This magic happens thanks to a complex set of chemical reactions that break down molecules into energy and vice versa. Metabolism is thus what allows our cells to “carry on” and keeps us alive. Studying how metabolism works will therefore allow us to unveil the fundamental processes that sustain life. Additionally, a deep understanding of these processes may also help devise strategies to tackle all those diseases in which cellular metabolism deviates from its normal behaviour, as, for instance, in the case of cancer.
Although one may imagine that studying these problems is mainly an experimental task, we have today enough computational resources and empirical data to reproduce biological systems in a computer, saving much time and money. This is exactly what we have done in our work: we have simulated a metabolic system in a computer and studied its behaviour under different circumstances. As is normally done in science, where one studies a model system first, we examined the reduced metabolic network of Escherichia coli, a bacterium commonly found in the lower tract of our intestines. Much like us, bacteria need to “eat” and convert the nutrients they get from the environment into growth and energy, but they are much simpler organisms and far easier to study than human cells.
Although computational studies of cell metabolism have been carried out before, those works generally focused on one possible state only, i.e. the one that maximizes growth. In a simile, they only considered the situation in which, having eaten chocolate, your body invests it all into growth. But, of course, depending on the circumstances, your metabolism may need to use food in some other way, for instance to convert it into physical activity. Our work takes a look at all possible metabolic states, what we call in our paper the feasible flux phenotypic space (FFP), for different growth conditions of the bacterium E. coli. The full characterisation of this space provided us with a reference frame to quantitatively assess the difference between metabolic states. With this tool at our disposal, we were next able to check that the maximal-growth condition is rather different from the bulk of possible states. This finding is very interesting from an evolutionary perspective, since it suggests that, to attain maximal growth, which is often observed experimentally, evolution pushed metabolism towards a rather atypical region of the metabolic space. Last but not least, we were also able to reproduce and locate in the space some specific metabolic states that were observed experimentally but had not been captured by other computational approaches. Quite interestingly, these states resemble a metabolic behaviour that is found in cancerous cells: by describing these states at the level of the whole cellular system, our work may ultimately provide hints on how to rescue a physiological state.
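For the curious, the standard computational workhorse behind studies like this is constraint-based modelling: steady-state mass balance (S·v = 0) plus flux bounds defines the space of feasible states, and a linear program picks out the maximal-growth one. Below is a minimal, purely illustrative sketch with a toy four-reaction network; it is an assumption for demonstration, not the E. coli model analysed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (assumed): rows = metabolites A, B; columns = reactions
# v1 (uptake -> A), v2 (A -> B), v3 (A -> byproduct), v4 (B -> "growth").
S = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
bounds = [(0.0, 10.0)] * 4          # assumed flux limits, arbitrary units

# The feasible flux space is {v : S @ v = 0, within bounds}; flux-balance
# analysis finds the point maximizing growth (linprog minimizes, so we
# negate the growth reaction's coefficient).
res = linprog(c=[0.0, 0.0, 0.0, -1.0], A_eq=S, b_eq=[0.0, 0.0],
              bounds=bounds)
print("max growth flux:", -res.fun)   # 10.0 for this toy network
print("at flux vector v =", res.x)
```

The FFP space characterised in the paper is, in effect, that whole constraint polytope, sampled rather than merely optimized.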
ICREA Research Professor
Mapping high-growth phenotypes in the flux space of microbial metabolism.
Güell O, Massucci FA, Font-Clos F, Sagués F, Serrano MÁ
J R Soc Interface. 2015 Sep 6
Not a true ivy plant, western poison ivy is well known for producing a skin irritant.
General description: Western poison ivy is a smallish, nonclimbing shrub usually about knee high, with a single stem and only a few stubby branches or no branches at all.
Leaves: The leaves can be relatively large but always have three leaflets. Virginia creeper (Parthenocissus quinquefolia) and woodbine (P. vitacea) are similar but have 5 leaflets instead of 3. Jack-in-the-pulpit (Arisaema triphyllum) and the trilliums (Trillium spp.) do have 3 leaflets, but they have nonwoody stems. It may be enough to keep in mind that western poison ivy has a short woody stem and 3 leaflets.
Habitat and range
Western poison ivy occurs essentially statewide and is common everywhere except the northern tier of counties. Although it is primarily a forest species, it is adapted to a remarkably wide range of ecological conditions. It occurs in the interior of mature hardwood forests but also in young successional forests, forest ecotones, and brushy thickets. It is also found in native prairies (where fire has been suppressed), sand dunes, talus, rock fields, and floodplains. It seems to be absent only from permanently wet habitats.
Population and management
Western poison ivy often forms colonies, sometimes 20 ft (6 m) or more across. These colonies grow quickly and can spread aggressively, especially in damaged habitats. This is certainly the case on roadsides, ditch banks, utility rights-of-way, and old fields. The plant is also notoriously adept at encroaching into mowed lawns from adjacent woods.
About the poison
The sap contains a toxic oily compound (3-n-pentadecyl-catechol) that is found in the leaves, flowers, stems, and roots. If any portion of the plant is bruised or broken, the poison may exude onto the surface, which is how people typically come in contact with it. It is initially a clear liquid, but it turns into a black gummy substance in a few hours and can remain toxic for an indefinite period, reportedly for several hundred years.
Contact may be direct between plant and bare skin, or the poison may travel on the fur of a dog, camping equipment, clothing, or other intermediary. The compound is not volatile, so it is not normally transmitted through the air, although it can be carried as droplets on particles of ash in the smoke of burning plants. Such particles are sometimes inhaled, causing serious problems, or they can settle on surfaces and be picked up from there.
Sensitivity to poisoning can vary from individual to individual and can change over time. Very few individuals are immune, and those that appear so could easily lose their immunity unexpectedly. The poison is absorbed by the skin almost immediately, although symptoms may not appear for 12 to 24 hours or, in some cases, several days. Washing the exposed skin with soap and cold water (warm water speeds absorption through the skin) probably will not prevent symptoms from appearing unless done within 1 to 3 minutes of exposure, but washing can remove residual poison and prevent it from being spread. The fluid in the blisters does not contain the poison and cannot spread the rash. A number of animal species regularly eat the fruit and leaves with no apparent harmful effects; in fact, it appears that only humans are susceptible.
An unfortunate victim of hunting, disease, natural disasters and habitat loss, the mountain chicken population has recently undergone catastrophic declines, estimated at around 80 percent since 1995 (1) (2). On Dominica, this Critically Endangered frog is favoured for its meaty legs, which are cooked in traditional West Indian dishes, and which are in fact the country’s national dish (6). Annual harvests were thought to be taking between 8,000 and 36,000 animals before a ban on hunting was introduced and, as a result of this exploitation, the population on the island is thought to be near extinction (1).
The mountain chicken is particularly vulnerable to overharvesting as it has a relatively small brood size, limiting its ability to recover from heavy losses. The removal of breeding females is particularly damaging, as the tadpoles are dependent upon the females for food and moisture. The species’ large size, loud calls and tendency to sit in the open also makes it a particularly easy target for hunters (3).
The mountain chicken has also lost huge areas of its habitat to agriculture, tourist developments, human settlements and, on Montserrat, volcanic eruptions. On Dominica, the species is largely confined to coastal areas where there is great demand for land for construction, industry and farming, while on Montserrat, volcanic activity since 1995 has exterminated all populations outside of the Centre Hills (1) (3). Human encroachment upon the species’ habitat has also brought it into contact with a range of pollutants, including the highly toxic herbicide Gramazone, which is known to kill birds and mammals. Predation from introduced mammals, such as feral cats, dogs, pigs, rats and opossums, is also a relatively new threat to the species on Dominica (3) (7).
Perhaps the greatest, and least understood, threat to the mountain chicken today is the deadly fungal disease chytridiomycosis (1). This disease, which has wiped out many amphibian populations across the globe, became established on Dominica in 2002, and frog populations on the island declined by approximately 80 percent within 2 years (2) (7). The Dominican population is now so small that there may not be enough individuals to ensure the survival of the mountain chicken on this island. The fungus was introduced to nearby Montserrat in 2009, potentially via infected frogs hiding in shipments of fruit and vegetables. Like on Dominica, huge mountain chicken declines of 80 to 90 percent occurred. By July 2009, the last remaining healthy population of mountain chickens on Montserrat had succumbed to the disease (7). |
In 1930, Clyde W. Tombaugh completed the search for Planet X, the outermost planet at the far reaches of the solar system. This object was named Pluto. Then, on 24 August 2006, the IAU (International Astronomical Union) formally voted on a new definition of the word 'Planet', which excluded Pluto. For seventy-six years, our solar system had been understood to have nine planets; then suddenly it went back to eight. The response was furious. Both the astronomical community and the general public were up in arms over this seemingly arbitrary decision. To the public, it was a travesty that school textbooks would have to be re-written, and astronomers felt that the vote had been manipulated by scheduling it for the final day of the General Assembly in Prague, when most of the attending astronomers had already flown home. The state of Illinois even passed a law decreeing that Pluto was still a planet (rather like the attempt to legislate a new value for Pi!), and to this day there are any number of websites, clubs and T-shirt vendors devoted to reversing the decision. So what was going on? What did this international body of astronomers have against poor Pluto?
Once Pluto had been discovered, named and classified, our picture of the solar system was pretty clear-cut. One star, nine planets, a bunch of comets and a multitude of asteroids broadly spread out in a region of space roughly between Mars and Jupiter. But there was one huge problem that had managed to slip through the cracks: there was not, and never had been, a definition for the word "Planet". Everybody just knew what a planet was, which actually meant that nobody knew what it was. It was just accepted that planets were one of nine large things orbiting the sun, which tells you almost nothing about planets at all. While this was a hugely unscientific way to approach things, in practice it didn't matter, since there were only nine known planets in the entire universe. Nobody using the word risked confusing their audience.
Shortly after the discovery of Pluto, astronomers started speculating about what might lie further out. Comets were well known and studied, but every now and then a very bright new comet would appear, following an orbit so large that it must have fallen in from way beyond the orbit of Neptune. Such comets would shoot by dramatically, and then vanish once more into the cold depths of space, never to be seen again. Astronomers began speculating that there might be a vast number of tiny icy objects orbiting out beyond Neptune, and that occasionally a gravitational interaction would eject one inwards, where it would fall past the sun and become one of these Long Period Comets. In 1950, one Gerard Kuiper speculated that these objects could have been arranged in a structure similar to the asteroid belt, only colder and further away. Despite the fact that Kuiper didn't believe that such a belt actually existed (it was still thought that Pluto must be about the size of Earth, and so would have scattered such a belt long ago), it now bears his name: the Kuiper Belt.
In 1978, when Pluto's largest moon was discovered and astronomers were able to determine just how small Pluto was, the idea of the Kuiper Belt became plausible, and the first object in the belt was discovered in 1992 by David Jewitt and Jane Luu. This opened a floodgate of new Trans-Neptunian Objects (or TNOs, meaning literally "things that are past Neptune"; scientists like using big words to describe simple things!), of which there are now many hundreds known and catalogued. This put Pluto in the uncomfortable situation of having a lot more in common with the TNOs than with any of the other 8 planets. After all, it was extremely small (two-thirds the size of our own moon, and not much larger than the largest asteroid, Ceres), its orbit was so elliptical that sometimes it was actually closer to the Sun than Neptune, and its orbit was inclined away from the ecliptic.
Then, in 2003, a team of astronomers at the Palomar Observatory discovered a TNO with a diameter of 2500 km - larger than Pluto.
As if all that wasn't bad enough, planets were starting to be found orbiting other stars. The technology couldn't detect anything on the scale of our own familiar planets, but it was finding monstrous super-planets many times the size of Jupiter. These "planets" were so huge that they were on the verge of igniting and becoming stars - they were the often-theorised Brown Dwarfs. These things seemed to sit on the boundary between star and planet, highlighting more and more the need for a strict definition.
Astronomers began debating how to resolve the mess. Unfortunately it was harder than it seemed to come up with a scientifically rigorous definition that wouldn't change the common meaning of the word. Some suggestions would eliminate Pluto, others would end up giving us over three hundred planets around our own Sun (as the number of discovered TNO's and exoplanets increased), and others lacked clarity. After much debate at the 2006 General Assembly of the IAU in Prague, a vote was held during the closing ceremony, and the issue was considered closed. The new definition was as follows:
A Planet is defined as any celestial body that:
- is in orbit around the Sun;
- has sufficient mass for its self-gravity to pull it into a nearly round (hydrostatic equilibrium) shape; and
- has "cleared the neighbourhood" around its orbit.
It's not hard to see why astronomers revolted. It was highly irregular to relegate the decision on such an important topic to a quick vote during the closing ceremony, and many have suggested that this was done deliberately to prevent dissenting astronomers from voting against the new definition. The new definition also raises more questions than it answers, in that it appears to preclude planets around stars other than the Sun. The third term especially is a problem, since to make it fit many of the existing planets you have to do a lot of hand-waving to explain what is meant by "cleared the neighbourhood". It's not at all clear, and it's plain why so many astronomers claim after the fact that, had they been present, they would have voted against it.
Nevertheless, for all its problems, the new definition is a step forward. It is a vast improvement on the old situation, and the general public's issue with it (the "demotion" of Pluto) is a purely emotional one. It is widely accepted within the astronomical community that had we known more about the solar system in 1930, Pluto would never have been classified as a planet in the first place. And anyway, what's in a name? Pluto continues on its weird orbit, out in the cold wastes of the outer solar system, blissfully unaware and uncaring of how we choose to categorise it.
Central American Federation
Central American Federation or Central American Union, political confederation (1825–38) of the republics of Central America—Costa Rica, Guatemala, Honduras, Nicaragua, and Salvador. United under a captaincy general in Spanish colonial times, they gained independence in 1821 and were briefly annexed to the Mexican empire formed by Agustín de Iturbide. The nations joined in a loose federal state, appointing (1825–29) as first president Manuel José Arce, who was succeeded (1830–38) by the liberal leader Francisco Morazán. Political and personal rivalries between liberals and conservatives, poor communication, and the fear of the hegemony of one state over another led to dissolution (1838) of the congress and the defeat (1839) of Morazán's forces by Rafael Carrera. In 1842, Morazán made an abortive attempt to reestablish the federation from Costa Rica. Later efforts by Nicaragua, Honduras, and Salvador failed, and the attempts of Justo Rufino Barrios (1885) and José Santos Zelaya (1895) only increased existing enmities. At the Central American conference of 1922–23, the U.S. recommendation of a union was not favorably received, partly because of earlier U.S. policies in Panama and Nicaragua. Nevertheless, geography, history, and practical expedience are factors that constantly encourage union. In 1951, the Organization of Central American States was formed to help solve common problems, and in 1960 the five nations established the Central American Common Market.
See T. L. Karnes, The Failure of Union: Central America, 1824–1960 (1961); N. Maritano, A Latin American Economic Community (1970).
Most snakes indigenous to the United States are not poisonous. The exceptions are copperheads, coral snakes, rattlesnakes, and water moccasins [source: CDC]. If you're bitten by one of these snakes, seek medical attention immediately, as the venom could be life threatening [source: Mayo Clinic].
Most poisonous snakes in the United States can be identified by the following characteristics:
- Slit eyes. The only exception is the coral snake.
- Triangle-shaped head
- Depression between the eyes and the nostrils [source: Mayo Clinic]
In addition to these general characteristics, each type of snake has its own distinctive features.
- Copperheads range in color from red to gold, with hourglass shapes on their bodies [source: CDC]. Young copperhead snakes have a tail with a bright yellow tip. These snakes can grow to 24 to 40 inches (61 to 102 centimeters) long, and are usually found in the Eastern United States [source: Andrews, Willson].
- Coral snakes have colorful red, yellow, and black rings, with the red and yellow rings touching each other. These snakes are usually slender and about 18 to 30 inches (46 to 76 centimeters) long, although they are sometimes a bit longer [source: Barrentine]. Unlike the other venomous snakes, coral snakes don't have slit eyes [source: Mayo Clinic]. These snakes can be found in the Southern United States [source: CDC].
- Rattlesnakes are the most common type of poisonous snake, and can be found all over the United States. There are 32 different types of rattlesnakes, all with their own identifying features. One thing all rattlesnakes have in common is a tail that makes a rattling sound when the snake feels threatened [source: CDC].
- Water moccasins, also known as cottonmouths, can be totally brown or black, or can have yellow cross bands. Younger snakes are usually more colorful, and sometimes have a yellow-tipped tail. These snakes can grow quite large. Adult water moccasins are often 24 to 48 inches (61 to 122 centimeters) long, and are sometimes even longer [source: Andrews]. Water moccasins can be found in the Southeastern United States, near rivers and lakes [source: CDC].
To answer this question first we need to discuss what makes beer fizzy and how a head forms.
Most beers are carbonated with carbon dioxide (CO2). When the beer is in the can some of this CO2 is dissolved in the beer and some is at the top of the can. The CO2 that is dissolved in the beer is what makes it fizzy. When the can is closed the pressure inside is higher than the pressure outside, so that when you open the can the sudden drop in pressure and the agitation of pouring causes some of the CO2 to bubble out of solution, forming a head on your beer.
A stout like Guinness has a creamier, longer lasting head than a canned lager beer. In addition, Guinness is less fizzy than a regular lager beer. Guinness is canned with a mixture of carbon dioxide and nitrogen. Nitrogen is not absorbed into the beer nearly as well as carbon dioxide, so even though a can of Guinness may be at the same pressure as a can of lager, it contains less CO2 (and is therefore less fizzy) because the nitrogen makes up some of the pressure.
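Henry's law makes this concrete: the dissolved concentration of a gas is proportional to its partial pressure, with a gas-specific constant that is roughly fifty times larger for CO2 than for nitrogen in water. The sketch below uses approximate room-temperature textbook values, purely for illustration; they are not brewery figures.

```python
# Approximate Henry's-law constants for water near 25 °C, in mol/(L*atm).
# Illustrative textbook-style values, not brewery data.
K_H = {"CO2": 3.3e-2, "N2": 6.1e-4}

def dissolved_mol_per_liter(gas: str, partial_pressure_atm: float) -> float:
    """Henry's law: concentration = K_H * partial pressure."""
    return K_H[gas] * partial_pressure_atm

for gas in K_H:
    print(f"{gas}: {dissolved_mol_per_liter(gas, 1.0):.5f} mol/L at 1 atm")
# CO2 dissolves roughly 50x more than N2 at the same partial pressure,
# so nitrogen props up the can pressure while adding almost no fizz.
```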
Because a beer like Guinness contains less dissolved CO2, if you poured it from a can with no widget, the head would not be very thick, because most of the CO2 would stay dissolved.
The purpose of the widget is to release the CO2 from some of the beer in the can to create the head. The widget is a plastic, nitrogen-filled sphere with a tiny hole in it. The sphere is added to the can before the can is sealed. It floats in the beer, with the hole just slightly below the surface of the beer.
Just before the can is sealed a small shot of liquid nitrogen is added to the beer. This liquid nitrogen evaporates during the rest of the canning process and pressurizes the can. As the pressure increases in the can, beer is slowly forced into the sphere through the hole, compressing the nitrogen inside the sphere.
When you open the can, the pressure inside immediately drops, and the compressed gas inside the sphere quickly forces the beer out through the tiny hole into the can. As the beer rushes through the tiny hole, the agitation causes the CO2 that is dissolved in the beer to form tiny bubbles that rise to the surface of the beer. These bubbles help form the head.
Glossary of Astronomical Terms
This glossary is intended to grow as time permits. If you have any requests for inclusion, suggestions for improvement or clarity of explanation, or corrections, please get in touch. In the meantime, you may find what you need in the other tutorials on this web site, where many of the terms below are expanded upon in greater detail and context.
Where a term is a combination of words, it will normally be indexed under the first word of the term. For example, Apparent Magnitude is indexed as Apparent Magnitude, not Magnitude, Apparent.
Aberration of Starlight The apparent displacement of a star's position as a consequence of Earth's motion through space and the finite speed of light.
Ablation The vaporisation of the surface layers of a body entering the atmosphere as a consequence of the heating that results from the compression of air ahead of it.
Absolute Magnitude The apparent magnitude that an object would possess if it were placed at a distance of 10 parsecs from the observer. In this way, absolute magnitude provides a direct comparison of the brightness of stars.
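In practice this definition reduces to the standard distance-modulus relation, M = m - 5 log10(d / 10 pc); the snippet below is a quick illustration using a hypothetical star.

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """M = m - 5*log10(d / 10): the magnitude an object would have
    if moved to the standard distance of 10 parsecs."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# Hypothetical star: apparent magnitude 7.0 observed from 100 parsecs.
print(absolute_magnitude(7.0, 100.0))   # -> 2.0
```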
Achromatic Literally "no colour". A lens combination in which chromatic aberration is corrected by bringing two colours to the same focus.
Airy disc The bright central part of the image of a star. It is surrounded by diffraction rings and its size is determined by the aperture of the telescope. About 85% of the light from the star should fall into the Airy disc.
Albedo The proportion of incident light which a body reflects in all directions. The albedo of Earth is 0.36, that of the Moon is 0.07 and that of Uranus is 0.93. The true albedo may vary over the surface of the object so, for practical purposes, the mean albedo is used.
Altazimuth Mount A mounting in which the axes of rotation are vertical and horizontal, i.e. in altitude and azimuth. An altazimuth mount requires motion of both axes to follow an astronomical object, but is simpler to make than an equatorial mount and can, in some forms, be held together by gravity.
Altitude The angle of a body above or below the plane of the horizon; negative altitudes are below the horizon.
Analemma The lemniscate-shaped form that results from plotting the position of the Sun at the same time every day.
Anomaly The angle at the Sun between a planet and its perihelion.
Ansae Literally handles. Originally a description of the appearance of Saturn's rings before they were recognised as being a ring system. Now used to describe (i) the extension of Saturn's rings outside the disc of the planet, and (ii) extensions from the central star of some planetary nebulae (due to bipolar outflow of material).
Apastron The position in an orbit about a star at which the orbiting object is at its greatest distance from the star.
Aphelion The position in a heliocentric orbit at which the orbiting object is at its greatest distance from the Sun.
Apochromatic A lens combination in which chromatic aberration is corrected by bringing three colours to the same focus. Some manufacturers use the term to describe achromatic doublets whose false colour is approximately equivalent to that of an apochromatic triplet lens.
Apogee The position in a geocentric orbit at which the orbiting object is at its greatest distance from Earth.
Apparent Magnitude The brightness of a body, as it appears to the observer, measured on a standard magnitude scale. It is a function of the luminosity and distance of the object, and the transparency of the medium through which it is observed.
Apsides The points where the major axis of an elliptical orbit meets the orbital path. The periapse (or pericentre) is the point of closest approach to the primary body; the apoapse (or apocentre) is the point of greatest distance.
Arcminute One sixtieth of a degree.
Arcsecond The second division of a degree of arc. One sixtieth of an arcminute. (1/3600th of a degree.)
Astigmatism An optical aberration resulting from unequal magnification across different diameters.
Astronomical Twilight When the centre of the Sun is between 12° and 18° below the horizon; faint stars become visible.
Astronomical Unit (AU) The mean distance from the Earth to the Sun, i.e. 149,597,870 km or 499.005 light seconds.
Attitude The orientation of a spacecraft or satellite with respect to its direction of motion.
Autoguider A CCD that is optically attached to a guidescope or off-axis guider and electronically attached to the control of the telescope mount. It monitors the position of a guide object on the CCD array and adjusts the telescope's drives so as to keep the object in the same position, thus correcting for any errors in the drive or in polar alignment. It enables long-exposure photography or imaging through the main OTA without the astronomer having to make manual corrections to the drive in response to what he sees in a guidescope.
Barlow lens A diverging lens which has the effect of increasing (usually doubling) the effective focal length of the telescope.
Bolometric Magnitude The magnitude corresponding to the total radiation, at all wavelengths, received from an object.
Catadioptric A telescope whose optics, not including the eyepiece, consists of both lenses and mirrors. The most common examples of these are the Schmidt-Cassegrain telescopes, whose "lens" is an aspheric corrector plate, and the Maksutov-Cassegrain telescopes, whose "lens" is a deeply curved meniscus.
Celestial Co-ordinates A system by which the position of a body on the celestial sphere is plotted with reference to a reference plane and a reference direction. For more detail, see the tutorial on positional astronomy. The four systems in use are Ecliptic Co-ordinates, Equatorial Co-ordinates, Galactic Co-ordinates, and Horizon Co-ordinates.
Celestial Sphere The projection of space and the objects therein onto an imaginary sphere surrounding the Earth and centred on the observer.
Central Meridian The imaginary line through the poles of a planet that bisects the planetary disc.
Chromatic Aberration An aberration of refractive optical systems in which light is dispersed into its component colours, resulting in false colour in the image.
Circumpolar An object that does not set from its observer's latitude.
Civil Twilight When the centre of the Sun is less than 6° below the horizon; normal daylight activities are possible.
Collimation The bringing of the optical components of a telescope into correct alignment.
Coelostat A device, usually consisting of two mirrors, that is designed so as to reflect the light from a celestial object into a fixed instrument, where it forms a non-rotating image.
Coma (i) The matter surrounding the nucleus of a comet; it results from the evaporation of the nucleus. (ii) An optical aberration in which stellar images are fan-shaped, similar to comets.
Conjunction There are at least three definitions of conjunction. Bodies are said to be in conjunction when they have the same ecliptic longitude (this is the strict definition) or when they have the same Right Ascension or when they are at their closest (this is strictly an appulse). Planets are said to be "at conjunction" when they are in conjunction with the Sun. (See diagram.) For extended bodies (e.g. Sun, Moon, planets), the body's position is taken to be its centre.
Culmination An object culminates when it reaches its greatest and least altitudes (upper culmination and lower culmination respectively). For non-circumpolar objects, the lower culmination is below the horizon. Most objects (the Moon sometimes being a notable exception) culminate when they reach the observer's meridian.
Dichotomy When the phase is exactly 50%.
Diffraction limited A measure of optical quality in which the performance is limited only by the size of the theoretical diffracted image of a star for a telescope of that aperture.
Direct motion Another term for prograde motion.
Dobsonian Named for John Dobson, who originated the design. An altazimuth mount, usually constructed of plywood or MDF, suited to home construction. Also refers to a telescope so mounted.
Eccentricity The eccentricity of an orbit is a measure of its departure from a circle. Elliptical orbits have an eccentricity >0 and <1, parabolic paths have an eccentricity =1, and hyperbolic paths have an eccentricity >1.
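As a standard aside (not part of the original entry): for an ellipse with semi-major axis a and semi-minor axis b,

$$ e = \sqrt{1 - \frac{b^2}{a^2}}, $$

so e = 0 corresponds to a circle.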
Eclipse An alignment of two bodies with the observer such that either the nearer body prevents the light from the further body from reaching the observer (strictly speaking, these are occultations), e.g. solar eclipse or eclipsing binary stars, or when one body passes through the shadow of another, e.g. lunar eclipse, eclipses of Jovian satellites.
Ecliptic The apparent path of the Sun on the celestial sphere. It intersects the celestial equator at the equinoxes. It is so named because, when the Moon is on the ecliptic, solar and lunar eclipses can occur.
Ecliptic Co-ordinates A system of celestial co-ordinates that uses the ecliptic as the reference plane and the First Point of Aries as the reference direction. The co-ordinates are given as ecliptic latitude (β) and ecliptic longitude (λ). (These are also called celestial latitude and celestial longitude.)
Elongation The angular distance between the Sun and any other solar system body, or between a satellite and its parent planet. The greatest elongation of an inferior planet is its maximum angular distance from the Sun; at this time the planet sets (greatest elongation east) or rises (greatest elongation west) at the greatest time from sunset or sunrise. (See diagram.) For extended bodies (e.g. Sun, Moon, planets), the body's position is taken to be its centre.
Equatorial Co-ordinates A system of celestial co-ordinates that uses the celestial equator as the reference plane and the First Point of Aries as the reference direction. The co-ordinates are given as Right Ascension (RA) and Declination (Dec).
Equatorial Mount A mounting in which one of two mutually perpendicular axes is aligned with Earth's axis of rotation, thus permitting an object to be tracked by rotating this axis so that it counteracts Earth's rotation.
Equinox Literally "equal night". It refers to the time of year when day and night are of equal length. (i) The positions where the centre of the Sun crosses the celestial equator. (ii) The dates when the declination of the Sun is zero (i.e. when it is on the celestial equator).
Escape Speed (Escape Velocity) It is the speed at which an object on the surface of a body must be propelled in order not to return to that body under the influence of their mutual gravitational attraction. Alternatively, it may be defined as the speed required to propel an object on the surface of a body into a parabolic trajectory about that body.
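For reference, the standard expression (not quoted in the original entry) for a body of mass M and radius r is

$$ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}, $$

which evaluates to about 11.2 km/s at Earth's surface.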
Exit pupil The position of the image of the objective lens or primary mirror formed by the eyepiece. It is the smallest disc through which all the collected light passes and is therefore the best position for the eye's pupil.
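A handy standard relation (not part of the original entry): the exit-pupil diameter equals the aperture divided by the magnification,

$$ d_{\mathrm{exit}} = \frac{D}{M}, $$

so a 200 mm telescope used at 100x gives a 2 mm exit pupil.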
Extinction Loss of light from an object as a consequence of absorption or scattering by an intervening medium. An example is the atmospheric extinction of light from stars near the horizon.
Eye ring An alternative name for the exit pupil.
Faculae Unusually bright spots on the Sun's surface.
Finder A small telescope, ideally of wide field of view, that is fixed to the main telescope in order to facilitate the finding of objects.
First Point of Aries (FPA) The Vernal Equinox point, i.e. that where the centre of the Sun, moving northwards, crosses the equator. It is the reference direction for the equatorial system of co-ordinates.
Focal length The distance from the centre of a lens or mirror to its point of focus.
Focal plane The plane (usually this is actually the surface of a sphere of large radius) where the image is formed by the main optics of the telescope. The eyepiece examines this image.
Focuser The part of the telescope which varies the optical distance between the objective lens or primary mirror and the eyepiece. This is usually achieved by moving the eyepiece in a drawtube, but in some catadioptric telescopes it is the primary mirror that is moved.
Fork mount A mount where the telescope swings in declination or in altitude between two arms. It is suited only to short telescope tubes, such as Cassegrains, and variations thereof. It requires a wedge to be used equatorially.
Galactic Co-ordinates The system of celestial co-ordinates in which the galactic plane is the reference plane and the direction of the galactic centre is the reference direction. The positions are given in galactic latitude and galactic longitude.
Galilean Moons The four Jovian moons first observed by Galileo ( Io, Europa, Ganymede and Callisto). They are observable with small amateur telescopes.
Geosynchronous Orbit The orbit of a satellite in which the orbital period of the satellite is equal to Earth's period of rotation. If the orbit is in the equatorial plane, the satellite will be geostationary; if the orbit is inclined to the equatorial plane the satellite will appear to trace a lemniscate in the sky.
German Equatorial Mount (GEM) A common equatorial mount for small and medium sized amateur telescopes, suited to both long and short telescope tubes. The telescope tube is connected to the counter-weighted declination axis, which rotates in a housing that keeps it orthogonal to the polar axis. Tracking an object across the meridian requires that the telescope be moved from one side of the mount to the other, which in turn requires that both axes are rotated through 180°, thus reversing the orientation of the image. This is not a problem for visual observation, but is a limitation for astrophotography.
Gnomon (i) The "pointer" in a sundial. (ii) Vertical stick, rod or pillar, the length and direction of whose shadow indicates the altitude of the Sun and the time of day.
Granulation The "grains of rice" appearance of the Sun's surface, which results from convection cells within the Sun.
Great circle A circle on the surface of a sphere formed by the intersection of the sphere with a plane that passes through its centre. A great circle path is the shortest distance on a spherical surface between two points.
Horizon Co-ordinates The system of celestial co-ordinates in which the observer's horizon is the reference plane and the north point is the reference direction. The positions are given in altitude and azimuth.
Inferior Planets Planets (i.e. Mercury and Venus) whose orbits lie inside Earth's orbit.
Integrated Magnitude The magnitude which would apply if all the light energy from an extended object was coming from a point source.
Kepler's Laws The three laws of planetary motion formulated by Johannes Kepler. For more detail see the tutorial on the Heliocentric Revolution.
Limb The edge of the disc of a celestial body.
Luminosity The amount of energy radiated into space per second by a star. The bolometric luminosity is the total amount of radiation at all frequencies; sometimes luminosity is given for a specific band of frequencies (e.g. the visual band).
Maksutov, Maksutov-Cassegrain, Maksutov-Newtonian Forms of catadioptric telescope.
Magnification The increase in the angle subtended by an object. See the tutorial on telescope function.
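For a telescope used visually, the standard relation (not quoted in the original entry) is

$$ M = \frac{f_{\mathrm{objective}}}{f_{\mathrm{eyepiece}}}, $$

e.g. a telescope of 1000 mm focal length with a 10 mm eyepiece magnifies 100 times.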
Micrometer A device, of which various types exist, that is used in a telescope for measuring small angular distances between objects.
Minor Planet Another term for an asteroid.
Nautical Twilight When the centre of the Sun is between 6° and 12° below the horizon; the marine horizon becomes invisible.
Near-Earth Asteroid (NEA) An asteroid whose orbit brings it close to Earth's orbit.
Occultation An alignment of two bodies with the observer such that the nearer body prevents the light from the further body from reaching the observer. The nearer body is said to occult the further body. A solar eclipse is an example of an occultation.
Opposition The position of a planet such that Earth lies between the planet and the Sun. Planets are closest to Earth at opposition, which thus offers the best opportunity for observation. (See diagram.) For extended bodies (e.g. Sun, Moon, planets), the body's position is taken to be its centre.
Orbital Elements The six numerical values that completely define the orbit of one body about another of known mass. They are the semi-major axis (a), the eccentricity (e), the inclination to the reference plane (i), the mean anomaly (M), the argument of the pericentre (ω), and the longitude of the ascending node (Ω). The elements vary with time as a consequence of perturbations of other bodies, so their epoch is important. For comets and asteroids, the perihelion conditions are often of interest, so the date of perihelion (T) and perihelion distance (q) are usually used instead of M and a. (At T, M=0; q = a(1-e) )
Osculating Orbit The orbit that a body would follow if the only gravitational force acting on it was that of the primary body, i.e. if its motion was not perturbed by the presence of other bodies.
OTA Abbreviation for Optical Tube Assembly. It is normally considered to consist of the tube itself, the focuser and the optical train from the objective lens (refractor), primary mirror (reflector), or corrector plate (catadioptrics) up to, but not including, the eyepiece.
Penumbra Literally "next to the umbra". (i) The shadow that results when only part of the bright object is occulted; e.g. an observer will see a partial eclipse when he is in the penumbra of the shadow of the moon. (ii) The lighter area surrounding a sunspot.
Periastron The position in an orbit about a star at which the orbiting object is at its least distance from the star.
Perigee The position in a geocentric orbit at which the orbiting object is at its least distance from Earth.
Planisphere The projection of a sphere (or part thereof) onto a plane. It commonly refers to a simple device which consists of a pair of concentric discs, one of which has part of the celestial sphere projected onto it, the other of which has a window representing the horizon. Scales about the perimeters of the discs allow it to be set to show the sky at specific times and dates, enabling its use as a simple and convenient aid to location of objects.
Position Angle (PA) Equivalent to a bearing on Earth. It is the angle from one position (X) to another (P) measured from the direction of the NCP (N) through east (i.e. anticlockwise in the sky); i.e. it is the angle NXP. It could refer to the position of one object with respect to another, the direction of proper motion of a star, or the position on the limb of the Moon where an occultation occurs.
Precession A rotation of the direction of the axis of rotation. Normally refers to the precession of the equinoxes, a consequence of the effect of the Sun's gravity on Earth's equatorial bulge. Earth's axis of rotation precesses with a period of about 25,770 years, during which time the equinoxes make a complete revolution about the celestial equator. Because the Vernal Equinox is the reference direction for the equatorial co-ordinate system, the co-ordinates of "fixed" objects change with time and must therefore be referred to an epoch at which they are correct.
Primary body The body that is being orbited. E.g. the Sun is the primary of the orbits of the planets and comets. With respect to multiple star systems, it is the most massive star.
Prograde The apparent eastward motion of a planet with respect to the stars.
Proper motion The apparent motion of a star with respect to its surroundings.
Rayleigh criterion (Rayleigh limit) Lord Rayleigh, a 19th century physicist, showed that a telescope optic would be sensibly indistinguishable from a theoretical perfect optic if the light (strictly, the wavefront) deviated from the ideal condition by no more than one quarter of its wavelength.
Red Shift The lengthening of the wavelength of electromagnetic radiation resulting from one or more of three causes: Doppler redshift: resulting from bodies moving away from each other in space. Gravitational redshift: resulting from strong gravitational fields. Cosmological redshift: resulting from the expansion of space-time itself.
Reflector A telescope whose optics, apart from the eyepiece, consist of mirrors.
Refractor A telescope whose optics consist entirely of lenses.
Resolution A measure of the degree of detail visible in an image. It is normally measured in arcseconds.
Reticle A system of lines and/or concentric circles at the focal plane of a telescope, used for positioning or guiding the telescope, or polar-aligning an equatorial mount. It is usually incorporated into an eyepiece and may be illuminated in order to render the lines visible against a dark background sky.
Retrograde Apparent westward movement of a planet with respect to the stars.
Schmidt, Schmidt-Cassegrain, Schmidt Newtonian Forms of catadioptric telescope.
Scintillation The twinkling of stars, resulting from atmospheric disturbance.
Secondary Abbreviation for secondary mirror. Small mirror that directs the light from the primary mirror to the eyepiece.
Semi-major Axis Half the distance across an ellipse measured along a line through its foci.
Solstice Literally "sun still". It refers to the apparent standstill of sunrise and sunset points at midsummer and midwinter. (i) The most southerly and northerly declinations of the Sun. (ii) The date on which the Sun attains its greatest declination.
Spherical Aberration An optical aberration in which light from different parts of a mirror or lens is brought to different foci.
Superior Planets Those planets whose orbits lie outside Earth's orbit.
Terminator The boundary of the illuminated part of the disc of a planet or moon.
Topocentric Referred to a position on the surface of the Earth (cf geocentric, which is referred to the centre of the Earth.)
Transit (i) The passage of Mercury or Venus across the disc of the Sun (ii) The passage of a planet's moon across the disc of the parent planet (iii) The passage of a planetary feature (such as Jupiter's Great Red Spot) across the central meridian of the planet. (iv) The passage of an object across the observer's meridian (see culmination). In the latter case, for extended bodies (e.g. Sun, Moon, planets), the body's position is taken to be its centre.
Twilight The period of decreasing sky brightness after sunset, or of increasing sky brightness before sunrise. There are three definitions of twilight: Civil Twilight, Nautical Twilight, and Astronomical Twilight. Twilight lasts longer in higher latitudes. For more information, see the tutorial on Twilight.
Worm drive Probably the most common drive on equatorial mounts. It consists of a spirally cut cylinder (the "worm") which rotates longitudinally such that its thread engages with the specially shaped teeth on the circumference of a disc (the "worm wheel"), which in turn drives the shaft of the mount.
Zenithal Hourly Rate (ZHR) The theoretical hourly rate of meteors which would be observed at the peak of a shower, by an experienced observer, with the radiant at the zenith, under skies with a limiting naked eye magnitude of 6.5.
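A commonly used correction formula (a standard form, not quoted in the original entry) converts an observed hourly rate HR into the ZHR using the shower's population index r, the limiting magnitude lm and the radiant altitude h_R:

$$ \mathrm{ZHR} = \mathrm{HR} \cdot \frac{r^{\,6.5 - lm}}{\sin h_R} $$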
Thanks are due to the following for suggestions and corrections:
Martin Frey (who did some of the diagrams)
Dr John Stockton
On August 3, 2016, seven kilometers above Alaska’s Aleutian Islands, a research plane captured something mysterious: an atmospheric aerosol particle enriched with the kind of uranium used in nuclear fuel and bombs.
It’s the first time scientists have detected such a particle just floating along in the atmosphere in 20 years of plane-based observations.
Uranium is the heaviest element to occur naturally on Earth's surface in an appreciable amount. Normally it occurs as the slightly radioactive isotope uranium-238, but a small amount of uranium-235, the kind humans make bombs and fuel out of, also occurs in nature. Even uranium-238 is rare to find floating above the Earth in the atmosphere. But scientists have never before spotted enriched uranium (uranium with an artificially elevated fraction of uranium-235) among the millions of atmospheric particles captured by research planes.
“One of the main motivations of this paper is to see if somebody who knows more about uranium than any of us would understand the source of the particle,” scientist Dan Murphy from NOAA told me. After all, “aerosol particles containing uranium enriched in uranium-235 are definitely not from a natural source,” he writes in the paper, published recently in the Journal of Environmental Radioactivity.
Murphy has led flights around the world sampling the atmosphere for aerosols. These tiny particles can come from pollution, dust, fires, and other sources, and can influence things like cloud formation and the weather. The researchers spotted the mystery particle on a flight over Alaska using their “Particle Analysis by Laser Mass Spectrometry” instrument. They considered that perhaps the signature came from something weird, but the evidence seems to point directly at enriched uranium.
They were not intending to look for radioactive elements. “The purpose of the field campaign was to obtain some of the first global cross-sections of the concentration of trace gases and of dust, smoke, and other particles in the remote troposphere over the Pacific and Atlantic Oceans,” according to the paper.
But where the particle came from is a mystery. It’s pretty clear it came from recently made reactor-grade uranium, the authors write (aka, not from Fukushima or Chernobyl). Perhaps from burnt fuel contaminated with uranium, they thought. They tried to trace it to a source using the direction of the wind—but their best estimate pointed vaguely to Asia. Higher probability areas include some parts of China, including its border with North Korea, and parts of Japan.
You don’t need to worry about atmospheric radiation from just one particle, though. “It’s not a significant amount of radioactive debris by itself,” Murphy said. “But it’s the implication that there’s some very small source of uranium that we don’t understand.”
One author, Thomas Ryerson from NOAA, told me that he needs other scientists’ help. “We’re hoping that someone in a field that’s not intimately associated with atmospheric chemistry can say ‘a-ha!’ and give us a call.”
Correction: The image previously used was a WB-57, but the plane actually used to find the particle was a DC-8. Sorry!
Deposit feeding is one of five feeding modes used by organisms to obtain food, the others being fluid feeding, filter feeding, bulk feeding, and phagocytosis. Deposit feeders obtain food particles by sifting through soil, vaguely analogous to the way that filter feeders get food by filtering water. Prominent examples are earthworms, other annelids such as polychaete worms, and fiddler crabs. Insects and their larvae, which may burrow through living or dead plants and animals, or feces, are also considered deposit feeders.
Deposit feeding is a feeding strategy that only works in fertile areas with a lot of preexisting life. The top layer of soil is targeted, typically within six inches of the surface, as this is the soil most likely to contain food particles that haven't been completely broken down yet. Biologists call these food particles detritus. After detritus has been broken down to a chemically neutral state, it becomes known as humus. Humus has a black color due to its high carbon content.
Among deposit feeders, precise strategies vary. Earthworms are unique among deposit feeders, and among animals in general, in having an oral cavity that connects directly to their digestive system without any intermediary. Charles Darwin lauded earthworms for their benefits to the soil, writing, "It may be doubted whether there are many other animals which have played so important a part in the history of the world, as have these lowly organized creatures." By breaking detritus into humus, breaking soil down into tiny pieces that maximize available nitrogen and phosphates for plants, and aerating the ground by poking it full of tunnels, earthworms have a threefold benefit for the earth and its plants.
Besides earthworms, terrestrial deposit feeding is practiced by fiddler crabs. These crabs pick up little balls of dirt, raise them to their mouths, and pick out any edible material, including colonies of microbes. The spheres are then discarded as quickly as they were picked up. These little dirt spheres can be found wherever fiddler crabs dwell.
There are marine species that practice deposit feeding as well, which burrow through the ooze on the ocean floor. These include polychaete worms, some bivalves, and giant protozoa called xenophyophores. Marine deposit feeders are more poorly understood due to their remote location and fragility when brought to the surface.
Posted: Apr 11, 2014
New physical phenomenon on nanowires seen for the first time
(Nanowerk News) Very tiny wires made of semiconducting materials – more than a thousand times thinner than a human hair – promise to be an essential component for the semiconductor industry. Thanks to these tiny nanostructures, scientists envision not only a more powerful new generation of transistors, but also the integration of optical communication systems within the very same piece of silicon, making it possible to transfer data between chips at the speed of light.
But for optical communication to happen, it is essential to convert the electrical information used in the microprocessor into light, by using light emitters. On the other end of the optical link, one needs to translate the information contained in the stream of light into electrical signals by using light detectors. Current technologies use different materials to realize these two distinct functions – silicon or germanium for light detection, and materials combining elements from columns III and V of the periodic table for light emission. However, this might change soon thanks to a new discovery.
False-colour scanning electron micrograph of a nanowire strain device.
Using this new physical phenomenon, scientists might be able to integrate the light emitter and the detector functions in the very same material. This would drastically reduce the complexity of future silicon nanophotonic chips.
IBM scientist Giorgio Signorello explains, "When you pull the nanowire along its length, the nanowire is in a state that we call “direct bandgap” and it can emit light very efficiently; when instead you compress the length of the wire, its electronic properties change and the material stops emitting light. We call this state “pseudo-direct”: the III-V material behaves similarly to silicon or germanium and becomes a good light detector."
IBM Fellow Heike Riel comments, “These are unique and surprising properties and they all come from the fact that the atoms are located at very special positions within the nanowire. We call this crystal structure “Wurtzite”. This structure is possible only because the nanowire dimensions are so small. You cannot achieve the same properties at dimensions visible to the eye. This is a great example of the power of nanotechnology.”
These remarkable properties might also find interesting applications outside the field of optical communication.
Source: IBM Zurich
The cockroaches are an ancient group, dating back at least as far as the Carboniferous period, some 320 million years ago. Of the roughly 4,600 cockroach species, only about 30 are associated with human habitats. In Australia, three species are most common and well known as household pests: German cockroaches, American cockroaches and Australian cockroaches.
The German cockroach is a small species of cockroach: adults grow up to 1.6 cm long. In colour it varies from light brown to almost black. Although it has wings, it cannot fly. The German cockroach has a very short breeding cycle and reproduces faster than any other residential cockroach. Each egg case may contain up to 40 eggs, and under favourable conditions it can take as little as 40 days to grow from egg to reproductive adult. This is why German cockroach populations increase incredibly fast in infested areas. German cockroaches are attracted particularly to meats, starches, sugars, and fatty foods. Where a shortage of foodstuffs exists, they may eat household items such as soap, glue, and toothpaste. In famine conditions, they turn cannibalistic, chewing at each other’s wings and legs. In a domestic property with a German cockroach infestation, they can easily be found in kitchens close to the food source; they can also be found in bathrooms and laundry rooms, where they can find water to drink. In a heavy infestation, they can even be found in bedrooms as they forage for food and look for shelter. German cockroaches are the most common and troublesome pest in commercial restaurants.
The American cockroach is the largest species of common cockroach; it can grow up to 53 mm long. It is reddish brown and has a yellowish margin on the body region behind the head. American cockroach nymphs emerge from egg cases in 6–8 weeks and require 6–12 months to mature. American cockroaches eat a great variety of materials such as cheese, beer, tea, leather, bakery products, starch in book bindings, manuscripts, glue, hair, flakes of dried skin, dead animals, plant materials, soiled clothing, and glossy paper with starch sizing. They prefer fermenting foods. They have also been observed to feed upon dead or wounded cockroaches of their own or other species. The American cockroach can fly, and it is one of the fastest-running insects. They normally live in moist areas, but can survive in dry areas if they have access to water. These cockroaches are common in basements, crawl spaces, cracks and crevices of porches, foundations, and walkways adjacent to buildings. In residential areas outside the tropics these cockroaches live in basements and sewers, and may move outdoors into yards during warm weather.
The Australian cockroach is brown overall, with the tegmina having a conspicuous lateral pale stripe or margin, and the pronotum having a sharply contrasting pale or yellow margin. It looks similar to the American cockroach but is slightly smaller. It has a yellow margin on the thorax and yellow streaks at its sides near the wing base. It often lives around the perimeter of buildings. It appears to prefer eating plants more than its relatives do, but can feed on a wide array of organic (including decaying) matter.
When a flu outbreak was first detected in Mexico City in mid-March, locals thought it was just a late flu season. But when the disease quickly surged, the US Centers for Disease Control and Prevention and the World Health Organization discovered these cases to be a new strain of H1N1—first known as “Swine Flu.” What sent people running to their doctors for checkups and to stores for antibacterial gel was fear of the unknown.
At first it seemed there was reason for great concern. The disease was spreading rapidly. Unlike the common flu, young and healthy people were dying. We felt powerless to stop it. A pandemic seemed imminent.
As of this writing there are over 5,000 confirmed cases of H1N1 in the United States and over 10,000 worldwide. These have resulted in 87 deaths. A map showing the states and countries affected, and the number of cases, is being continually updated at http://www.msnbc.msn.com/id/30435064. As high as the confirmed numbers look, the death rate is low compared with deaths from the regular flu, estimated at 250,000 to 500,000 annually.1
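A rough worked check, using only the figures quoted above (my arithmetic, not the article's):

$$ \frac{87\ \text{deaths}}{10{,}000\ \text{cases}} \approx 0.9\%, $$

a naive case-fatality ratio that likely overstates the true risk, since mild, unconfirmed cases never enter the denominator.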
Although thankfully it hasn’t reached the pandemic level, the H1N1 virus is here to stay for a while. So here are some simple steps we can all take to help protect ourselves:
1. Cover your nose and mouth with a tissue when you cough or sneeze. Throw the tissue in the trash after you use it.
2. Wash your hands often with soap and water for 10-15 seconds. Wash especially after you cough or sneeze. Alcohol-based hand cleaners are also effective. If using a gel, rub until your hands are dry.
3. Avoid touching your eyes, nose or mouth. Germs spread this way.
4. Try to avoid close contact with sick people.
5. If you are sick, stay home for 7 days after your symptoms begin or until you have been symptom-free for 24 hours, whichever is longer. This is to keep from infecting others and spreading the virus further.2
In addition, we need strong immune systems to avoid contracting this virus, or to fight it should we become sick. Things that weaken our immune systems seem to have the word “lack” in them: lack of sleep, lack of exercise, lack of a balanced nutritious diet, lack of water, and lack of fresh air and sunshine. But “excesses” can also weaken our immune systems: excess sugar, excess stress, and excess refined foods.
This most recent scare causes us to stop and think about our own health. But we shouldn’t want good health merely out of fear of the H1N1 virus. We should want good health because we wish to live life to its fullest. Life is too short to be holding on to bad health habits that leave us feeling run-down and tired. Let’s take care of ourselves so that we can enjoy every moment of every day.
Subject: Nomenclature: Geography of Israel Puzzle Map
4+ (note: could be offered as sensorial for 2½+)
(four gospels), Bible maps, handout map; RPC p. 113, SPSA Spring 2007
Liturgical Time: anytime; before Advent
1. Jesus is real; he lived, healed, traveled, taught, died and rose again in the land of Israel.
2. Jesus was baptized (by John) in the Jordan.
3. Jerusalem is the most important city for the people of Israel.
1. to name and identify the regions (Galilee, etc.) and waterways (Jordan, etc.) in the land of Israel and provide a framework for important events in the life of Jesus.
1. control of movement as an aid to prayer.
2. to re-announce the paschal mystery.
3. to prepare for later geography work.
4. to provide a framework for listening to the Gospels.
5. to know the historical reality of Jesus more fully and deepen understanding of his ministry on earth.
6. to foster the child’s growing sense of time and place.
7. to review names and importance of Nazareth, Bethlehem & Jerusalem.
Description of Materials:
wooden puzzle with box and lid (the lid is optional); puzzle pieces have dowels for easy removal.
a. Mediterranean Sea
b. Sea of Galilee
c. Jordan River
d. Dead Sea
Removable for tracing, painted pieces per control map (muted colors).
control map same size as puzzle (laminated) with the regions and waterways colored and labeled.
regions colored and labeled.
c. Samaria –green
d. Perea –yellow
e. waterways –
printing on map and flags control identical with printing on box with flags of regions and waterways.
each flag on stick (toothpick) in clay base, painted buff color, flag dot, with corresponding dot to waterway.
(Alternatively you can just have labels)
8 ½ x 11 blank maps (same as top map).
crayons or pencils, appropriate colors.
9 x 12 paper or what fits puzzle box for tracing.
Lay out mat. Place LOI Puzzle map on left of mat.
Invite children to gather at Land of Israel map
“You remember we talked about Jesus and where he walked”
“We saw the globe and flags?”
"We have a new way to look at Israel, the land that Jesus knew; it is a puzzle map."
“What do you recognize?” (water, different colors, different parts of LOI)
“All these colors show us different regions of Israel that have different names.”
Naming and Identifying:
“This part – the orange part – is Judea:
the largest and most important region. Here is Bethlehem
(where Jesus was born), Here is Jerusalem (where Jesus died,
Take out piece, place on mat
Point to Galilee, then give the descriptive words.
"This is Galilee – an angel appeared to Mary here."
Point to the Sea of Galilee – "Jesus fished."
Now remove the Galilee piece and the Sea of Galilee if it is a separate piece.
"This part is Samaria – Jesus walked through Samaria."
Remove the Samaria piece and put it on the mat.
"All along Samaria and Judea is the Jordan River – it empties into the Dead Sea, which is very salty; this is where Jesus was baptized. The Jordan River."
Remove River and place on mat.
"Across from Judea is Peraea – Jesus taught and healed in Peraea."
"Over here, all of this blue is the Mediterranean Sea – it was a rich resource for Israel. The Mediterranean is huge; this is only the part by Israel."
“We can put it back now”
"Judea, Galilee, Samaria."
Waterways – "Sea of Galilee, Jordan River, Dead Sea."
“I am going to take them out again and you can put it back together.”
2nd period lesson-
"I'll name places; you put them in."
"Can you put in the big piece, Judea?" ("Can you find Judea," "the orange one," etc.; waterways are easier to identify)
“Galilee where Mary lived?”
"Waterways; now Peraea, Samaria, where Jesus walked."
At a later time
"We are going to look at the puzzle map again. Now we have flags that we are going to put on the regions of the map."
“This is Judea, the orange region.”
Give the child a flag and have them place the flags on the map.
Invite the children to match the flags to the control map and place them. When they are done, read them to the child and celebrate their work.
Possible songs: Holy Ground; He's Got the Whole World in His Hands
Children’s Work with the Materials:
work with puzzle
make their own map
write in names on map
produce own artwork
(PhysOrg.com) -- The world's first X-ray laser, the Linac Coherent Light Source (LCLS), unveiled in 2009 at the Stanford Linear Accelerator Center in Palo Alto, California, has been undergoing testing by a group of physicists determined to find out how many of the photons it emits are synchronized. As they describe in their paper in Physical Review Letters, they have found that the X-ray radiation it produces is the most coherent ever measured.
Following on the heels of the maser, invented in the early 1950s and based on microwave radiation, researchers have searched for ways to make lasers with shorter and shorter wavelengths, with the hope that coherence could be improved. Coherence is a measure of how in sync the photons in a laser beam are; the ideal laser would be one in which all of the photons travel perfectly in sync with one another, but, at least thus far, that's not possible. This leaves researchers working to see how close they can get. The higher a laser's coherence, the more precisely its beam can be diffracted, which means it can be used to create sharper images of atomic structures.
To measure the coherence of the LCLS, the researchers shone the laser beam through two successive plates, each with a tiny hole in it, then measured the bands of dark and light produced at the other end; they found the contrast to be very high. Then, by slowly increasing the size of the hole, they were able to see the interference introduced by those photons that were not in sync with the others, causing a decrease in visibility. It is by measuring this decrease that the coherence of the beam can be determined. For the LCLS, it was shown to be 16.8 microns.
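To make the visibility measurement concrete, here is a minimal numerical sketch of a double-pinhole fringe pattern (the wavelength corresponds to the 780 eV photon energy quoted in the abstract below; the pinhole separation, detector distance and degree of coherence are assumptions chosen for illustration):

```python
import numpy as np

def fringe_visibility(intensity):
    """Visibility V = (I_max - I_min) / (I_max + I_min) of a fringe pattern."""
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

# Illustrative parameters (assumed, except the wavelength):
wavelength = 1.59e-9   # metres; corresponds to ~780 eV photon energy
d = 10e-6              # pinhole separation (assumed)
L = 1.0                # pinhole-to-detector distance (assumed)
gamma = 0.8            # modulus of the complex degree of coherence (assumed)

# Ideal two-pinhole fringe pattern on the detector; for partially
# coherent light the fringe visibility equals |gamma|.
x = np.linspace(-1e-3, 1e-3, 2001)                      # detector coordinate
I = 1.0 + gamma * np.cos(2 * np.pi * d * x / (wavelength * L))

print(f"measured visibility = {fringe_visibility(I):.2f}")  # recovers gamma = 0.80
```

For perfectly coherent light the fringes swing between full brightness and zero (visibility 1); partial coherence fills in the dark fringes, which is exactly the decrease the researchers measured.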
The team also tested the laser's monochromatic abilities, which is a way of saying they measured the coherence time of the laser. The coherence time is the time interval over which the wave remains predictable. To do this they examined the fringe patterns created when shooting the laser beam through the very tiny holes. For the LCLS they found the coherence time was 0.55 femtoseconds. The end result was that the majority (78%) of the power was contained in the beam's dominant mode.
This all means that researchers using such an X-ray laser will soon be able to more precisely understand the atomic structure of the materials they are working on, which should prove useful for pharmaceutical, archaeological and engineering projects.
More information: Coherence Properties of Individual Femtosecond Pulses of an X-Ray Free-Electron Laser, Phys. Rev. Lett. 107, 144801 (2011) DOI:10.1103/PhysRevLett.107.144801
Measurements of the spatial and temporal coherence of single, femtosecond x-ray pulses generated by the first hard x-ray free-electron laser, the Linac Coherent Light Source, are presented. Single-shot measurements were performed at 780 eV x-ray photon energy using apertures containing double pinholes in diffract-and-destroy mode. We determined a coherence length of 17 μm in the vertical direction, which is approximately the size of the focused Linac Coherent Light Source beam in the same direction. The analysis of the diffraction patterns produced by the pinholes with the largest separation yields an estimate of the temporal coherence time of 0.55 fs. We find that the total degree of transverse coherence is 56% and that the x-ray pulses are adequately described by two transverse coherent modes in each direction. This leads us to the conclusion that 78% of the total power is contained in the dominant mode.
Linac Coherent Light Source (LCLS): slacportal.slac.stanford.edu/s… c/Pages/Default.aspx
Visual information is essential in traffic: traffic lights tell us when to cross the street; zebra crossings visually mark street sections where car drivers have to pay special attention to pedestrians; children are taught to look to the left and to the right before crossing the street. Motorists are aware of the problems and hazards occurring in darkness, rain, snow or fog, when the range of sight is decreased. A lot of accident avoidance deals with the issue of seeing and being seen; e.g., the failure of motorists to detect and recognise motorcycles in traffic has been regarded as the predominant cause of motorcycle accidents.
Blind and visually impaired people have to manage their way through a world of traffic that was mainly created for people with full eyesight. According to statistics compiled by the National Center for Health Statistics of the US Centers for Disease Control and Prevention, as summarised by the American Foundation for the Blind, in 1996 there were approximately 1.3 million legally blind people in the United States, and 5.5 million elderly individuals are regarded as blind or visually impaired. Like others, these individuals benefit from barrier-free architecture.
The idea of barrier-free architecture is to provide structures that enable everyone, independently of his or her physical and mental condition, to enter buildings, to move within them, and to use technical devices or other devices of daily life (such as rest rooms or ticket machines) without major restrictions. Nowadays in western countries, the exclusion of handicapped people from everyday life's activities is prohibited, e.g., in the US by the 'Americans with Disabilities Act' (ADA) and in Germany by the 'Equality Law of Disabled Persons' ('Behindertengleichstellungsgesetz'). These rights were also brought up in a United Nations resolution (56/168). Two aspects of barrier-free architecture should be differentiated: one is the elimination of barriers; the second is the compensation of handicaps by appropriate design and implementation of technical devices. Tactile floor indicators are intended to compensate for the loss or impairment of vision by providing complementary haptic information.
Galaxies are among the most spectacular astrophysical objects and are essential building blocks of the Universe. They are characterised by a rich diversity in size, colour and morphology, depending both on their local environment and on their evolutionary past. Thus, galaxies provide us with invaluable clues about the large-scale properties of the Universe in which they are embedded. Equally importantly, they tell us about the physical processes which are responsible for star formation, giving us further insight into how our own galaxy, the Milky Way, was formed.
The figure on the right shows a beautiful HST picture of the M100 spiral galaxy, very similar to our Milky Way. From the so-called bulge (the bright central stellar concentration) spiral arms unfold; these are the places where new stars are born. Image credits: D. Hunter (Lowell Observatory) and Z. Levay (Space Telescope Science Institute)/NASA.
During the last decade a wealth of observational evidence (e.g. from the Hubble Space Telescope, the SDSS Survey, and the Spitzer Space Telescope, among others) has put the processes of galaxy formation and evolution in a completely new light. It has emerged that a variety of physical mechanisms, acting from very small scales, where for example stars are formed, up to intergalactic distances, where encounters with other galaxies may lead to major merger events, determine the properties of galaxies and their evolution in time. Hence, the study of such complex systems needs to be tackled with highly sophisticated numerical simulations that properly take into account the cosmological growth of these structures.
A group of researchers at the IoA are focusing on the so-called "damped Lyman alpha systems" -- galaxies in the process of formation which are not seen directly, but only in absorption when they lie on a direct line between us and a yet more distant luminous object. Understanding the nature of these galaxies can ultimately provide a more direct link between them and modern-day galaxies, and thus unveil crucial aspects of galaxy formation. In order to achieve this, some of the most advanced simulations of galaxy formation have been employed and compared in detail with observations.
The figure shows the distribution of the neutral hydrogen responsible for the damped Lyman alpha absorption around a forming galaxy, in the centre.
Figure credits: Andrew Pontzen, based on simulation data run at the Arctic Region Supercomputing Centre; thanks to Fabio Governato and the N-Body Shop at the University of Washington.
Remarkable observational evidence indicates that most if not all galactic bulges contain a supermassive black hole at their very core, including that of our own Milky Way. Moreover, there appears to be a crucial link between the properties of host galaxies and their central black holes, implying that the evolution of galaxies is intimately linked to the evolution of black holes, and vice versa. By employing large-scale cosmological simulations, in which both the formation and growth of black holes and galaxies are tracked self-consistently, researchers at the IoA are trying to understand in what way supermassive black holes affect their hosts, and whether their feedback processes indeed lead to the population of galaxies as we see it today.
The figure illustrates the relationship between black hole mass and stellar bulge velocity dispersion obtained from simulations, while the dashed line denotes the observational finding.
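The observed correlation is commonly parametrised as a power law (a standard form, not taken from the text above):

$$ M_{\mathrm{BH}} \propto \sigma^{\alpha}, \qquad \alpha \approx 4\text{--}5, $$

so a bulge with twice the velocity dispersion hosts a black hole roughly 16 to 32 times more massive.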
Figure credits: Debora Sijacki; the simulations were carried out at the Computing Centre of the Max-Planck Society in Garching, performed with the massively parallel TREESPH code GADGET-2 in collaboration with Volker Springel.
General Astronomy/First Steps into Space
The first man-made satellite to be launched into orbit was the Soviet unmanned satellite named Sputnik I. This began the "Space Race" between the United States and the Soviet Union. The Soviets achieved almost every milestone first (first satellite, first man in space, first woman in space, etc.), except for the first man on the Moon, which the United States achieved.
The modern world has undergone a profound transformation because of advancements in technology. Most activities carried out in it require technological devices in order to be successful. For instance, the current education system in colleges and high schools depends almost entirely on technology, as evidenced by the apparatus used in laboratories. Additionally, students and tutors rely greatly on technological devices, such as computers and smartphones, in learning. Thus, it is evident that the topic "Impacts of Technology on the New Generation" is an important one for research.
Technology and relationships
Currently, most children relate very well among themselves because of the spirit of togetherness nurtured by technology. Software such as computer games brings children from different backgrounds together. This is also evidenced by the views of most people we came across in our survey. According to most respondents, many children spend much of their time tweeting or texting. For instance, our survey showed that children aged between 12 and 17 sent over 100 text messages per day.
Technology and ways of thinking
Technology has the potential to change the way young children think and solve problems. It also has the potential to change the way peers, adults, and children interact with each other. This is evidenced by the views of the majority of people who took part in our survey. According to many respondents, the time children spend on computers can influence their mode of thinking and learning. For instance, the majority argued that children use visual cues to understand complicated things in life. Children with emotional disabilities have also been found to use computers with much ease.
Technology and ways of working
Compared to the old generation, the new generation has completely different ways of working. Unlike the old generation, the new generation loves teamwork. From our research, it is also evident that the new generation is more creative and quicker in thinking than the old generation, as evidenced by its invention of diverse technological devices. The majority of the young generation uses technology in almost all their activities in order to avoid spending much time in offices.
Technology and education
According to our survey, technology can hamper the attention of learners. Most of the people who participated in our survey argued that many students lose focus as a result of using devices such as computers and video games. According to their arguments, technology can impair an individual's behavior and brain.
[Survey response table: columns for the number of individuals strongly agreeing, slightly agreeing, strongly disagreeing, and slightly disagreeing; the data rows were not preserved in the source.]
Impacts of technology on jobs
Advancement in technology has contributed significantly to job destruction. According to our survey, the introduction of robots and software has led to the replacement of many workers in specific industries, such as automotive manufacturing. Additionally, technology may eliminate the need for many more jobs in the future.
Technology and ways of dressing
Technology has also influenced most people's manner of dressing. According to our survey, social media platforms such as Facebook influence the way people dress in current societies. Many people in the new generation love mimicking what people do in other countries. Additionally, technology has contributed to the emergence of new clothing designs that are loved by the new generation.
Technology and society structure
According to our survey, the new generation in many countries has the potential to change family structures and relationships. For instance, most respondents believe that technology will contribute to people forgoing marriage in favor of cohabitation. There is also a strong possibility not only of many children being born outside marriage, but also of a significant increase in the divorce rate.
Technology and innovativeness
The new generation is also prone to a loss of innovativeness as a result of overdependence on technology. According to our survey, over 70% of the current generation relies directly on technology for all the activities they do. Unlike in the past, this overreliance can leave people less capable in society, with many failing to come up with new ideas.
Impacts of technology on lifestyle
Despite technology contributing to the success of human life, it is evident that many people have become slaves to technology. People love having something electronic in their hands. Thus, technology has pushed people to bypass ordinary real-world experience for the sake of its digital substitutes. According to our survey, a large share of accidents also happen as a result of technology; for instance, people frequently indulge in texting while driving.
It is fair to conclude that technology has a great influence on the lives of the new generation. It has the ability to make life easier, and it has the potential to unite children via video games and social media. However, its negative impacts outweigh its advantages for the new generation. For instance, it can lead to the loss of innovativeness, lives, and traditional family structures. It can also influence the way people dress in different societies and encourage the loss of jobs. In addition, technology can affect the manner in which knowledge is passed on in schools.
Crystals are periodic structures in space. They are used in the electronics industry to control and manipulate electrons. These structures create energy gaps by allowing only electrons with certain energies to propagate, and preventing the propagation of other energies. Can similar energy gaps be designed for light propagation in a medium? For a long time, such structures were only discussed in theory. However, with the ability to fabricate sub-micron structures, we can now control light propagation in a medium in a whole new way. Materials that create energy gaps for light propagation are referred to as photonic crystals.
As it turns out, photonic crystals are not new to nature. For example, they have been observed in the brightly colored coats of insects such as longhorn beetles. Various species display different colors by varying the periodicity of the crystals, as shown in the figure below. Similarly, a chameleon can switch its skin color by changing the periodicity of the photonic crystals on its skin.
Photonic crystals are designed as 1D, 2D, or 3D periodic arrangements of dielectric materials. 1D structures consist of alternating layers of dielectrics; in the past, they have been used to design reflectors for optical cavities, as sketched below. 3D structures are used for controlling cavity modes to enhance or suppress spontaneous emission; this is done by controlling the modes into which a material can emit light. In this article, the use of 2D photonic crystals is stressed, because of their unconventional ability to guide light in small cores and to sense gases, liquids or biomolecules.
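As an illustration of the reflector behaviour mentioned above, here is a minimal transfer-matrix sketch of a 1D photonic crystal (a quarter-wave stack) at normal incidence. The refractive indices, pair count and design wavelength are assumptions chosen for the example, not values from this article:

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one dielectric layer at normal incidence."""
    phi = 2.0 * np.pi * n * d / wavelength          # phase thickness of the layer
    return np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                     [1j * n * np.sin(phi), np.cos(phi)]])

def reflectance(ns, ds, wavelength, n_in=1.0, n_out=1.0):
    """Reflectance of a multilayer stack between media n_in and n_out."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        m = m @ layer_matrix(n, d, wavelength)
    b = m[0, 0] + m[0, 1] * n_out
    c = m[1, 0] + m[1, 1] * n_out
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r) ** 2

# Quarter-wave stack designed for 1550 nm (illustrative values):
design_wl = 1550e-9
n_hi, n_lo = 2.3, 1.45                  # e.g. TiO2 / SiO2 indices (assumed)
ns = [n_hi, n_lo] * 8                   # 8 high/low index pairs
ds = [design_wl / (4 * n) for n in ns]  # quarter-wave optical thickness

for wl in (1300e-9, 1550e-9, 1800e-9):
    print(f"{wl * 1e9:4.0f} nm: R = {reflectance(ns, ds, wl):.4f}")
```

At the design wavelength the stack reflects essentially all of the light (the photonic band gap at work), while wavelengths well outside the gap are largely transmitted.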
2D photonic crystal (PhC) fibers consist of a core region and a surrounding cladding region, similar to conventional optical fibers. However, rather than being uniform, the cladding region is a periodic structure. PhC fibers come in two types: solid-core and hollow-core fibers. Each of them can in turn come in hexagonal, triangular, or other geometries. The following is an image of a hollow photonic crystal fiber in hexagonal geometry.
The mechanism of guiding in PhC fibers is rather different from that in conventional fibers. Instead of total internal reflection, these fibers guide light by ensuring the guided wavelength lies in the energy gap of the surrounding photonic crystal. This can also be explained by the interference of the various scattering amplitudes: light propagating in the fiber is scattered from each periodic entity in the surrounding PhC layer, and it propagates when this interference is constructive. If the waves interfere destructively, we say that there is an energy gap at that wavelength, and the light is not guided.
Photonic Crystal Fibers for Gas Sensing
In a work published by M. Morsed et al, a group of scientists from Mawlana Bhashani Science and Technology University in Bangladesh, demonstrated the use of photonic crystal fibers for gas sensing applications. They used a hollow core fiber to accommodate the analyzed substance. The article described a hexagonal PhC fiber just as we described earlier in this post. As it turns out, many gases have their absorption lines in the near infrared region of 0.8 um to 1.8 um. Such gases include methane and hydrogen halides. Methane, for example, has its absorption line at 1.33 um wavelength.
In their experiment, the PhC layer was formed by a periodic arrangement of air holes, with the operation tuned to the near-infrared region (NIR). Gas was let into the surrounding holes of the fiber, where it absorbed the evanescent light leaking from the core. Depending on the strength of the absorption line, the output power dropped sharply at the wavelength of interest, and this change was measured at the fiber output. In this way, various gases with absorption lines in the NIR region could be sensed by measuring the change in output power at the corresponding wavelengths.
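This kind of measurement is commonly modeled with a Beer-Lambert-style law, P_out = P_in * exp(-r * alpha * L * C), where r is the relative sensitivity (the fraction of the mode overlapping the gas), alpha the gas absorption coefficient at the probe wavelength, L the fiber length, and C the relative concentration. The sketch below uses invented numbers for illustration; none of the values are taken from the M. Morsed et al. paper.

import math

def transmitted_power(p_in_mw, r, alpha_per_m, length_m, conc):
    """Evanescent-field absorption: output power after the sensing fiber."""
    return p_in_mw * math.exp(-r * alpha_per_m * length_m * conc)

# Hypothetical sensor: 1 mW in, 40% modal overlap, 0.2 m of fiber.
p_out = transmitted_power(p_in_mw=1.0, r=0.4, alpha_per_m=0.5,
                          length_m=0.2, conc=1.0)
print(f"Output power: {p_out:.3f} mW")   # ~0.961 mW, i.e. a ~4% dip to detect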
PhC fibers can also be used for sensing fluids and biomolecules, much like the gas sensing method reviewed above. In all cases, the substance being sensed is let in through the holes in the PhC layer, where it absorbs the guided light from the core via evanescent waves. However, the materials and the periodic layer need to be reconfigured for each sensing application. A recent review of PhC fibers for sensing details the variety of uses of these materials, with applications ranging from spectroscopy to bio-medicine to metrology.
Interested in using photonics for sensing applications? Check out FindLight’s collection of products for gas sensors. |
Fungal infections are caused by fungi, which are tiny organisms that can live on the skin, nails, and hair. Fungal infections are common and can affect people of all ages. They are typically not serious, but they can be uncomfortable and may cause embarrassment.
There are several types of fungal infections, including:
1. Athlete's foot: This is a fungal infection that affects the skin on the feet, causing itching, redness, and the formation of blisters. It is often caused by wearing tight, closed-toe shoes and is more common in men than women.
2. Ringworm: This is a fungal infection that affects the skin, causing a circular rash that is red and itchy. It is most commonly found on the arms, legs, and torso.
3. Candidiasis: This is a fungal infection that affects the mouth, throat, and genitals. It is caused by the yeast Candida and can cause symptoms such as redness, itching, and discharge.
4. Tinea capitis: This is a fungal infection that affects the scalp and hair, causing itching, redness, and the loss of hair.
Treatment for fungal infections often involves the use of antifungal medications, which can be taken orally or applied topically. It is important to follow the treatment plan prescribed by your healthcare provider and to practice good hygiene to help prevent the spread of the infection. If you are experiencing symptoms of a fungal infection, it is important to speak with a healthcare provider to determine the most appropriate treatment. With proper treatment, it is possible to effectively manage and prevent fungal infections. |
Soil is an essential natural resource that plays a crucial role in sustaining life on earth. It supports the growth of plants, which is the primary source of food for humans and animals. Unfortunately, soil or land pollution has become a significant problem in recent years. Soil pollution occurs when contaminants such as heavy metals, pesticides, and industrial chemicals, among others, are introduced into the soil. These contaminants have severe effects on soil quality, human health, and the environment at large. In this article, we will discuss the sources and consequences of soil pollution.
Sources of Soil Pollution
There are several causes of soil pollution, and they vary depending on the type of pollutant. Below are some of the most common sources of soil pollution:
- Agricultural Activities – The use of fertilizers and pesticides in agriculture is a major source of soil pollution. These chemicals find their way into the soil, where they can accumulate over time and cause significant harm to the environment.
- Industrial Activities – Industrial activities such as mining, manufacturing, and construction can lead to soil pollution. Industrial waste, heavy metals, and other toxic chemicals are often released into the soil, causing contamination and degradation of the soil quality.
- Improper Waste Disposal – Improper disposal of household and industrial waste is a significant source of soil pollution. When waste is dumped on the ground, it can leach into the soil and contaminate it with harmful chemicals and pollutants.
- Atmospheric Deposition – Atmospheric deposition occurs when air pollutants such as sulfur dioxide and nitrogen oxide are deposited on the ground. These pollutants can enter the soil through rainwater, causing soil acidification and degradation.
- Landfills – Landfills are areas where waste is buried in the ground. They can lead to soil pollution if the waste is not properly contained, and the contaminants leach into the soil.
Consequences of Soil Pollution
Soil pollution has severe consequences on the environment, human health, and the economy. Below are some of the most significant effects of land pollution:
- Soil Degradation – Soil pollution can lead to soil degradation, making it unsuitable for agriculture and other purposes. This can have severe consequences on the economy, as it can reduce food production and lead to food shortages.
- Water Pollution – Soil pollution can lead to water pollution, as contaminants can leach into groundwater and surface water sources. This can have severe consequences on human health, as contaminated water can cause illnesses and diseases.
- Air Pollution – Soil pollution can also lead to air pollution, as contaminants can be released into the air through soil erosion and other processes. This can cause respiratory problems and other health issues for humans and animals.
- Loss of Biodiversity – Soil pollution can lead to a loss of biodiversity, as it can harm plants and animals that rely on healthy soil for survival. This can have severe consequences on the environment, as it can disrupt ecosystems and lead to the extinction of species.
- Human Health Impacts – Soil pollution can have severe impacts on human health, causing illnesses such as cancer, birth defects, and neurological disorders.
In conclusion, soil pollution is a serious problem with severe consequences for the environment, human health, and the economy. It is essential to take steps to prevent soil pollution by reducing the use of harmful chemicals, properly disposing of waste, and implementing sustainable agricultural practices. By taking these steps, we can protect our soil and ensure that it remains a healthy and vital natural resource for generations to come. |
The orbits of the twelve newly discovered moons of Jupiter are shown here in bold. One moon is located in the outer group but orbits in the opposite direction. (Credit: Roberto Molar-Candanosa/Carnegie Institution for Science)
Jupiter’s family has really grown since Galileo first recorded its four largest moons in 1610. On Tuesday, the International Astronomical Union (IAU) announced the discovery of 10 new moons orbiting Jupiter. Along with two found through the same research project but announced in June 2017, this brings the roster of Jupiter’s known natural satellites to 79. One of these new moons turned out to be a bit of a rebel. Of the 12 latest moons to join Jupiter’s family, it’s a maverick whose odd orbit may give astronomers crucial insights into how the moons of Jupiter came to be.
Two Birds with One 'Scope
The discovery of these moons came from a totally different search for new solar system bodies. Astronomer Scott Sheppard of the Carnegie Institution for Science is on the hunt for Planet Nine, a hypothetical planet many astronomers think should exist in the distant reaches of our Solar System, beyond Pluto. He and his team have been photographing the skies with some of today’s best telescope technology, hoping to catch sight of this mysterious ninth planet. In the spring of 2017, Jupiter happened to be in an area of sky the team wanted to search for Planet Nine. Sheppard, who is broadly interested in the formation of solar systems and has been involved in the discovery of 48 of Jupiter’s known moons, realized this was the perfect opportunity to advance two separate research goals with the same telescope data. The Blanco 4-meter telescope Sheppard was using is uniquely suited to spotting potential new moons both because the camera installed on it can photograph a huge area of sky at once and because it’s particularly good at blocking stray light from bright objects nearby — say, Jupiter — that might wash out fainter ones. “It’s allowed us to cover the whole area around Jupiter in a few shots, unlike before, and we’re able to go fainter than people have been able to go before,” says Sheppard. Once the Blanco telescope spotted previously unidentified objects near Jupiter, the research team used other telescopes to follow up on these moon candidates and confirm that they were orbiting Jupiter.
Jupiter's moon Valetudo (pointed out with orange bars) moves relative to background stars in these images taken with the Magellan Telescopes at the Las Campanas Observatory. Jupiter is not in the frame and is off to the upper left. (Credit: Carnegie Institution for Science)
On a Collision Course
One moon in particular caught the researchers’ attention. “The most interesting find is this object we’re calling Valetudo,” Sheppard says. “It’s like it’s going down the highway in the wrong direction.”
Of the 79 moons now known, most orbit in the same direction as other moons nearest them. The moons closer to Jupiter, including the four Galilean satellites, orbit Jupiter in the same direction as the planet’s rotation — astronomers call this a prograde orbit. The outer moons move in the opposite direction — a retrograde orbit. Eleven of the twelve new moons follow these conventions, but Valetudo is the odd one out. It’s out where the outer, retrograde moons are, but it’s orbiting Jupiter in the prograde direction, driving into the oncoming traffic.
The curious find might shed light on how many of Jupiter’s current moons were formed. Aside from the hulking Galilean moons that stretch thousands of miles in diameter, most of Jupiter’s moons, including the new twelve, are between a mile and a few tens of miles across. The outer moons are clustered in at least three groups based on their distances from Jupiter and the angles of their orbits, and astronomers think these moons are fragments of three larger objects that were captured by Jupiter’s gravity and later broken up by collisions — though whether that was with passing comets, rogue asteroids, or other moons is unclear.
Because Valetudo’s orbit crosses the orbits of some of the outer retrograde moons, it’s possible that it suffered a head-on collision in the past. The research team thinks Valetudo could be a leftover chunk from a once-larger moon that rammed into another past Jovian satellite, creating the many smaller objects that exist today. To check whether this could have happened, the researchers are working on supercomputer simulations of these orbits to calculate how many times an object with Valetudo’s orbit could have collided with the retrograde moons in the solar system’s lifetime.
https://www.youtube.com/watch?v=8sOFuNbdeWM
Finding lots of these small moons also tells us about conditions in the early solar system. When Jupiter and the other giant planets were forming, the solar system was a disk of gas and dust that surrounded the infant Sun. “The giant planets formed out of material that used to be in that region. They were like vacuums, they sucked up all that material and that created the planets,” Sheppard explains. “We think these moons are the last remnants of the material that formed the giant planets.” The fact that these smaller moons exist today is evidence that any collisions that created them happened after this era of planet formation. If small moons like these were around when the solar system was still thick with gas and dust, drag forces would have slowed them down and caused them to fall into Jupiter, never to be seen again. Only in today’s much emptier solar system, after the giant planets finished forming and clearing their surroundings of gas and dust, would small moons like these have been able to survive. Once they finish running and analyzing the simulations, the team plans to publish the results in early 2019. In the meantime, they’re waiting for the IAU to formally accept “Valetudo” as the name for the oddball moon. The IAU requires moons of Jupiter to have names related to the Roman god Jupiter. Valetudo is the name of Jupiter’s great-granddaughter and a Roman goddess of health and hygiene, so it fits the bill. But why hygiene? Sheppard says it comes from an inside joke with his girlfriend. “I kind of always jokingly say that she’s a very cleanly person; she likes to take multiple showers a day,” Sheppard says. “And so when she told me about Valetudo, which is the goddess of hygiene, I said ‘That’s it, that’s what we’re naming it.’” |
In their third report from arid Australia, James and Thibaud Aronson discuss some of the serious issues facing conservationists and restorationists.
Concerning the non-native animals in Australia, the general consensus today is that eradication is impossible: the only option that remains is control, in the form of fences or culling, or both. Yet, conflicts of opinion on the ethics of culling abound, even for the armies of feral cats that reportedly kill 75 million native animals every single night. Even fences have their pros and cons, in particular the interruption of the migration of thousands of emus.
Both feral cats and foxes are most lethal in areas with relatively little vegetation cover, that is, the massive dry interior of the continent. This is compounded by the monster fires that have plagued Australia since European settlement. A single such fire can burn down hundreds of thousands of hectares, leaving small mammals and other animals with nowhere to hide.
What’s more, as mentioned in our previous blogpost, most land managers continue to burn on an annual basis without sufficient attention to the impact on animals and indeed many plants. Things are changing though.
In the seasonally dry, tropical Kimberley region, in the northwest, the Australian Wildlife Conservancy, or AWC, is testing new methods, focusing on patchy prescribed burning in the early dry season, and controlling cattle grazing. They are having good results with this approach in preserving more plant cover for small native animals and thereby reducing the lethal impact of feral cats. The AWC has also shown that their fire management techniques are not only beneficial for native animals, but also for pasture quality, and would therefore benefit pastoralists, whom Australians call graziers. Since most landowners in the area are graziers, let’s hope they will follow suit and try new fire management regimes. It is in this region, by the way, that the endemic baobab of Australia, known here as the Boab, occurs. To our surprise there are thousands of them, in a wide range of habitats. Some are estimated to be well over 1000 years old. Survival of this tree, at least, is clearly not threatened by fire or foxes, even if other problems – such as climate change – do exist. Let’s hope they go on thriving for another 1000 years.
Another reason invoked for the proliferation of cats and foxes in Australia is the virtual absence of top predators to control them. This phenomenon, called meso-predator release, is also found in North America, where coyotes have greatly expanded following the extirpation of wolves throughout large portions of the continent. Therefore, some have suggested that allowing dingoes to maintain higher population numbers would have a significant effect on controlling cats and foxes. However, dingoes are still considered pests by pastoralists, and large amounts of money go into controlling them.
And that’s not the last of it. In the last 200 years, people have also introduced many exotic plant species, some of which have become terrible weeds, such as buffel grass (Cenchrus ciliaris) (see our previous blog post), but also Tamarix, Kutch (aka Bermuda grass, Cynodon dactylon), Karroo thorn (Acacia horrida) and others. By 2009, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) estimated that introduced invasive plants were costing the country 4 billion Australian dollars a year in weed control and lost agricultural production, and causing “serious damage to the environment”. With climate change, it seems possible that numerous “lurking” or “sleeper” weeds such as the White weeping broom, Retama raetam, may increase their ranges and their negative impacts.
Buffel grass presents a particularly severe problem – and like the cats and dingoes, it is controversial. It was one of dozens of African grasses intentionally introduced by Australian agricultural researchers to “improve” pasture for cattle. Indeed cattle do like it, but the grass spreads with amazing tenacity, crowds out native grasses and all other groundstory plants where it invades, and carries fire like few other plants. Control is possible, but it is tedious and expensive and is never 100% effective at a large scale. Furthermore, the ranchers prefer it to the native grasses, and their ideas on when and how to burn are very different from those of people concerned with conservation. Indeed, only a few of the people we met envision stopping prescribed fire altogether.
For example, Peter Latz, a native of the Red Centre, plant ecologist, and author whom we met in Alice Springs, has been manually removing buffel and Kutch on his own land. His main focus, however, has been on excluding fire altogether, thereby achieving pretty impressive results.
For more on Peter Latz’s views and lifetime of experience in central Australia, see The Flaming Desert: Arid Australia – a Fire Shaped Landscape.
In our next blog post, we’ll talk about some of the other people and groups in arid and SW Australia undertaking serious steps towards restoration, while fully aware of the obstacles and the complexity of the challenge. |
Childbirth, also known as labour and delivery, is the ending of pregnancy where one or more babies leaves the uterus by passing through the vagina or by Caesarean section. In 2015, there were about 135 million births globally. About 15 million were born before 37 weeks of gestation, while between 3 and 12 percent were born after 42 weeks. In the developed world most deliveries occur in hospitals, while in the developing world most births take place at home with the support of a traditional birth attendant.
The most common way of childbirth is a vaginal delivery. It involves three stages of labour: the shortening and opening of the cervix, descent and birth of the baby, and the delivery of the placenta. The first stage typically lasts 12 to 19 hours, the second stage 20 minutes to two hours, and the third stage five to 30 minutes. The first stage begins with crampy abdominal or back pains that last around half a minute and occur every 10 to 30 minutes; the pains become stronger and closer together over time. During the second stage, pushing with contractions may occur. In the third stage, delayed clamping of the umbilical cord is generally recommended. A number of methods can help with pain, such as relaxation techniques, opioids, and spinal blocks.
Most babies are born head first; however, about 4% are born feet or buttocks first, known as breech. Typically the head enters the pelvis facing to one side, and then rotates to face down. During labour, a woman can generally eat and move around as she likes. However, pushing is not recommended during the first stage or during delivery of the head, and enemas are not recommended. While making a cut to the opening of the vagina, known as an episiotomy, is common, it is generally not needed. In 2012, about 23 million deliveries occurred by Caesarean section, an operation on the abdomen. C-sections may be recommended for twins, signs of distress in the baby, or breech position. This method of delivery can take longer to heal from.
Each year, complications from pregnancy and childbirth result in about 500,000 maternal deaths, seven million women have serious long-term problems, and 50 million women have negative health outcomes following delivery. Most of these occur in the developing world. Specific complications include obstructed labour, postpartum bleeding, eclampsia, and postpartum infection. Complications in the baby may include lack of oxygen at birth, birth trauma, prematurity, and infections. |
May 2020 | Volume 21 No. 2
Filling the Vacuum
When Professor Xiang Zhang undertook his qualifying PhD examination at the University of California, Berkeley, in 1994, he was asked to explain how his examiner’s voice could be heard across the table. “I answered, ‘it is because your sound travels by vibrating molecules in the air.’ He further asked me, ‘what if we suck all air molecules out of this room? Can you still hear me?’ I replied no, because there would be no medium to vibrate.”
But times – and human knowledge – have changed. Two new studies guided by Professor Zhang and involving colleagues from Berkeley have shown that there is indeed much going on in vacuums.
The focus of their research is a recent theory based on quantum mechanics, which argues that empty space cannot be truly empty because it still contains fluctuating electromagnetic waves that cannot be completely eliminated. These waves produce a force, called the Casimir effect, that connects two objects such that if one object starts shaking or oscillating, the nearby object will be set into motion even in a vacuum.
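For a sense of scale, here is a minimal sketch of the textbook ideal-conductor formula for the Casimir pressure between two parallel plates, P(d) = pi^2 * hbar * c / (240 * d^4). This idealized formula ignores real material properties and the actual membrane geometry used in these experiments, but it shows why the effect only matters at separations of a few hundred nanometres.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d_m):
    """Attractive pressure (Pa) between ideal plates separated by d_m metres."""
    return math.pi**2 * HBAR * C / (240.0 * d_m**4)

for d_nm in (100, 300, 1000):
    print(f"d = {d_nm:4d} nm -> P = {casimir_pressure(d_nm * 1e-9):.3g} Pa")
# The 1/d^4 scaling makes the pressure ~10,000x weaker at 1000 nm than at 100 nm.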
The scientists demonstrated for the first time that the Casimir effect can enable heat transfer, which has implications for high-speed computation and data storage. Furthermore, they proposed and demonstrated that the Casimir effect could also cause objects to repel from each other, which has implications for frictionless mechanics that are important for medical robots and quantum sensors.
In the heat transfer study, Professor Zhang and his team overcame the significant hurdle of transmitting heat in a vacuum.
They engineered extremely thin, gold-plated silicon nitride membranes in a dust-free clean room, then placed two of these membranes a few hundred nanometres apart inside a vacuum chamber. The temperature of the membranes was precisely controlled and monitored using optic and electronic components. As predicted, when one membrane was heated up, the other warmed up, too – in other words, heat leapt from a hot membrane to a colder one inside the vacuum.
The size and design of the membranes were important in enabling this thermal transfer, as was the distance between them in order to rule out thermal radiation as the cause. The findings were published in Nature.
“Although this interaction is only significant at very short lengths – a few hundred nanometres – the implications could be profound for the design of computer chips and other nanoscale electronic components that are affected by heating issues that could limit their performance,” Professor Zhang said.
The team used highly sensitive optics to monitor the temperature of the silicon nitride membranes during the experiment.
(Courtesy of Violet Carter, UC Berkeley)
In the experiment, the team showed that heat energy, in the form of molecular vibrations, can flow from a hot membrane to a cold membrane even in a complete vacuum. This is possible because everything in the universe is connected by invisible quantum fluctuations.
(Courtesy of Zhang Lab, UC Berkeley)
In the study on frictionless mechanics, Professor Zhang and a separate team pushed scientific understanding even further. They proposed creating a ‘Casimir quantum trap’ without energy input by exploiting both attractive and repulsive forces – the attraction between two objects of the same material would be reversed at short distances and preserved at long ones without them ever touching each other.
The repulsion effect was confirmed in experiments similar to those for heat transfer, except that in this case the objects were coated with Teflon. At short electromagnetic wavelengths, Teflon’s low refractive index gave a repulsive force, while at longer wavelengths gold’s higher refractive index caused an attraction, thus creating the Casimir trap without using additional energy.
The finding is important for mechanical systems, which typically experience friction between objects such as gears and wheels that require costly maintenance and replacement. It also has implications for magnetic systems, such as Maglev trains, which have high energy demands.
“This quantum trap is totally passive and the trapping distance can be controlled by adjusting the thickness of the coating layer. The same principle can be applied to many other materials,” he said. The discovery was published in Science and selected as a top 10 Breakthrough of the Year 2019 by Physics World, a membership publication of the Institute of Physics in the United Kingdom.
Professor Zhang noted that these discoveries were just the beginning. The heat transfer effect is particularly resonant with his PhD examination experience. “Because molecular vibrations are also the basis of the sounds we hear, the discovery opens up the possibility that sounds could also travel through a vacuum. So I was wrong in my 1994 examination. Now, you can shout through a vacuum,” he said.
Because molecular vibrations are also the basis of the sounds we hear, the discovery opens up the possibility that sounds could also travel through a vacuum.
PROFESSOR XIANG ZHANG |
All of our activities have been developed to align with and support the NSW Science and Technology K-6 syllabus and the Science 7-10 syllabus. Many of our activities also link with Mathematics, Agricultural Technology, Technology Mandatory, Creative Arts, PDHPE and languages.
The philosophy that science and technology is “an integrated discipline that fosters in students a sense of wonder and curiosity about the world around them and how it works” underpins the work of Discovery Voyager, and we aim to facilitate collaborative and creative ways to make this happen for all students.
Have a look at each of the activity teacher sheets to get a feel for how each one addresses the NSW syllabus.
We do not expect students to have any prior knowledge in a particular area. However, if you are focussing on a particular unit of work that relates to a specific activity, then great! Let us know what you’re up to and our visit can extend their knowledge in active learning environments.
Students will benefit from being comfortable and ready for activity for our visit. We suggest that they wear sports uniform, including runners. However, students and teachers do not need any special Personal Protective Equipment (PPE). We provide PPE when required (and in varying sizes to fit everyone).
We can cater for up to 30 students in one group. Please do not allocate groups of more than 30. Our ideal group size is around 18-24.
All of our activities involve the students in hands-on, touch-and-feel, play-based and exploratory activities. In A Day in the Life of Soils, students become munching microbes and soil invertebrates; in Plants, Poop and Pollinators they go searching in the school grounds for insects, or become busy dung beetles in a fast-paced role-playing game; in Astrometrics they explore the planets and stars using hands-on measurement and models; Busybots involves communicating with programmable robots to explore the basics of coding; the Rocking through Time show uses sights, sounds, props and audience participation and play to explore our Earth’s amazing history; students play with models and use state-of-the-art brain-sensing technology to visualise their brain functions in Power of the Brain; in Think Like a Rock students play with 3D landscape models to explore the geological features of our local area; and they don lab coats and a can-do attitude to create fizzing concoctions and colourful reactions in Creative Chemistry.
We ask that you please remain in your teacher role, but learn alongside us! Because you know your students best, please be ready for any individual or group disciplinary guidance you might need to give. However, we find that because our team are new, bringing fun and novel ways of learning into your classroom, we rarely have any issues.
Teachers, be ready for some noise! We allow, and encourage, enthusiastic responses to our exploratory activities, though we understand that this level of noise is not sustainable for you every day. We try to give the students space to learn in their own ways in each activity, and ask that you do too.
We cater for Kindergarten to Year 10 students. This might seem like a wide age range with the same activities. The Voyager team is adept at flexible facilitation for each activity, and we adapt language, content depth and structure according to the age of the students. The team is pretty amazing at doing this, and because many of them are scientists, we find the questions come thick and fast!
Yes. We will leave you and your students with follow-up experiences and extension activities.
Yes. While we are not trained as educators for students with special needs, we understand that integrated and small schools cater for a very broad range of abilities and needs. We are happy to talk to you about a structure, and activities that will work for different learning styles and needs. Click here to drop us an email.
No! We’ll bring our own sustenance. But we do know there are some expert bakers out there, and we love sampling the fruits of your talents…. 😉
Thanks for asking! There are various opportunities you can plug into on the UNE campus in beautiful Armidale. Each year UNE hosts several science camps, Open Days and hands-on science days. Far Out Science is one of our most popular science days on campus.
In addition, the UNE Natural History Museum is open from 9:30am to 4:30pm weekdays, and has a great café and outside space. Let your curiosity get the better of you and visit us with your school students or your family. Schools and community groups can also book Voyager activities and tours at the UNE Natural History Museum. Group sizes are limited due to COVID-19 restrictions, however email us today to ask how we can customize a COVID-safe museum experience for your group. Click here to email directly or click here to visit the Museum website and check out our amazing collections.
General hygiene practices not only reduce the transmission of viruses and other pathogens, but also helps to reduce any chance of COVID-19 transmission. Our updated Discovery Voyager protocols reflect our desire to keep students, staff and visitors safe and healthy while adhering to Government and University regulations. If your question is not answered here, please get in touch.
We have developed and put in place extensive protocols to prevent the spread of COVID-19 during Voyager visits to schools. These protocols are provided to every school before we visit. If circumstances change and we need to postpone visits or modify how we do things, we will be in touch with your school and keep you informed.
We clean and disinfect our equipment after every school visit, and as those of you who have hosted us before know, we have a lot of stuff! The reduction in activities makes the cleaning of equipment manageable for our Voyager staff. By reducing the number of sessions, we are also limiting potential exposure of students, school staff, and Voyager staff to transmission opportunities.
Our cost per student has changed to $5/student this year. This small increase ensures that the program remains sustainable and valued. Although we have had to reduce activity and session numbers in schools, this does not change the costs of travel, preparation, and resourcing for the program, and the small fee we charge goes some way in covering our costs. For the expertise, equipment and specialist knowledge that students have access to when we visit, we think $5/student is a pretty good deal!
In the event of ambiguity or cancellation due to COVID related restrictions, we will be in touch with your school immediately. Where possible, we will reschedule a face-to-face visit. There is also the option of an online (zoom) version of our activities. If an online experience, or rescheduling of a face-to-face booking is not possible, we will put your school on our waiting list so you can be contacted to rebook. In the meantime, you can access online games and other resources through our online resources and YouTube channel. |
Color is dynamic. We all see it a little differently, because color is the physical response of the eye to light + the mental interpretation of those responses. This makes printing color accurately a bit tricky until one has a basic understanding of color space.
First, all color starts with light. The color of a physical object is the result of projected light reflecting off the object. Your eyes + brain interpret what gets reflected, and the result is the color you see. A red apple, for example, reflects red wavelengths of light and absorbs all others. You see the reflected red wavelengths. As ambient light decreases, colors appear to fade because there is less light and, therefore, less color.
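As a toy illustration of that last point, the color you see can be approximated, per channel, as the illuminant's intensity multiplied by the object's reflectance: dim the light and every channel shrinks toward black. The sketch below uses a crude linear-RGB model with invented reflectance values; real color appearance involves far more than this.

def perceived_rgb(reflectance, light_level):
    """reflectance: per-channel fractions reflected; light_level: 0.0 to 1.0."""
    return tuple(round(255 * r * light_level) for r in reflectance)

red_apple = (0.85, 0.10, 0.08)   # reflects mostly red wavelengths
for level in (1.0, 0.5, 0.1):
    print(f"light {level:>3}: RGB {perceived_rgb(red_apple, level)}")
# Full light gives a bright red; at 10% light the same apple reads as a
# dark, nearly colorless red: less light, less color. |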
Why does water matter?
There are a few things we can't ignore if we want to keep our health and live life to the fullest: diet, rest and exercise.
No matter how well you eat, if you don't sleep and are not physically active, problems arise. Likewise, if you exercise a lot but eat poorly and rest too little, your body won't thrive.
Most of us know the importance of a good diet. It's just as important to take in the right amount of water, directly or indirectly. There are countless benefits to drinking water, and water is the basic building block of our bodies.
Water, water everywhere
Approximately 60-70% of the human body is made up of water. The exact value depends on age and on the ratio of muscle to fat tissue (because muscle contains more water than fat). Although water contains no calories or nutrients, it is necessary for life. We can survive for weeks without food, but only days without water.
The body does not store excess water the way it stores fat, so it is necessary to drink or consume an appropriate amount of water daily in order to maintain good health.
It's hard to take in too much water, because more water simply means more trips to the toilet, and your urine becomes colorless and transparent. Nevertheless, you should not overdo it; aim to keep your urine pale yellow, like lemonade.
However, if you drink too little water, the body begins to dehydrate, and the first symptom is thirst. Visits to the toilet will become less frequent, and your urine will turn dark yellow or even brown.
When you drink water or take it in with your diet, you hydrate. If you don't get enough water, your body suffers; this condition is called dehydration. Water is the main ingredient of our bodies, of every cell, tissue and organ. It plays an incredibly important role in almost all bodily functions, such as:
- Temperature regulation
- Transfer of oxygen and nutrients
- It is a component of many biochemical reactions
- It’s used to cleanse- detoxify the body through urine and feces
- Lubricates joints
- Has a role in digestion
- Has a role in burning fats
- It’s a basic ingredient in bodily fluids like saliva and tears
- Provides shape and stability to cells, etc.
Water also helps remove toxic substances from the body. An obvious example: people who don't have enough water in their diet often have problems with acne. Drinking and taking in enough water will help your skin's health.
A sufficient amount of water keeps the skin hydrated, so people who don't take in enough often have problems with dry skin.
Daily water needs
Water is a basic food and is necessary for life. Good health also depends on how much water we drink every day. A person's water needs depend on a variety of factors: health, activity, environment and season. There is no universal formula for how much water to drink per day, but a sensible optimal intake can easily be estimated for everyone.
In general, adults should consume 2 to 3 liters of water every day. That's about 7-8 glasses a day.
This estimate does not account for specific medical conditions, exercise habits or place of residence (say, high altitude or the tropics). Daily activities can increase water needs; hard physical work during the warm part of the year, for example, can multiply this estimate several times.
How do you know you're drinking enough? Just look at the color of your urine, which should be pale yellow or transparent. If you drink less than you lose, the color turns dark yellow.
In addition to drinking water, we can take it in through the food we eat. Many fruits and vegetables have a high water content, which is one reason to eat them fresh and in larger quantities. Water, of course, is also found in juices, soft drinks, tea and coffee.
Pay attention: caffeine and alcohol are diuretics!
When drinking coffee, know that caffeine dehydrates the body because it acts as a diuretic. Alcohol is an even stronger diuretic.
Beverages containing caffeine (coffee, tea, carbonated soft drinks) give the body water, but they also extract it, so on balance they dehydrate the body. Headache is one of the first symptoms of dehydration.
Fruit juices are fine when it comes to hydration, but they contain a lot of calories, so take care with them as well. For proper hydration, let the base of your fluid intake be clean, natural water.
Water and diet
Because our bodies contain so much water, a lower or higher intake can have an impact on body mass. People on restrictive low-carbohydrate diets lose water quickly (because carbohydrates bind water well), and then think they've lost weight when in fact they've only lost water.
Rapid weight loss of this kind is, therefore, just another term for dehydration. Water also reduces appetite; don't underestimate the importance of this. If you drink less, you're more likely to eat more.
How much water do we even lose?
We all constantly lose water through breathing, sweating, urination and stool (feces). Given that, it's imperative that we make up for that lost water.
Through sweating and breathing we lose about a liter of water a day, while urination and stool account for slightly more, about a liter and a half. Whatever water we lose is what we have to replace: these two and a half liters need to be taken in through food and fluids so we don't become dehydrated.
The water brought in by food amounts to only about half a liter a day, roughly 20% of daily needs. The other two liters should come from the liquids we drink during the day.
When should we drink more water?
Not every season is the same in terms of the need to drink water. Warm, humid summer weather significantly increases sweating, and with it our need for water. Through sweat we lose electrolytes along with water, so it's a good idea to replace them by drinking. Half an hour of medium-intensity physical activity (fast walking, jogging, cycling) increases the need for water by half a liter.
High altitudes above 2500 m also increase water loss through breathing and urination, so needs are greater there as well. Physical activity, disease or other conditions can likewise increase water needs.
Some diseases (those accompanied by high fever) and some conditions (pregnancy, breastfeeding) also increase daily water needs, by as much as half a liter more than average. The sketch below pulls these estimates together.
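Here is a back-of-the-envelope water budget in Python using only the figures quoted in this article; it is an illustration of the arithmetic, not medical advice.

BASE_LOSS_L = 1.0 + 1.5   # sweat/breathing + urine/stool, liters per day
FROM_FOOD_L = 0.5         # water that arrives with food (~20% of needs)

def liters_to_drink(exercise_blocks_30min=0, fever_or_pregnancy=False):
    """Estimate how much to drink, starting from the article's loss figures."""
    need = BASE_LOSS_L
    need += 0.5 * exercise_blocks_30min   # +0.5 L per 30 min of activity
    if fever_or_pregnancy:
        need += 0.5                       # conditions that raise needs
    return need - FROM_FOOD_L

print(liters_to_drink())                          # 2.0 L on a quiet day
print(liters_to_drink(exercise_blocks_30min=2))   # 3.0 L with an hour of sport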
It’s a basic rule to drink water when we’re thirsty, but sometimes it’s not a good idea. As we get older, it’s harder to notice that we’re dehydrated, so it’s better to drink when we’re not very thirsty.
On the other hand, you shouldn’t over-pour, how much water is needed and drink, with every meal and physical activity. |
This article originally appeared in OnlineSchools.org.
By the time students reach the end of high school, they will be well acquainted with cognitive skill assessments. Math, critical reading, and fact recall are prioritized in our traditional school systems, subjects that rely heavily on cumulative knowledge and memorization. Standardized scoring systems like the SAT, ACT, and GPA ratings might not provide a full picture of students’ abilities. Gifted individuals can be left behind if their scores aren’t up to par or if they aren’t challenged enough by standardized curricula.
Colleges are starting to examine non-cognitive skills, sometimes known as “soft skills.” These abilities and traits, such as grit, resilience, and empathy, are much harder to measure. Research shows that these skill sets can be better predictors of academic and career success, causing schools to rethink the use of standardized testing for admissions. Now, colleges like the University of Notre Dame are using non-cognitive exams like the ETS Personal Potential Index to get a better idea of how students will perform in graduate school.
Types of Non-Cognitive Skills
- Verbal communication
- Interpersonal skills
- Emotional maturity
Why Non-Cognitive Skills Matter
In one student study, researchers from the University of Pennsylvania revealed that academic perseverance, rather than intelligence, is a better indicator of a student’s success in college. Students with high levels of grit and self-control, two non-cognitive skills associated with academic perseverance, tended to earn high GPAs despite having low SAT scores. These two qualities help ensure that a student attends class regularly, remains on task with assignments, and maintains steady performance levels throughout a semester. Researchers at the University of Chicago have found evidence to back this idea up: gender gaps in collegiate performance went down by 21 percent once students were sorted by attendance rates and study habits.
International businesses and academic departments are quickly realizing how important non-cognitive skills are in the workplace. The Institute for the Study of Labor released a report examining skills like agreeableness, emotional stability, conscientiousness, autonomy, and extraversion, and discovered that many businesses value these traits considerably when income levels were examined. Subjects who held leadership positions in high school typically earn between four and 24 percent more income later in life. Many employers report dissatisfaction with college graduates’ lack of communication, leadership and interpersonal skills. An emphasis on non-cognitive skills may help to remedy this unpreparedness.
Measuring Non-Cognitive Skills
The Duckworth Lab at the University of Pennsylvania has created several scales to measure grit and self-control. Individuals are ranked based on a series of eight to twelve survey questions and statements, requiring responses that fall on a scale between “Not like me at all” and “Very much like me” or “Almost never” to “At least once a day.” You can take the full surveys on the Duckworth Lab website. Here are a few statements from these tests:
THE GRIT SCALE STATEMENTS
- New ideas and projects sometimes distract me from previous ones.
- I finish whatever I begin.
- I become interested in new pursuits every few months.
THE SELF-CONTROL SCALE STATEMENTS
- My mind wandered when I should have been listening.
- I said something rude.
- I lost my temper at home or school.
These scales can help you discover how deliberate and diligent you are, or whether you have a tendency to be neglectful and impulsive. These behavioral tendencies have not been the focus of quantitative measurement before, since most standardized tests, such as the SAT and ACT, focus on cognitive skills like mathematical reasoning and critical reading.
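For a flavor of how such a survey turns answers into a number, here is a simplified scoring sketch. The conventions assumed here (5-point responses, reverse-coded items, final score = the mean) follow common practice for surveys of this kind, but the item texts and reverse flags below are illustrative; the official instrument and scoring live on the Duckworth Lab website.

ITEMS = [
    ("New ideas and projects sometimes distract me from previous ones.", True),
    ("I finish whatever I begin.", False),
    ("I become interested in new pursuits every few months.", True),
]

def grit_score(responses):
    """responses: 1 ('Not like me at all') .. 5 ('Very much like me')."""
    total = 0
    for (_, reverse), r in zip(ITEMS, responses):
        total += (6 - r) if reverse else r   # flip reverse-coded items
    return total / len(ITEMS)                # 1.0 (low) .. 5.0 (high)

print(round(grit_score([2, 5, 1]), 2))   # (4 + 5 + 5) / 3 -> 4.67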
Developing Non-Cognitive Skills
An Amsterdam-based research firm known as The Argumentation Factory has created a visual map of non-cognitive skills and how they can be developed. Here are a few of the highlights from this data, which was collected during the Organisation for Economic Co-operation and Development (OECD) conference in 2012.
- Shifting parenting methods to emphasize non-cognitive skills
- School incentives
- Questioning and restructuring societal values
- Including non-cognitive skills into classroom learning goals and evaluations
- Incorporating non-cognitive skill development in public policy
How Non-Cognitive Skills Can Reshape College Admissions
Education experts like Dr. William Sedlacek of the University of Maryland are drawing more attention to the value of non-cognitive skills, and questioning why schools rely on outdated standardized intelligence tests like the SAT. Sedlacek argues that standardized tests overlook what’s actually important to educators – such as our teamwork skills, our ability to creatively solve problems, and our overall potential in the workplace and classroom.
These discussions are causing schools to change their admissions process and prioritize non-cognitive skills. Schools like Eastern Washington University use an insight resume, which helps prospective students showcase their creative pursuits, group work, and leadership opportunities in their college applications. The focus on non-cognitive abilities is catching on, with testing centers like ETS and ACT launching new tests to measure these often overlooked skills:
Self-Evaluations and Essays
Here are a few sample essay prompts used by colleges and companies to evaluate non-cognitive skills:
- How much should you charge to wash all the windows in Seattle? (A Google interview question)
- Rutgers University is a vibrant community of people with a wide variety of backgrounds and experiences. How would you benefit from and contribute to such an environment? Consider variables such as your talents, travels, leadership activities, volunteer services, and cultural experiences. (Rutgers University)
- If you were offered a good job, would you leave college? (University of Utah)
- What are your experiences facing or witnessing discrimination? (Oregon State University)
Transcripts and Resumes
More colleges are recognizing the importance of non-cognitive skills, which means that they are looking for well-rounded resume and transcript items that show how you perform outside of standardized tests and curricula. Here are some experiences you can highlight to demonstrate ways you’ve persevered and adapted to challenges:
- Leadership positions – TA or mentor roles in school show that you have the initiative to take on additional work and responsibility. These roles require a considerable skill in communication, grit, and interpersonal savvy.
- Student clubs – Participation in a student group demonstrates your curiosity toward a particular topic or effort.
- Volunteer work and activism – Unpaid work can demonstrate reliability and persistence for a cause that you believe in.
Situational Judgment Tests
Popular testing companies, such as ACT and ETS, now administer non-cognitive assessments in the classroom. The ACT ENGAGE is a test given to students from grade six all the way through college. It helps teachers, students, and parents measure the following:
- Social engagement
- Commitment to college
- Goal striving
- Academic self-confidence
The ETS Personal Potential Index (PPI) can be taken by graduate students to measure the following skills:
College admissions panels and hiring committees often like to conduct interviews to explore prospective candidate skills. Face-to-face interviews can help an organization get a better grasp of your interpersonal and communication skills. Here is a list of some basic dos and don’ts to keep in mind as you prepare for an interview.
Do:
- Ask a friend, family member, or mentor to give you a mock interview with potential questions.
- Research the organization you will be interviewing with.
- Make a list of your non-cognitive skills, extracurricular activities, and hobbies. Practice discussing these aspects of your life during mock interviews.
Don't:
- Fidget throughout the interview.
- Rush or blurt out your answers. It's generally acceptable to take a few moments to formulate a response during an interview.
- Get off topic. Try to answer the question with a relevant answer, unless another topic can be tied to the question in a meaningful way.
Colleges That Embrace Non-Cognitive Assessments
- Appalachian State University
- Asheville-Buncombe Technical Community College
- DePaul University
- Eastern Washington University
- Evergreen State College
- George Mason University
- Harvard University
- Michigan State University
- Northern Illinois University
- Oregon State University
- Saddleback College
- Simmons College
- University of Akron
- University of California, Berkeley
- University of London
- University of Maryland
- University of Pennsylvania
- University of Southern California
- University of Texas at San Antonio
- Beyond the Big Test: Noncognitive Assessment in Higher Education – William Sedlacek
- Development and Validation of Measures of Noncognitive College Student Potential – CollegeBoard
- College and Career Ready: Soft Skills are Crucial – Edutopia
- The Role of Noncognitive Skills in Academic Success – ETS
Research on non-cognitive or “soft” skill sets might revolutionize the way colleges and employers recruit people. Education organizations are finding new ways to measure these skills, which can predict a student’s future work and academic success. These new perspectives give us a better picture of individual performance than current standardized testing systems. |