Age Range: KS4
Students develop skills and approaches to improve their writing at GCSE, and work in groups to gather and explore ideas. Highly illustrated worksheets, in four sections, cater for all abilities and provide practice across a range of writing styles. Includes differentiation and opportunities for self-assessment.
Sections are: Writing to explore, imagine, entertain; Writing to inform, explain, describe; Writing to argue, persuade, instruct; Writing to analyse, review, comment.
What's a Decision Tree?
This post is based on the decision tree chapter in the great book Artificial Intelligence: A Modern Approach.
So what's a decision tree? Simply put, it's a way to make predictions based on observations from a knowledge base. Let's say we observed the following behavior data set for deciding whether to wait for a table at a restaurant. It contains twelve observations in total and is based on ten attributes.
The goal now is to see if there is a pattern in this data that allows us to predict whether a new observation based on these attributes will yield a positive or negative decision for the “will wait?” question.
But how can we model this decision? Decision trees to the rescue! In this model we try to build a tree that leads us through intermediate decisions (stored in internal nodes) to a definite decision (stored in its leaves) in the minimum number of steps. Trivially, one might just create a branch for each example, but this is very inefficient and has terrible predictive performance on new, unobserved examples.
This whole idea is actually more intuitive than it sounds; let's look at the algorithm as defined in the book:
- If there are some positive and some negative examples, then choose the best attribute to split them.
- If all the remaining examples are positive (or all negative), then we are done: we can answer Yes or No.
- If there are no examples left, it means that no such example has been observed, and we return a default value calculated from the majority classification at the node’s parent.
- If there are no attributes left, but both positive and negative examples, we have a problem. It means that these examples have exactly the same description, but different classifications. This happens when some of the data are incorrect; we say there is noise in the data. It also happens either when the attributes do not give enough information to describe the situation fully, or when the domain is truly nondeterministic. One simple way out of the problem is to use a majority vote.
Alright, so basically it’s a recursion that tries to find the attribute that best splits the remaining examples in each step.
First off, the main decision_tree_learning function. It's basically a 1:1 translation of the informal description above. majority_value returns the WillWait value that occurred most often in the given subset of examples.
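A minimal Python sketch of these two functions (an illustration under an assumed data layout, not the post's original code; choose_attribute and distinct appear in later sketches):

```python
# Assumed data layout: each example is a dict mapping attribute names to
# values, with the target label stored under the (hypothetical) key "will_wait".

def majority_value(examples):
    # Return the "will_wait" label that occurs most often in this subset.
    labels = [e["will_wait"] for e in examples]
    return max(set(labels), key=labels.count)

def decision_tree_learning(examples, attributes, default=None):
    if not examples:
        # No examples observed for this branch: fall back to the majority
        # classification at the node's parent.
        return default
    labels = {e["will_wait"] for e in examples}
    if len(labels) == 1:
        # All remaining examples are positive (or all negative): done.
        return labels.pop()
    if not attributes:
        # Attributes exhausted but labels still mixed (noise): majority vote.
        return majority_value(examples)
    best = choose_attribute(attributes, examples)  # sketched below
    tree = {best: {}}
    for value in distinct([e[best] for e in examples]):
        subset = [e for e in examples if e[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = decision_tree_learning(
            subset, remaining, default=majority_value(examples))
    return tree
```

Calling decision_tree_learning(observations, attribute_names) on the twelve restaurant examples would then return the tree as a nested dict keyed by attribute and value.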
Next up is the choose_attribute function, which selects the attribute that yields the best split of the example subset. That's pretty much it for the decision tree itself. The choose_attribute function is generic and can use different heuristics. Here we'll use a heuristic based on which attribute provides the highest "information gain". distinct is a helper function returning an array with the distinct values of the input array, in this case the different values an attribute takes.
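A matching sketch of these two helpers, under the same assumptions as above (information_gain is defined in the next sketch):

```python
def distinct(values):
    # Return the distinct values of the input list, preserving first-seen order.
    seen = []
    for v in values:
        if v not in seen:
            seen.append(v)
    return seen

def choose_attribute(attributes, examples):
    # Select the attribute whose split yields the highest information gain.
    return max(attributes, key=lambda a: information_gain(a, examples))
```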
As stated, the heuristic in this example is "information gain"; the mathematical details are beyond the scope of this quick post. You can find the details in the book or here. In general, the information gain from an attribute test is the difference between the original information requirement (in bits) and the new requirement after the split.
split_pn is just a simple function that splits the examples into positive ("true") and negative ("false") ones.
And that’s it. The resulting tree for running this code can be seen at the bottom.
Note: the tree we arrived at here is slightly different from Russell and Norvig's; it's actually slightly smaller! If that's due to a bug in my implementation, please let me know ;) You have a question or found an issue? Then head over to GitHub and open an issue, please!
Force measurement is used in many differing types of applications and may be accomplished with devices utilizing differing technologies. Typical applications include measurements of tension and/or compression and are directed at capturing properties such as the strength of a material, component, or bond. The technologies include mechanical measurements utilizing calibrated springs and electronic measurements using strain gages or piezoelectric sensors to capture the effect of force on a load cell. Regardless of the technology employed, the desired result is the same: measure the applied force to capture a reading for validation or further analysis.
Tension measurements are typically used when there is a desire to obtain information on the behavior of a product or material when it is pulled apart. This is a useful measurement for understanding how strong an adhesive bond is, whether a handle will support the weight of what is contained in a case, or whether a child can break off a small part of a toy.
Compression measurements are typically used to obtain information on the behavior of a product or material when it is pushed together or crushed. This type of measurement is useful in understanding how much force it will take to open a door, compact a box, or open a blister package.
Validation, Testing and Research
These measurements can be used for validation, testing and research. One particular group of measurements is even used to test and evaluate muscle strength.
Measurements may be done in validating samples of a product, such as testing a button or joint. These types of measurements are most often simple procedures performed in the production environment. Researchers and design engineers also require such measurements as they work through product development or select materials and parts for use in manufacturing. The goal is to have proper materials used in a product or package, or for the components to behave in a certain way. These measurements are typically performed in a lab, or may even be done by a specialized testing company.
The testing that is performed may be done to reach a certain value based on the criteria established by engineering, scientists or a governing agency such as the American Society for Testing and Materials (ASTM). The acceptable result could be a value of tension or compression and may involve ensuring that there is no damage. This is called nondestructive testing (NDT). In some cases, the test is done to cause a failure: this is destructive testing. Both would fall under the broad term of force measurement.
Mechanical instruments are typically used for lower accuracy measurements or measurements in conditions that are hazardous or detrimental to electronic instruments. These instruments are usually rugged and use calibrated springs and gearing to gage and display force readings. Applications such as flight line testing, product validation on the production floor and muscle strength evaluations may use a mechanical device for force measurement.
Electronic instruments are used more often for higher accuracy measurements. Electrical instruments typically offer more flexibility for the operator in that they may have communications capability for data transfer, changeable load cells for different measuring ranges, remote load cells for testing in limited spaces and incorporated functions such as statistical analyses. However, electrical instruments may not be designed for some environmental conditions and are more susceptible to damage from the elements. Some applications such as tearing force of a cloth, sampling of raw materials, plastics testing and puncture force may use an electronic instrument for force measurement.
In any case or application, one key thing to remember is that the device is only a part of the test. Many applications require repeated tests, or tests performed several times at a specified interval. For consistency, the sample needs to be tested in a specific manner. If it is not, the data or result becomes questionable.
There are many methods available to ensure a proper and consistent test result. A basic requirement is to ensure proper calibration of the device. Even within the calibration cycle, if the device has been over-ranged, it should be recalibrated. The other requirements are wrapped into test setup and performance.
Devices such as test stands, travel limits, load (force) limits, and programmable systems are tools used to ensure consistency in the test setup. Performance consistency may involve such items as specific fixtures or tips for the device, using the same operator, and applying the force to a specified location on the sample. All of these factors play a part in ensuring that the results are repeatable and valid for the test being performed.
Consistency is Key
The selection of the device and peripheral set-up options are a function of the requirement to perform the test. If the test is being used for quality verification and has a go/no-go result, the test setup may be very simple and use a basic device to capture the results at the end of production. If the test is being used to gage the acceptability of raw materials for use in producing an end item, the test may be much more complex and require a higher level of accuracy and repeatability, so a test stand, digital gage and data capture may be in order.
There are many variables to consider, and the selection is most often based on the need for compliance to a standard and/or a design criterion and the type of test being performed. In all cases, consistency is the key to capturing good data and readings and will always play a large part in obtaining valid and usable results.
Researchers transformed a leaf skeleton into iron carbide
Nature's fine structures are also suitable for technical applications: they exist in a great variety of forms, they usually display high mechanical stability and, due to their large surfaces, they provide suitable templates for catalysts and electrodes. Researchers from the Max Planck Institute of Colloids and Interfaces in Potsdam have succeeded in converting the filigree skeleton of a leaf into iron carbide using a very simple method.
Materials scientists are interested in metal carbides because they are magnetic, conduct electricity and can withstand both high temperatures and mechanical stress. However, due to the stability of the material, researchers were previously unable to shape it into a specific form. The Potsdam-based chemists have solved this problem by dipping the skeletons of leaves from a rubber tree into an iron acetate solution. They then air-dried the soaked skeleton at 40 degrees Celsius before treating it with nitrogen gas and heating it to 700 degrees Celsius.
“The structure was conserved down to the last detail”, said Zoe Schnepp, who carried out the experiment. “The skeleton provides both the basis for the form and the carbon for the reaction. As a result, we can convert the organic substance in just one step. This is what distinguishes our method from other techniques which also use biological forms as templates for inorganic structures.”
When heated, the iron acetate in the leaf skeleton is converted into iron oxide, which is then reduced by the carbon in the leaf skeleton to iron carbide. Researchers have been producing metal oxides on the basis of natural materials like leaves for some time now.
“One team has already succeeded in generating silicon carbide from pre-treated natural materials”, said Schnepp. “We’ve now developed this process even further.”
To test whether the leaf was fully converted into iron carbide, the researchers hung it in an electrolytic cell as an anode. Oxygen from the cell bubbled at the leaf and hydrogen bubbles rose at the cathode. “The experiment confirms that most of the leaf was converted into iron carbide. Apart from that, it only contains a bit of carbon,” says Zoe Schnepp. The researchers also used a permanent magnet to demonstrate that the leaf had acquired the magnetic characteristics of the iron carbide.
The new method should function with all natural carbonaceous materials. “We would now like to test it on other materials”, said Schnepp. “What is important about this study is that it shows how we can exploit nature’s formal variety to produce wafer-thin metal carbide structures in one simple step.”
You can find more information in the paper they published: "Biotemplating of Metal Carbide Microstructures: The Magnetic Leaf".
A new Coal-like Biofuel can be a Clean and Green Energy Source
This "instant coal" biofuel might be an eco-friendly alternative to coal. Photo Credit: Natural Resources Research Institute
Coal is one of the most commonly used fossil fuels across much of the world, yet it’s also one of the most pernicious. Coal-fuelled power plants release plenty of CO2, thereby worsening climate change, while the burning of coal is also responsible for horrendous air pollution. A typical coal plant releases 3.5 million tons of CO2 per year. In the United States alone, coal plants spew a whopping 1.7 billion tons of CO2 into the atmosphere in a single year.
But here comes good news: researchers at the Natural Resources Research Institute at the University of Minnesota Duluth have devised a type of solid biofuel that boasts the high energy efficiency of coal but without the polluting side effects. Better yet, it can be made from agricultural waste, which means it could serve as a clean and green energy source for powering our homes and offices.
The burning of this coal-like biofuel comes without heavy metal pollutants and with reduced sulfur levels. "As an added benefit, the biomass feedstock can be invasive plants, woody and agricultural waste, secondary wood species, and beetle-killed wood resources," the researchers explain, adding that they envision this biofuel "to be a supplement to fossil coal that helps reduce harmful coal emissions … at existing power and industrial plants."
Another advantage is that the new type of solid biofuel can be produced within a few short hours. "If you think about how Mother Nature made fossil coal, it's time, pressure and heat," a member of the research team, Tim Hagen, explains. "We're doing those same processes, but instead of millions of years, we're doing it in a few hours. And because minerals don't get into the mix, we don't have those potential pollutants."
The team of researchers has already had a trial run of their "instant coal" biofuel at an electric plant in Portland, where they replaced fossil coal with 3,500 tons of biofuel, which required only a few minor mechanical tweaks. The biofuel performed amazingly well and should thus be suitable for use on a commercial scale sometime soon. The research center's lab is already producing up to 6 tons of the biofuel.
The solid briquettes of biofuel are manufactured in a process known as torrefaction, similar to roasting coffee: raw biomass is dried and heated to 249°C in a low-oxygen atmosphere, then compressed. "Maybe you like light roast coffee, it's not as concentrated… or you can take it further and have a dark roast coffee. We can do the same thing here," one of the researchers explains. The biofuel can also be produced by another process known as hydrothermal carbonization.
Beavers and sea otters are known for having amazing fur that traps air when they dive underwater, helping to keep their blubber-less bodies warm. This is what inspired MIT engineers to create a fur-like rubbery pelt. They wanted to figure out how these mammals stay warm, and even dry, while diving in and out of icy waters.
The Plan: Make precise, fur-like surfaces of various dimensions, dunk the surfaces in liquid at different speeds, and use video imaging to measure the air trapped in the fur during each dive.
“We are particularly interested in wetsuits for surfing, where the athlete moves frequently between air and water environments,” says Anette (Peko) Hosoi, a professor of mechanical engineering and associate head of the department at MIT. “We can control the length, spacing, and arrangement of hairs, which allows us to design textures to match certain dive speeds and maximize the wetsuit’s dry region.”
The team at MIT made several molds by laser-cutting thousands of tiny holes in small acrylic blocks. Each mold was altered, varying in size and the spacing of individual hairs. The molds were then filled with a soft casting rubber.
Researchers mounted each hairy piece of rubber and submerged them in silicone oil. They chose oil so they could better observe any air pockets forming.
Results: The team learned that the spacing of individual hairs, and the speed at which they were plunged, played a large role in determining how much air a surface could trap. Surfaces with denser fur, plunged at higher speeds, generally retained more air within the hairs.
So, what does this mean? If you’ve ever worn a wetsuit you know they can be heavy and hard to move around in. Let’s pretend a wetsuit is made out of this fabricated hairy material, using air for insulation instead of soggy rubber. The bio-inspired wetsuit would be lightweight and behave better in water.
Can you imagine? Light, warm, furry wetsuits? I’m in! 🏄
The results were published in the journal Physical Review Fluids. You can view the study here, along with some pretty epic charts, diagrams, and photos.
The astronomy world is abuzz today because of ESA's announcement of the first release of data from the Gaia mission. Gaia is a five-year mission that will eventually measure the positions and motions of billions of stars; this first data release includes positions for 1.1 billion of them, and proper motions for 2 million. The map below does not show all 1.1 billion stars; rather it's a map of the density of stars that Gaia has measured so far, with brighter areas corresponding to more stars.
There's an excellent and very detailed writeup about today's announcement from the BBC. Right now the data set "only" has motions for 2 million stars because it takes more than one year's worth of observations to detect those motions; they can only compare the first year's worth of Gaia data to the previous best set of star positions derived from the Hipparcos and Tycho-2 star catalogs.
How is Gaia important to planetary science? Planetary science is, by definition, the study of the sky's moving objects. Astronomers discover minor planets (asteroids, comets, and other small bodies) by observing their motion against the background of stars. Our ability to predict the future motions of planets is therefore limited by how precise our catalog of the stellar background is. For the greatest precision, we need to know not only where the background stars are, but where they were in the past, because although we think of them as being fixed in the sky, the stars also move. That's where Gaia comes in. Comparing observations of our solar system's small bodies to the Gaia catalog will allow us to determine their orbital motions much more precisely than we have in the past.
More precise orbit determination will lead to better predictions of stellar occultations by asteroids and other bodies, helping astronomers plan observations to determine their sizes and shapes. For the outer solar system, the more precise orbits derived from Gaia data will lead to better-targeted observations and better population statistics, maybe helping to detect objects in interesting resonances with Neptune, astronomer Michele Bannister says. Another outer solar system astronomer, Wes Fraser, told me he's trying to use Gaia data to detect the wobbles arising from a trans-Neptunian object being not one body but two in a binary pair.
One very practical use of Gaia data in planetary exploration is for navigation by New Horizons to its Kuiper belt target, 2014 MU69. Because 2014 MU69 was discovered so recently, its orbit is not known to as high a precision as the team would like; Gaia can only help. The more precisely the New Horizons team can predict its future path, the more precisely they can point the spacecraft instruments at it -- in some cases they may be able to save significantly on precious data volume by reducing the number of observations they have to do to be sure they get the little thing in their field of view. They may also be able to target a closer point of closest approach, which means we could get higher-resolution images of it -- valuable because MU69 is expected to be quite small, only about 30 kilometers in diameter.
Although Gaia has detected hundreds of thousands of asteroids and a smaller number of trans-Neptunian objects, it won't lead to many new object discoveries, because its limiting magnitude of 20 is not nearly as faint as modern surveys like the Outer Solar System Origins Survey (OSSOS), which goes to about 25. However, there are hopes that Gaia observations will lead to the discovery of about 20,000 exoplanets.
For amateur astronomers and school groups, there's an ongoing citizen science opportunity with Gaia data. The public is invited to contribute observations of transient and variable phenomena through the Gaia Alerts website. Check it out and make your contribution to science!
Not to be confused with absorption (which is what a sponge does), adsorption is the attraction of tiny particles or dissolved molecules to a solid surface, where they are held by weak intermolecular forces. It is similar in concept to magnetism and the attraction due to static electricity, but much weaker. In theory, every atom in the universe has some degree of affinity for every other atom in the universe, just like gravity. But, just as gravity requires enormous masses like planets and stars to show its effects, adsorption requires extremely tiny distances to show its effects.

In adsorption, the particle in question is randomly bounced around the solution by collisions with water molecules and other molecules in the water. (This is called Brownian motion. It is estimated that an atom or molecule in water is involved in a million-billion-trillion, or 10^27, collisions with other atoms or molecules every second. This is part of the definition of temperature.) Eventually, by chance, it will be bounced so close to the surface of a wall or another larger particle that there are very few water molecules separating it from the surface. When that happens, those few molecules produce only a few collisions from that side, and the particle is overwhelmed by the continual barrage of collisions from the solution side, tending to become "plastered" to the surface. This is the "physical" half of adsorption. The "chemical" half occurs if there is any chemical affinity between the particle and the material of the surface. If there is, the particle will become attached (adsorbed) and stay there; if not, it will bounce off right away or simply diffuse away later.

The adsorptive forces (called van der Waals or London forces) are so weak that adsorbed substances can become desorbed rather easily: by adding certain acids, by heating the system, or by merely removing the contaminant from the influent water. For example, activated carbon filters or ion exchange beds nearing exhaustion are subject to desorption if the water quality suddenly changes for the better. That shows that these treatment techniques are equilibrium (balance) phenomena in which sorption and desorption both occur and achieve an average condition, like a well-matched tug-of-war.

Since adsorption requires a surface, commercial adsorbent materials have very large surface areas and are exemplified by activated carbon, activated alumina, and fine powders such as baking soda. But many substances are so very insoluble or otherwise so readily adsorbable that even small surface areas can make a big difference. For example, most heavy metal ions (lead, mercury, copper, cadmium, silver, chromium) adsorb so strongly to the walls of both glass and plastic sample bottles that more than half of the total contamination can be missed in an analysis if the sample bottles are not treated with nitric acid first, to cause desorption. Similarly, many chlorinated hydrocarbons such as polychlorinated biphenyls (PCBs) adsorb so readily to both metal and plastic plumbing and filter materials that even coarse prefilters remove them very well.
The adsorption and reduction of disinfectant chlorine by activated carbon is a special case. Activated carbon is a mild reducing agent and chlorine is a strong oxidizing agent, so after chlorine becomes adsorbed, it actually reacts with the carbon. The chlorine is reduced to chloride ion (as in table salt and sea water), one atom of carbon is oxidized to carbon dioxide, and both are released to the solution (desorbed). Meanwhile, most of the spots on the activated carbon where all this took place become "auto-regenerated" back to their original, like-new condition, ready to adsorb again. For free available chlorine (FAC), this takes only about fifteen minutes, which means that a small amount of carbon can achieve an acceptable steady-state condition if the flow rate is slow or intermittent. For "combined chlorine" (monochloramine), the reaction is much slower, and more carbon or more contact time is needed to achieve equivalent reductions. The chemical reactions between activated carbon's "active sites" (C*) and these forms of chlorine are shown below. Note that any surface oxides on the carbon are recycled when reacted with monochloramine, while they are oxidized to CO2 and lost when reacted with free chlorine.
Free Chlorine
Cl2 + H2O <=> HOCl + H+ + Cl– (forming "aqueous chlorine")
C* + 2Cl2 + 2H2O => C*O2 + 4H+ + 4Cl– (overall reaction)
C* + HOCl => C*O + H+ + Cl–
C*O + HOCl => C*O2 + H+ + Cl–
Combined Chlorine: Monochloramine
C* + NH2Cl + H2O => C*O + NH3 + H+ + Cl–
C*O + 2NH2Cl => C* + N2 + H2O + 2H+ + 2Cl–
Finally, most dissolved/suspended particles and molecules in drinking water that are highly adsorbable to something usually do become adsorbed to a larger particle before reaching the point of use. Thus, adsorbable contaminants can often be removed by mechanical fine-filtration because the contaminant in question is already adsorbed to a larger particle. If you remove the particle, you remove the adsorbed contaminants along with it. This commonly applies to heavy metal ions, many pesticides, other chlorinated hydrocarbons, viruses, and asbestos fibers.
About Activated Carbon: Granular activated carbon (GAC) and powdered activated carbon (PAC) are the predominant adsorbents used in our industry. They can be made from nearly anything organic: coal, petroleum, wood, coconut shells, peach pits, ion exchange resin beads, fabrics, even waste plastics. The starting material is first charred: heated without air or oxygen, so it doesn't burn up. Everything that can be vaporized or melted bubbles out as tar or pitch, leaving many holes and channels. Then the charred material is heated further, to above 1000°C (hot enough to melt aluminum and lead), with the introduction of live steam or other activating chemicals. The superheated water vapor is extremely corrosive, etching more holes and extending channels to an amazing degree. Metallic impurities are preferentially attacked and washed out, resulting in a significant purification of the original material.
However, the heat of activation does more than extend holes and channels and increase the surface area of carbon: it also changes the fundamental crystal form from amorphous “carbon black” to the perfect crystalline array of graphite plates. The carbon atoms in graphite are arranged in sheets or plates of interlocking six-atom rings that look like slices through a honeycomb. Such a perfect arrangement causes the London forces to focus and concentrate at the surface, making activated carbon the best (strongest and most general) adsorbent known.
After activation, the carbon may be treated further to produce specific chemical qualities on the surface. For example, an acidic environment produces carbon with maximum capacity for heavy metals but minimal capacity for chlorinated organics, while an alkaline environment does the opposite. Most grades used in our industry are made for organic adsorption. When activation is complete, the carbon is a delicate, airy material that is so full of holes, it can barely hold together. It is crushed to a powder, and then proprietary binders are added to form granules of the desired size. The final product has a total internal and external surface area of more than 1000 square meters per gram, or half a football field inside a piece the size of a pea.
Activated carbon adsorption is useful because the material has strong chemical affinities for several important classes of contaminants that are common in water.
- Disinfectant chlorine: “Free available chlorine” (FAC) is readily adsorbed, then chemically reduced, and finally desorbed as chloride ion along with one molecule of carbon dioxide, with auto-regeneration of most of the carbon’s active sites and nearly infinite capacity. “Combined chlorine” (monochloramine) is less easily adsorbed, requiring more carbon or reduced flow rate for equivalent performance.
- Organic compounds containing chlorine and other halogens: Simple halogenated hydrocarbons are highly adsorbable to activated carbon. This includes a great many pesticides (DDT, Endrin, Lindane, Chlordane, etc.), industrial solvents (trichloroethylene, trichloroethane, tetrachloroethylene, carbon tetrachloride, etc.), and disinfection byproducts (THMs including chloroform, chloral hydrate, etc.).
- Organic compounds containing benzene rings: These include some of the most toxic chemicals, such as benzene, toluene, dioxins, polychlorinated biphenyls (PCBs), and phthalate esters (plasticizers for vinyls).
- Heavy metals: Lead, cadmium, and mercury adsorb readily, both as dissolved ions and as colloidal oxide or carbonate particles, but the capacity is limited, similar to the capacity for THMs.
- Taste and Odor (T&O) compounds: The substances produced by microbes that are responsible for the common musty-earthy-mildewy T&O are extremely well adsorbed and with very great capacity.
There is great variation in the adsorbability of dissolved/suspended substances, and also great variability in the adsorptive capacity of different adsorbents. A bed of granular activated carbon (GAC) may be exhausted with respect to chloroform and other volatile organic compounds (VOCs) after only a few hundred bed volumes, yet continue to adsorb PCBs for many thousands more. Different grades and types of activated carbon have different capacities for the same contaminant as well as various contaminants, which means that one must be very careful and specific in making comparisons.
Freundlich carbon isotherm
Chemists have developed a standard procedure for comparison of adsorption capacities, called an isotherm. A Freundlich carbon isotherm is determined by preparing several identical bottles of powdered activated carbon suspended in water. (In German, "eu" is pronounced "oi," so Freundlich sounds like "Froindlich.") Varying amounts of a contaminant are added to the bottles, and all are mixed until adsorption has reached equilibrium under those conditions of temperature and pressure. Then the carbon is filtered out and the solutions are analyzed to find how much contaminant remains unadsorbed in each one. The Freundlich equation is used to calculate the amount of contaminant that was adsorbed per gram of carbon, and each data point is graphed on logarithmic graph paper with the carbon capacity in mg/g on the Y-axis and the final equilibrium concentration in mg/L on the X-axis. Finally, the "average" line representing all of the data points is drawn. That average line on the graph paper is called the isotherm for that contaminant and that carbon under those conditions. But that line covers a range of capacities; the one value used for comparison purposes has been designated by international agreement to be the capacity in mg/g of carbon on the Y-axis that corresponds to the value of 1.0 mg/L on the X-axis. If the isotherm does not cross the 1.0 mg/L point, the line is artificially extended ("extrapolated") to that level for the purpose. The Freundlich equation can be represented as:
qe = K · Ce^(1/n)

where:
qe = amount of contaminant adsorbed per gram of carbon at equilibrium, in mg/g (Y-axis value)
Ce = concentration of contaminant in solution at equilibrium, in mg/L (X-axis value)
K = a constant
1/n = another constant
The Ce is determined by analysis; the qe is calculated using the equation and the two Freundlich constants that were determined by the chemist who published the data.
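To make the fitting procedure concrete, here is a small Python sketch. On log-log axes the Freundlich equation is a straight line, log qe = log K + (1/n) log Ce, so the two constants can be recovered by a least-squares fit. The data points below are invented purely for illustration:

```python
import math

# Hypothetical equilibrium data points: (Ce in mg/L, qe in mg/g carbon).
data = [(0.25, 1.5), (1.0, 3.0), (4.0, 6.0)]

# Fit the straight line log10(qe) = log10(K) + (1/n) * log10(Ce).
xs = [math.log10(ce) for ce, _ in data]
ys = [math.log10(qe) for _, qe in data]
count = len(data)
mean_x, mean_y = sum(xs) / count, sum(ys) / count
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

one_over_n = slope       # the Freundlich exponent 1/n
K = 10 ** intercept      # the Freundlich constant K

print(f"1/n = {one_over_n:.2f}, K = {K:.2f} mg/g")
```

Note that at Ce = 1.0 mg/L the equation reduces to qe = K, so the fitted K is exactly the capacity value used for comparison by the convention described below.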
By convention in our industry, chloroform, the main THM, has been selected as the least adsorbable contaminant that activated carbon can be claimed to adsorb effectively; any contaminant that has a Freundlich isotherm qe value less than that of chloroform cannot be said to be removed by carbon adsorption. We "draw the line" at chloroform; anything less adsorbable than chloroform is deemed not worth the trouble. Example: in one test series, the Freundlich capacity for chloroform is 2.6 mg/g GAC (the isotherm passes through the 1.0 mg/L point on the X-axis where the Y-axis reads 2.6 mg/g GAC), while the equivalent value for trichloroethylene is 30 mg/g GAC. That is a much higher value, meaning that the particular GAC will adsorb trichloroethylene much more easily than chloroform. However, in the same data set, the capacity value for methylene chloride is only 1.3 mg/g, and thus we say that methylene chloride cannot be adsorbed efficiently by that GAC. The carbon's capacity for it is too small to make an economically viable product.
It is important to remember that carbon capacity figures derived from Freundlich isotherms are to be used only for comparisons, e.g., carbon A is better than carbon B, or contaminant X is easier to remove than contaminant Y. They should not be used as concrete capacities to calculate how long a filter should last. The reason is that the isotherm data are produced at equilibrium, which may take several days of stirring in the lab to achieve. But a bed of GAC or a filter cartridge operates on a dynamic, flowing basis, and equilibrium conditions may not be achieved even after a weekend of downtime.
This is one of the most commonly asked questions and deserves an honest answer. Below is first a short answer then a more thorough answer. There are three things we need to consider when answering the starlight question.
1. Scientists cannot measure distances beyond 100 light years accurately.
2. No one knows what light is, or whether it always travels at the same speed throughout all time, space and matter.
3. The creation was finished or mature when God made it. Adam was full-grown, the trees had fruit on them, the starlight was visible, etc.
Let me elaborate on these 3 points.
First, no one can measure star distance accurately. The farthest accurate distance man can measure is 20 light years (some textbooks say up to 100), not several billion light years. Man measures star distances using parallax trigonometry. By choosing two measurable observation points and making an imaginary triangle to a third point, and using simple trigonometry, man calculates the distance to the third point. The most distant observation points available are the positions of the earth in solar orbit six months apart, say June and December. This gives a base for our imaginary triangle of 186,000,000 miles, or 16 light minutes. There are 525,948 minutes in a year. Even if the nearest star were only one light year away (and it isn't), the angle at the third point measures only about 0.0017 degrees. In simpler terms, a triangle like this would be the same angle two surveyors would see if they were standing sixteen inches apart and focusing on a third point 8.24 miles away. If they stayed 16 inches apart and focused on a dot 824 miles away, they would have the same angle as an astronomer measuring a point 100 light years away. A point 5 million light-years away is impossible to figure with trigonometry. The stars may be that far away, but modern man has no way of measuring those great distances. No one can state definitively the distance to the stars. The stars may indeed be billions of light years away, but man cannot measure those distances.
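The small-angle arithmetic behind these figures can be reproduced with a few lines of Python (a sketch using the numbers quoted above):

```python
import math

LIGHT_MINUTES_PER_LIGHT_YEAR = 525_948  # minutes in a year, as quoted above
BASELINE_LIGHT_MINUTES = 16             # diameter of Earth's orbit, ~186 million miles

def parallax_angle_degrees(distance_light_years):
    # Full angle subtended at the star by the orbital baseline
    # (small-angle approximation).
    distance_light_minutes = distance_light_years * LIGHT_MINUTES_PER_LIGHT_YEAR
    return math.degrees(BASELINE_LIGHT_MINUTES / distance_light_minutes)

for d in (1, 100, 5_000_000):
    print(f"{d:>9} light years -> {parallax_angle_degrees(d):.10f} degrees")
# 1 light year gives ~0.0017 degrees; 100 light years gives ~0.000017 degrees;
# at 5 million light years the angle is far too small to measure this way.
```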
Several other methods, such as luminosity and red shift, are employed to try to estimate greater distances, but all such methods have serious problems and assumptions involved. For a more complex and slightly different answer to the starlight question from a Christian perspective, see the book Starlight and Time by Russell Humphreys, available from www.icr.org.
Second, the speed of light may not be a constant. It does vary in different media (hence the rainbow effect of light going through a prism) and may vary in different places in space. The entire idea behind the black hole theory is that light can be attracted by gravity and be unable to escape the great pull of these imaginary black holes. No one knows what light is, let alone whether its velocity has been the same all through time and space. Since atomic clocks use the wavelength of the cesium-133 atom as a standard of time, if the speed of light were decaying, the clock would be changing at the same rate and the change would therefore not be noticed.
Third, the creation account states that God made light before He made the sun, moon, or stars. The rest of creation was mature, so starlight was probably mature at creation as well. I would ask the question: how old was Adam when God made him? Obviously he was zero years old. But how old did he look? He was a full-grown man. The trees were full-grown with fruit on them the first day they were made. The creation had to be that way; it would not work otherwise. Stars and their light were made at the same time. The God that I worship is not limited by anything involving time, space or matter.
Written by: Nancy Khalek
It is important to remember that Islamic culture was initially an oral one, based not on the written word but on the memorization and recitation of all types of knowledge, from poetry to the Quran to battle stories and the hadith themselves. It is unclear precisely when the transition from oral to written culture took place, but there is some evidence suggesting that people were compiling notes and "books" as early as the mid-1st century of Islam, that is, the second half of the 7th century of the Common Era. The earliest recorded fragments of the Quran are not from books, but from verses painted or inscribed on artifacts such as camel bones that date from the mid-7th century.
Sunnis, like all Muslims, believe that the Quran is the only actual "scripture" revealed to Muhammad by God, and they consider the text to be the inimitable and uncorrupt record of God's communication with humans during the twenty-three years of the Prophet's career. What distinguishes Sunni Islam, however, is its reliance upon hadith within the broader historical and literary traditions. The hadith elucidate, clarify, and even emend some of the legal rulings and prescriptions contained in the Quran, and Sunni jurists developed methodologies for approaching hadith in order to apply this second body of texts to rulings and interpretations based on or stemming from the Quran itself. Thus the hadith and their accompanying literary genres are crucial for the formation of Sunni doctrine. They serve as secondary sources for the interpretation of the Quran. Different schools of law and different sects have devised varied methods for interpreting this body of texts.
Every hadith is accompanied by an isnad, a list of names also called a "chain of transmission" that details who heard and passed down a particular narrative report. Therefore the credibility and scholarly pedigree of the men and women listed in an isnad were of vital importance for determining the veracity and accuracy of any given hadith. Over time, certain transmitters developed reputations ranging from "extremely trustworthy" to "well intentioned, but of faulty memory" to outright "deceitful." Analyzing the names in a given isnad thus provided medieval scholars with technical criteria for determining the utility of a given hadith, whether for establishing doctrine and practice or for applicability in legal rulings. Credibility and scholarly pedigree comprised the essential information about a given transmitter or scholar. This information was contained in biographies of these men and women, which were in turn collected into biographical compilations. ‘Ilm al-Rijal, the "Science of Men," was the study of the biographies and training of Muslim scholars, applied to discerning the reliability of people who transmitted information about the Prophet and the first four Sunni caliphs. The genre eventually expanded to encompass scholars who learned from and passed on hadith.
Cockroaches are among the most disagreeable of household insects. They are associated with numerous pathogenic organisms, are a source of human allergens, and give rise to assumptions of poor sanitation.
Common household pests are American cockroaches, German cockroaches, and brown-banded cockroaches.
Cockroaches start from an egg, develop into a nymph, and finally become an adult. An egg case laid by the adult usually contains between 16 and 50 eggs.
Cockroaches are mechanical vectors that carry various pathogens on their bodies. They have been found to carry the pathogens that cause tuberculosis, cholera, leprosy, dysentery and typhoid, as well as over 40 other bacteria (like salmonella, staphylococcus and streptococcus) or viruses (including polio) that can cause disease. Their shed skins may trigger allergies or asthma.
To get rid of an infestation, insecticide is usually used to eliminate the roaches. However, this must be combined with good hygiene and sanitation practices.
- Cockroaches are associated with unhygienic and filthy conditions
- Known to carry bacteria and germs that cause diseases
- All three forms of the life cycle (egg cases, nymphs and adults) can be found in the infested area
- Unnecessary sources of water, food and hiding places should be eliminated
- Use traps, residual and non-residual insecticide sprays and insecticide dusts to eradicate cockroaches
How Is Lung Cancer Diagnosed?
During a physical exam, a doctor may check for signs such as:
- Swollen lymph nodes above the collarbone
- Weak breathing
- Abnormal sounds in the lungs
- Dullness when the chest is tapped
- Unequal pupils
- Droopy eyelids
- Weakness in one arm
- Expanded veins in the arms, chest, or neck
- Swelling of the face
Some lung cancers produce abnormally high blood levels of certain hormones or substances such as calcium. If a person shows such evidence and no other cause is apparent, a doctor should consider lung cancer.
Lung cancer, which originates in the lungs, can also spread to other parts of the body, such as distant bones, the liver, adrenal glands, or the brain. It may be first discovered in a distant location, but is still called lung cancer if there is evidence it started there.
Once lung cancer begins to cause symptoms, it is usually visible on an X-ray. Occasionally, lung cancer that has not yet begun to cause symptoms is spotted on a chest X-ray taken for another purpose. A CT scan of the chest may be ordered for a more detailed exam.
Though exams of mucus or lung fluid may reveal fully developed cancer cells, diagnosis of lung cancer is usually confirmed through a lung biopsy. With the patient lightly anesthetized, the doctor guides a thin, lighted tube through the nose and down the air passages to the site of the tumor, where a tiny tissue sample can be removed. This is called a bronchoscopy and the scope is called a bronchoscope. This is useful for tumors near the center of the lung.
If the biopsy confirms lung cancer, other tests will determine the type of cancer and how far it has spread. Nearby lymph nodes can be tested for cancer cells with a procedure called a mediastinoscopy, while imaging techniques such as CT scans, PET scans, bone scans, and either an MRI or a CT scan of the brain can detect cancer elsewhere in the body.
If fluid is present in the lining of the lung, removal of the fluid with a needle (called a thoracentesis) may help diagnose cancer as well as improve breathing symptoms. If the fluid tests negative for cancer cells -- which occurs about 60% of the time -- then a procedure known as a video-assisted thoracoscopic surgery (or VATS) may be performed to examine the lining of the lung for tumors and to perform a biopsy.
Because exams of saliva and mucus, as well as chest X-rays, have not proved particularly effective in detecting the small tumors characteristic of early lung cancer, annual chest X-rays are not recommended for lung cancer screening.
However, groups such as the American Cancer Society and the National Cancer Institute say low-dose helical CT screening should be offered to those at high risk of lung cancer. That includes smokers and former smokers ages 55 to 74 who have smoked for 30 pack-years or more and either continue to smoke or have quit within the past 15 years. A pack-year is the number of cigarette packs smoked each day multiplied by the number of years a person has smoked; for example, one pack a day for 30 years and two packs a day for 15 years both equal 30 pack-years. Their guidelines are based on research showing that CT screening decreases the chance of death overall but increases the chance of a false alarm that requires more testing.
A novel vaccine developed to protect people from both Lassa fever and rabies has shown promise in preclinical testing, reports a new study. The findings of the study are published in the journal Nature Communications.
The investigational vaccine, called LASSARAB, was developed and tested by scientists at Thomas Jefferson University in Philadelphia; the University of Minho in Braga, Portugal; the University of California, San Diego; and the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health.
‘Lassa fever belongs to the same group of hemorrhagic fevers as Ebola and, like Ebola, has been a major threat in West Africa, infecting hundreds of thousands of people each year. A new vaccine has now been demonstrated to be effective against both rabies and Lassa in animal models.’
The inactivated recombinant vaccine candidate uses a weakened rabies virus vector or carrier. The research team inserted genetic material from Lassa virus into the rabies virus vector, so the vaccine expresses surface proteins from both the Lassa virus and the rabies virus. These surface proteins prompt an immune response against both Lassa and rabies viruses. The recombinant vaccine was then inactivated to "kill" the live rabies virus used to make the carrier.
There are currently no approved Lassa fever vaccines. Although Lassa fever is often a mild illness, some people experience serious symptoms, such as hemorrhage (severe bleeding) and shock. The overall Lassa virus infection case-fatality rate is about one percent, according to the World Health Organization (WHO), but that rate rises to 15 percent for patients hospitalized with severe cases of Lassa fever. People contract Lassa virus through contact with infected Mastomys rats and exposure to an infected person's bodily fluids. Lassa fever is endemic to West Africa where these rats are common. In 2018, Nigeria experienced its largest-ever Lassa fever outbreak, with 514 confirmed cases and 134 deaths from Jan. 1 through Sept. 30, according to the Nigeria Centre for Disease Control.
Africa is also at high risk for human rabies. The WHO estimates that 95 percent of the estimated 59,000 human rabies deaths per year occur in Africa and Asia. Bites or scratches from infected dogs cause nearly all human rabies deaths. Effective rabies vaccines and post-exposure shots are available, but many deaths still occur in resource-limited countries.
The newly published findings show that LASSARAB, when administered with GLA-SE adjuvant (an immune-response-stimulating compound), elicits antibodies against Lassa virus and rabies virus in mouse and guinea pig models. The vaccine also protected guinea pigs from Lassa fever when they were exposed to the virus 58 days after vaccination.
Prior research indicated that an antibody-mediated immune response is not correlated with protection from Lassa fever, the authors note. However, the new findings show that high levels of non-neutralizing immunoglobulin G (IgG) antibodies that bind to the Lassa virus surface protein correlate with protection against Lassa virus. Levels of this type of antibody could potentially be a Lassa fever correlate of protection used to determine vaccine efficacy, according to the authors. They note the next step is to evaluate the experimental vaccine in nonhuman primates before advancing to human clinical trials.
Dancing and Singing through the Bill of Rights
In this lesson, students analyze the Bill of Rights and explore the importance of the issues involved. The students employ their musical and kinesthetic intelligences in a creative performance singing and dancing to learn and teach the Bill of Rights. They perform the Bill of Rights in familiar vocabulary to their parents and members of the community (senior citizens).
The learners will:
- read and analyze the “Bill of Rights” using the Frayer model.
- write a four-question survey.
- survey family members and compile data.
- recite and sing the “Bill of Rights” in familiar language.
- Student copies of the “Bill of Rights”
- Copies of the Frayer Model (includes Spanish Version)
- Constitutional Amendment Poster Pages (handout) printed on poster board and displayed for students to see from their seats
- Song sheets for each child (handout The Amendment Song)
Interactive Parent / Student Homework: Student groups create a four-question survey related to the Bill of Rights. The students bring home the survey to get family input. They may invite family members to join in on the trip to the retirement home, encouraging more community participation.
- The Bill of Rights at the National Archives <http://www.archives.gov/national_archives_experience/charters/bill_of_rights.html> 6 August 2003
- Frayer Model Map https://image.slidesharecdn.com/frayermodelmap-100212103407-phpapp01/95/frayer-model-map-1-728.jpg?cb=1265970859
- Schoolhouse Rock (video)—“America Rock”-1973. Disney Studios: 1997. ASIN: 1569494088
Pass out copies of the “Bill of Rights.” Ask the students to recall what the “Bill of Rights” is and why the amendments are important. Read the amendments aloud as a group.
Place students into ten cooperative groups (2-3 students per group). Assign each group one of the amendments in the “Bill of Rights.” Hand out the Frayer Model. Each group completes the Frayer model for the amendment assigned. After 15 to 20 minutes, have each group present its model to the rest of the class. These responses can then be hung in the classroom. Teacher Note: The Frayer Model is a tool used to help students develop their vocabulary. Frayer believes that students develop a stronger understanding of concepts when they study them in a relational manner. Students write a particular word in the middle of a box and proceed to list characteristics, examples, non-examples, and a definition in other quadrants of the box. They can proceed in any order: using the examples and characteristics to help them formulate a definition, or using the definition to determine examples and non-examples.
Each group collaboratively writes a four-question survey related to the “Bill of Rights.” The questions should generate answers that can be grouped and graphed. Examples: Which amendment do you think is most important? Do you think Amendment Two is as important today as it was at the time it was written?
Provide each group computer access to type the survey. They print out enough copies for each group member (and the teacher) and save the survey to a master disk.
Groups bring home their surveys (and copies of the “Bill of Rights”) and complete them with their families. The survey results are brought back the following day. Encourage the students to talk with their families about their responses to further understand their opinions and recognize the importance of the amendments to the Constitution.
Groups review and compare results from the survey. The groups decide how they will compile, organize, and display the data gathered, such as in a bar graph, circle graph, chart, etc. The students display their data neatly and creatively. The groups should add a paragraph describing the results of their survey.
All groups present their data to the class. The class discusses any trends evident from the surveys.
Place the students in the same ten cooperative groups from day one. The groups will have the same amendment from the first day (used in the Frayer model). The facilitator hands out the posters of Handout Three: Constitutional Amendment Poster Pages to the appropriate groups. These posters are written in language which is “user friendly,” or more modern for the students.
Give the groups fifteen minutes to come up with an action or dance move that shows the meaning of the assigned amendment.
Pass out copies of The Amendment Song. Lead the class through the song to the tune of the “Twelve Days of Christmas.” After one time through, have a representative from each group teach the class the creative movement to match each amendment. Sing the song through again with the new movements. Practice several times until everyone knows the song and the motions.
Day Four (may be several days later):
Take the class on a field trip to a local retirement home to share their song (and any related projects/performances) with members of the community. Bring along a poster with the lyrics to allow the residents to join in and sing along.
Assess whether the students know the amendments by passing out blank copies of The Amendment Song (Handout Four). The teacher should decide in advance whether the students fill in the lyrics to the song or use their own words.
The teacher arranges a field trip to a retirement home (or a younger classroom in the school). The students share their song as a performance. (Other appropriate related projects can be part of the performance.)
Strand PHIL.II Philanthropy and Civil Society
Standard PCS 02. Diverse Cultures
Benchmark E.4 Demonstrate listening skills.
Standard PCS 05. Philanthropy and Government
Benchmark E.2 Identify why rules are important and how not all behaviors are addressed by rules.
Strand PHIL.IV Volunteering and Service
Standard VS 01. Needs Assessment
Benchmark E.1 Identify a need in the school, local community, state, nation, or world.
Date: March 26, 2014
Who owns the world’s forests, and who decides on their governance? The answers to these questions are still deeply contested. To many Indigenous Peoples and local communities who have lived in and around forests for generations, the forests belong to them, under locally defined systems of customary tenure. In most countries, however, governments have claimed ownership of much of the forest estate through historical processes of expropriation, and those claims have been formalized in statutory laws. While governments are increasingly recognizing local ownership and control of forests, forest tenure arrangements remain in dispute or unclear in many places, including low-, middle-, and high-income countries.
One of the trickiest processes to represent accurately in global climate models is how Earth gives off and absorbs heat. This phenomenon is called ground heat flux (expressed with the variable G), and it varies considerably depending on a region’s local geography and climate.
For example, the magnitude of G is smaller in a wet, densely canopied rain forest than in a desert, where temperatures plummet and soar over the course of a day. It also changes with the seasons, as verdant regions dry out or freeze in summer or winter. In a new study, Purdy et al. compare different models of G in 88 sites across the globe and identify which make the most accurate predictions at different time scales.
The team compared six models of G forced by real-world data sets, including satellite measurements of vegetation cover and temperature of Earth’s surface, with observations from FLUXNET, a network of more than 650 towers. These towers measure surface energy exchange and gas fluxes on every continent in locations as diverse as tropical and coniferous forests, croplands, wetlands, and tundra.
The authors’ analysis revealed a range of strengths and weaknesses among the models and quantified where the largest model disagreement exists seasonally and globally. Models forced by net radiation explain more day-to-day variability in G, whereas models forced by temperature resulted in lower errors. A spatial assessment shows that model disagreement is greatest during winter months at high latitudes.
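As a concrete illustration of the first class of models, here is a minimal Python sketch of a net-radiation-forced G estimate modulated by vegetation cover. The functional form is a common simple parameterization, and the coefficients are illustrative assumptions rather than values used in the study:

```python
def ground_heat_flux(net_radiation_wm2, fractional_veg_cover):
    # G as a vegetation-dependent fraction of net radiation Rn.
    # The G/Rn ratios below are assumed, illustrative values.
    bare_soil_ratio = 0.30    # bare soil passes a large share of Rn into the ground
    full_canopy_ratio = 0.05  # a dense canopy shields the soil surface
    ratio = (full_canopy_ratio * fractional_veg_cover
             + bare_soil_ratio * (1.0 - fractional_veg_cover))
    return ratio * net_radiation_wm2

# The rain forest vs. desert contrast from above, in numbers:
print(ground_heat_flux(500.0, 0.9))  # dense canopy: ~38 W/m^2
print(ground_heat_flux(500.0, 0.1))  # sparse cover: ~138 W/m^2
```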
The new study highlights the importance of G and points to areas ripe for model refinement. Results from the study have implications for other models that rely on G, such as those used to calculate evapotranspiration. (Journal of Geophysical Research: Biogeosciences, doi:10.1002/2016JG003591, 2016)
—Emily Underwood, Freelance Writer |
Children who have never had chickenpox can be vaccinated at 12 to 15 months and again at 4 to 6 years of age. Adolescents and adults who have never had chickenpox can also get the vaccine. The vaccine has proven very effective in preventing severe chickenpox. The CDC Advisory Committee on Immunization Practices, the American Academy of Pediatrics, and the American Academy of Family Physicians recommend that all children be vaccinated for chickenpox.
Many states now require vaccination prior to entry into preschool or public schools.
Chickenpox is a highly contagious disease that usually occurs during childhood. By adulthood, more than 90 percent of Americans have had chickenpox.
The disease is caused by the varicella-zoster virus (VZV). Transmission occurs from person-to-person by direct contact or through the air by coughing or sneezing.
Until 1995, chickenpox infection was a common occurrence, and almost everyone had been infected by the time he or she reached adulthood. However, the introduction of the chickenpox vaccine in 1995 has caused a decline in the incidence of chickenpox in all ages, particularly in ages one through four years. The varicella vaccine can help prevent this disease, and two doses of the vaccine are recommended for children, adolescents, and adults.
Symptoms are usually mild among children, but may be life-threatening to adults and people of any age with impaired immune systems. The following are the most common symptoms of chickenpox. However, each individual may experience symptoms differently. Symptoms may include:
The initial symptoms of chickenpox may resemble other infections. Once the skin rash and blisters occur, it is usually obvious to a doctor that this is a case of chickenpox. If a person who has been vaccinated against the disease is exposed, then he may get a milder illness with a less severe rash and mild or no fever. Always consult your doctor for diagnosis.
Chickenpox is spread by exposure to the saliva or other respiratory secretions of an infected person. It can also be spread by being exposed to the fluid from the blistering rash. Once exposed, the incubation period is typically 14 to 16 days, but it may take as few as 10 and as many as 21 days for the chickenpox to develop. Chickenpox is contagious for one to two days before the appearance of the rash and until the blisters have all dried and become scabs. The blisters usually dry and become scabs within four to five days of the onset of the rash. Children should stay home and away from other children until all of the blisters have scabbed over.
Family members who have never had chickenpox have a 90 percent chance of becoming infected when another family member in the household is infected.
The rash of chickenpox is unique and therefore the diagnosis can usually be made on the appearance of the rash and a history of exposure.
Specific treatment for chickenpox will be determined by your doctor based on:
Treatment for chickenpox may include:
Children should not scratch the blisters because it could lead to secondary bacterial infections. Keep fingernails short to decrease the likelihood of scratching.
Most people who have had chickenpox will be immune to the disease for the rest of their lives. However, the virus remains dormant in nerve tissue and may reactivate, resulting in herpes zoster (shingles) later in life. Very rarely a second case of chickenpox does occur. Blood tests can confirm immunity to chickenpox in people who are unsure if they have had the disease.
Complications can occur from chickenpox. Those most susceptible to severe cases of chickenpox are adults and people with impaired immune systems. Complications may include:
A new imaging sensor created by a team at Carnegie Mellon University and the University of Toronto allows depth cameras to operate effectively in bright sunlight. The researchers, including Srinivasa Narasimhan, CMU associate professor of robotics, developed a mathematical model to help cameras capture 3D information and eliminate unneeded light, or “noise,” that often washes out the signals necessary to detect a scene’s contours.
Sensor Technology: Why has capturing light been such a challenge?
Srinivasa Narasimhan: There’s no problem capturing light. In fact, the more light the better, in many cases, but here you have two competing sources of light. You have the light source that the sensor itself has, which is typically very low power, and you have the outdoor light, which is typically much higher power. You want to capture one, but not the other.
Sensor Technology: How does the sensor choose which light rays to capture?
Narasimhan: The sensor is only capturing the light that it is sending out, rather than anything else, by using a very low-power laser scanner or projector. It is shining light on one row and capturing that light. That allows us to be energy efficient.
Sensor Technology: What does your prototype look like?
Narasimhan: We have two prototypes. One has a laser projector plus a rolling shutter camera — the most common sensor out there today. Most iPhone cameras use rolling shutters.
Rolling shutter means that you’re capturing light one row at a time. What we thought was: If a laser projector is projecting one line at a time, and the rolling shutter is capturing one line at a time, you can synchronize those two things. They’ll capture only the lines that the laser projector is sending out. That led us to this more important theory of energy efficiency. For example, with this prototype, you’re only capturing light rays that are along the single plane. So you scan a sheet of light, and you just capture that same sheet of light.
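A minimal sketch of that synchronization loop, with hypothetical device calls (`project_line` and `read_row` are placeholders, not a real camera or projector API):

```python
def capture_synchronized(projector, camera, num_rows):
    """Sweep one laser line down the scene while the rolling shutter exposes
    the matching sensor row, so only the projected sheet of light is captured
    and ambient light falling on every other row is ignored."""
    frame = []
    for row in range(num_rows):
        projector.project_line(row)         # illuminate a single scene line
        frame.append(camera.read_row(row))  # expose only the matching row
    return frame
```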
Sensor Technology: Why is it important for these sensors to be energy efficient?
Narasimhan: If you want to put this [technology] on a cell phone or a mobile robot, it can’t be really heavy. You really have to think about energy. Once you make light sources smaller, they have to be low power. It’s very hard to be very high power and very small. Of course, if you send a rover to the moon, a lot of the energy is being spent by sensing applications rather than exploration or driving the robot itself. Therefore, you have to conserve energy.
Sensor Technology: How does the sensor enable 3D imaging?
Narasimhan: 3D imaging means that you’re capturing light rays that are intersecting or triangulating. If you wanted to capture only light rays that maybe bounce off three times, or two times, or ten times in the scene, those kinds of things are not easy to do now. With this kind of technology, you can capture and choose the particular light rays, even exotic ones, that you could not before. Ordinary cameras and ordinary illumination systems blast light everywhere, and then capture light from everywhere. That’s usually not a great thing to do.
We can automatically remove all of the reflections from a disco ball, because we don’t capture any of them. Or, if we want, we can capture only those exotic reflections. Light bounces around in many interesting ways in a scene, and we now have a way of controlling what to project and what to capture in a very efficient way.
Sensor Technology: What kinds of applications are possible because of this kind of technology?
Narasimhan: You can take these sensors outdoors. So you can put them on mobile robots, you can put them on your cell phones, and you can create these outdoor Kinect-type sensors. We’re also now thinking of putting this on rovers that might go to distant planets or icy moons one day. |
Do you also struggle with the R sound?
If the answer is yes, don’t miss out on today’s video lesson.
You see, in American English, there’s an ‘easy’ R, and a ‘tricky’ R.
The R that slips away and disappears when we don’t pay enough attention.
So first – It’s not you. It’s the R.
When the R appears at the end of a word or before a consonant, it’s a little more challenging to pronounce.
This is why I made today’s video.
To show you that when you break it down, IT IS possible! 🙂
In this lesson you’ll learn:
- How to pronounce clearly and with elegance the /ar/ vowel sound (as in the words ‘car’, ‘part’, hard’ and ‘starting’.)
- How to distinguish between word pairs such as party vs. potty, harder vs. hotter, farther vs. father.
- A simple technique that will help you understand and practice tricky words with R, and make it super easy for you to pronounce the words clearly.
It’s relatively easy for galaxies to make stars. Start out with a bunch of random blobs of gas and dust. Typically those blobs will be pretty warm. To turn them into stars, you have to cool them off. By dumping all their heat in the form of radiation, they can compress. Dump more heat, compress more. Repeat for a million years or so.
Eventually pieces of the gas cloud shrink and shrink, compressing themselves into tight little knots. If the densities inside those knots get high enough, they trigger nuclear fusion and voila: stars are born.
Imagine yourself in a boat on a great ocean, the water stretching to the distant horizon, with the faintest hints of land just beyond that. It’s morning, just before dawn, and a dense fog has settled along the coast. As the chill grips you on your early watch, you catch out of the corner of your eye a lighthouse, feebly flickering through the fog.
The combined observations from two generations of X-ray space telescopes have now revealed a more complete picture of the nature of high-speed winds expelled from super-massive black holes. Scientists analyzing the observations discovered that the winds linked to these black holes can travel in all directions, not just in a narrow beam as previously thought. The black holes reside at the center of active galaxies and quasars and are surrounded by accretion discs of matter. Such broad, expansive winds have the potential to affect star formation throughout the host galaxy or quasar. The discovery will lead to revisions in the theories and models so that they more accurately explain the evolution of quasars and galaxies.
The observations were made by the XMM-Newton and NuSTAR X-ray space telescopes of the quasar PDS 456, and were combined into the graphic above. PDS 456 is a bright quasar residing in the constellation Serpens Cauda (near Ophiuchus). The data graph shows both a peak and a trough in the otherwise nominal X-ray emission profile, as shown by the NuSTAR data (pink). The peak represents X-ray emissions directed towards us (i.e., our telescopes), while the trough is X-ray absorption indicating that the winds expelled from the super-massive black hole travel in many directions – effectively a spherical shell. The absorption feature, caused by iron in the high-speed wind, is the new discovery.
X-rays are the signature of the most energetic events in the Cosmos, but they are also produced by some of the most docile bodies – comets. The leading edge of a comet such as Rosetta’s 67P generates X-ray emissions from the interaction of energetic solar ions capturing electrons from neutral particles in the comet’s coma (gas cloud). The observations of a super-massive black hole in a quasar billions of light years away involve the generation of X-rays on a far greater scale, by winds that evidently have influence on a galactic scale.
The study of star forming regions and the evolution of galaxies has focused on the effects of shock waves from supernova events that occur throughout the lifetime of a galaxy. Such shock waves trigger the collapse of gas clouds and formation of new stars. This new discovery by the combined efforts of two space telescope teams provides astrophysicists new insight into how star and galaxy formation takes place. Super-massive blackholes, at least early in the formation of a galaxy, can influence star formation everywhere.
Both the ESA-built XMM-Newton and the NuSTAR X-ray space telescope, a SMEX-class NASA mission, use grazing-incidence optics rather than the lenses (refraction) or normal-incidence mirrors (reflection) of conventional visible-light telescopes. The incidence angle of the X-rays must be very shallow, and consequently the optics are extended out on a 10 meter (33 foot) truss in the case of NuSTAR and over a rigid frame on the XMM-Newton.
The ESA-built XMM-Newton was launched in 1999, an older-generation design that used a rigid frame and structure. All the fairing volume and lift capability of the Ariane 5 launch vehicle was needed to put the Newton in orbit. The latest X-ray telescope – NuSTAR – benefits from ten years of technological advances. The detectors are more efficient and faster, and the rigid frame was replaced with a compact truss which required all of 30 minutes to deploy. Consequently, NuSTAR was launched on a Pegasus rocket piggybacked on an L-1011, a significantly smaller and less expensive launch system.
So now these observations are effectively delivered to the theorists and modelers. The data is like a new ingredient in the batter from which a galaxy and stars are formed. The models of galaxy and star formation will improve and will more accurately describe how quasars, with their active super-massive black holes, transition into more quiescent galaxies such as our own Milky Way.
In a galaxy four billion light-years away, three supermassive black holes are locked in a whirling embrace. It’s the tightest trio of black holes known to date and even suggests that these closely packed systems are more common than previously thought.
“What remains extraordinary to me is that these black holes, which are at the very extreme of Einstein’s Theory of General Relativity, are orbiting one another at 300 times the speed of sound on Earth,” said lead author Roger Deane from the University of Cape Town in a press release.
“Not only that, but using the combined signals from radio telescopes on four continents we are able to observe this exotic system one third of the way across the Universe. It gives me great excitement as this is just scratching the surface of a long list of discoveries that will be made possible with the Square Kilometer Array.”
The system, dubbed SDSS J150243.09+111557.3, was first identified as a quasar — a supermassive black hole at the center of a galaxy, which is rapidly accreting material and shining brightly — four years ago. But its spectrum was slightly wacky, with its doubly ionized oxygen emission line [OIII] split into two peaks instead of one.
A favored explanation suggested there were two active supermassive black holes hiding in the galaxy’s core.
An active galaxy typically shows single-peaked narrow emission lines, which stem from a surrounding region of ionized gas, Deane told Universe Today. The fact that this active galaxy shows double-peaked emission lines, suggests there are two surrounding regions of ionized gas and therefore two active supermassive black holes.
But one of the supermassive black holes was enshrouded in dust. So Deane and colleagues dug a little further. They used a technique called Very Long Baseline Interferometry (VLBI), which is a means of linking telescopes together, combining signals separated by up to 10,000 km to see detail 50 times greater than the Hubble Space Telescope.
Observations from the European VLBI network — an array of European, Chinese, Russian, and South American antennas — revealed that the dust-covered supermassive black hole was once again two instead of one, making the system three supermassive black holes in total.
“This is what was so surprising,” Deane told Universe Today. “Our aim was to confirm the two suspected black holes. We did not expect one of these was in fact two, which could only be revealed by the European VLBI Network due [to the] very fine detail it is able to discern.”
Deane and colleagues looked through six similar galaxies before finding their first trio. The fact that they found one so quickly suggests that they’re more common than previously thought.
Before today, only four triple black hole systems were known, with the closest pair being 2.4 kiloparsecs apart — roughly 2,000 times the distance from Earth to the nearest star, Proxima Centauri. But the closest pair in this trio is separated by only 140 parsecs — roughly 100 times that same distance.
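A quick back-of-the-envelope check of those comparisons, taking the Earth–Proxima Centauri distance as roughly 1.3 parsecs:

```python
EARTH_TO_PROXIMA_PC = 1.3  # parsecs, approximate

for label, separation_pc in [("previous closest pair", 2400.0),
                             ("closest pair in this trio", 140.0)]:
    ratio = separation_pc / EARTH_TO_PROXIMA_PC
    print(f"{label}: ~{ratio:.0f}x the Earth-Proxima distance")
# previous closest pair: ~1846x; closest pair in this trio: ~108x
```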
Although Deane and colleagues relied on the phenomenal resolution of the VLBI technique in order to spatially separate the two close-in black holes, they also showed that their presence could be inferred from larger-scale features. The orbital motion of the black hole, for instance, is imprinted on its large jets, twisting them into a helical shape. This may provide smaller telescopes with a tool to find such systems with much greater efficiency.
“If the result holds up, it’ll be very cool,” binary supermassive black hole expert Jessie Runnoe from Pennsylvania State University told Universe Today. This research has multiple implications for understanding further phenomena.
The first sheds light on galaxy evolution. Two or three supermassive black holes are the smoking gun that the galaxy has merged with another. So by looking at these galaxies in detail, astronomers can understand how galaxies have evolved into their present-day shapes and sizes.
The second sheds light on a phenomenon known as gravitational radiation. Einstein’s General Theory of Relativity predicts that when one of the two or three supermassive black holes spirals inward, gravitational waves — ripples in the fabric of space-time itself — propagate out into space.
Future radio telescopes should be able to measure gravitational waves from such systems as their orbits decay.
“Further in the future, the Square Kilometer Array will allow us to find and study these systems in exquisite detail, and really allow us [to] gain a much better understanding of how black holes shape galaxies over the history of the Universe,” said coauthor Matt Jarvis from the Universities of Oxford and Western Cape.
The research was published today in the journal Nature.
The large black holes that reside at the center of galaxies can be hungry beasts. As dust and gas are forced into the vicinity around the black holes, it crowds up and jostles together, emitting lots of heat and light. But what forces that gas and dust the last few light years into the maw of these supermassive black holes?
It has been theorized that mergers between galaxies disturbs the gas and dust in a galaxy, and forces the matter into the immediate neighborhood of the black hole. That is, until a recent study of 140 galaxies hosting Active Galactic Nuclei (AGN) – another name for active black holes at the center of galaxies – provided strong evidence that many of the galaxies containing these AGN show no signs of past mergers.
The study was performed by an international team of astronomers. Mauricio Cisternas of the Max Planck Institute for Astronomy and his team used data from 140 galaxies whose active nuclei were detected by the XMM-Newton X-ray observatory. The galaxies they sampled had a redshift between z = 0.3 and 1, which means that they are between about 4 and 8 billion light-years away (and thus, the light we see from them is about 4-8 billion years old).
They didn’t just look at the images of the galaxies in question, though; a bias towards classifying those galaxies that show active nuclei to be more distorted from mergers might creep in. Rather, they created a “control group” of galaxies, using images of inactive galaxies from the same redshift as the AGN host galaxies. They took the images from the Cosmic Evolution Survey (COSMOS), a survey of a large region of the sky in multiple wavelengths of light. Since these galaxies were from the same redshift as the ones they wanted to study, they show the same stage in galactic evolution. In all, they had 1264 galaxies in their comparison sample.
The way they designed the study involved a staple of experimental science that is not normally used in the field of astronomy: the blind study. Cisternas and his team had 9 comparison galaxies – which didn’t contain AGN – of the same redshift for each of their 140 galaxies that showed signs of having an active nucleus.
What they did next was remove any sign of the bright active nucleus in the image. This means that the galaxies in their sample of 140 galaxies with AGN would essentially appear to even a trained eye as a galaxy without the telltale signs of an AGN. They then submitted the control galaxies and the altered AGN images to ten different astronomers, and asked them to classify them all as “distorted”, “moderately distorted”, or “not distorted”.
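A minimal sketch of how such a blinded sample might be assembled (the function and variable names here are hypothetical, not taken from the study's actual pipeline):

```python
import random

def build_blinded_sample(agn_hosts, controls):
    """Pool nucleus-subtracted AGN hosts with inactive control galaxies and
    shuffle them, so classifiers cannot tell which group an image came from."""
    labeled = [(img, "agn") for img in agn_hosts] + \
              [(img, "control") for img in controls]
    random.shuffle(labeled)
    images = [img for img, _ in labeled]       # shown to the human classifiers
    answer_key = [lab for _, lab in labeled]   # sealed until all ratings are in
    return images, answer_key
```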
Since their sample size was pretty manageable, and the distortion in many of the galaxies would be too subtle for a computer to recognize, the pattern-seeking human brain was their image analysis tool of choice. This may sound familiar – something similar is being done with enormous success with people who are amateur galaxy classifiers at Galaxy Zoo.
When a galaxy merges with another galaxy, the merger distorts its shape in ways that are identifiable – it will warp a normally smooth elliptical galaxy out of shape, and if the galaxy is a spiral the arms seem to be a bit “unwound”. If it were the case that galactic mergers are the most likely cause of AGN, then those galaxies with an active nucleus would be more probable to show distortion from this past merger.
The team went through this process of blinding the study to eliminate any bias that those looking at the images would have towards classifying AGN as more distorted. By both having a reasonably large sample size of galaxies and removing any bias when analyzing the images, they hoped to definitively show whether the correlation between AGN and mergers exists.
The result? Those galaxies with an Active Galactic Nucleus did not show any more distortion on the whole than those galaxies in the comparison sample. As the authors state in the paper, “Mergers and interactions involving AGN hosts are not dominant, and occur no more frequently than for inactive galaxies.”
This means that astronomers can’t point towards galactic mergers as the main reason for AGN. The study showed that at least 75% of AGN creation – at least in the last 4-8 billion years – must be from sources other than galactic mergers. Likely candidates for these sources include: “galactic harassment,” where galaxies don’t collide but come close enough to gravitationally influence each other; the instability of the central bar in a galaxy; or the collision of giant molecular clouds within the galaxy.
Knowing that AGN aren’t caused in large part by galactic mergers will help astronomers to better understand the formation and evolution of galaxies. The active nuclei in galaxies that host them greatly influence galactic formation. This process is called ‘AGN feedback’, and the mechanisms and effects that result from the interplay between the energy streaming out of the AGN and the surrounding material in the center of a galaxy is still a hot topic of study in astronomy.
Mergers in the more distant past than 8 billion years might yet correlate with AGN – this study only rules out a certain population of these galaxies – and this is a question that the team plans to take on next, pending surveys by the Hubble Space Telescope and the James Webb Space Telescope. Their study will be published in the January 10 issue of the Astrophysical Journal, and a pre-print version is available on Arxiv.
It seems oddly appropriate to be writing about astrophysical jets on Thanksgiving Day, when the New York football Jets will be featured on television. In the most recent issue of Science, Carlos Carrasco-Gonzalez and collaborators write about how their observations of radio emissions from young stellar objects (YSOs) shed light on one of the unsolved problems in astrophysics: what are the mechanisms that form the streams of plasma known as polar jets? Although we are still early in the game, Carrasco-Gonzalez et al have moved us closer to the goal line with their discovery.
Astronomers see polar jets in many places in the Universe. The largest polar jets are those seen in active galaxies such as quasars. They are also found in gamma-ray bursters, cataclysmic variable stars, X-ray binaries and protostars in the process of becoming main sequence stars. All these objects have several features in common: a central gravitational source, such as a black hole or white dwarf, an accretion disk, diffuse matter orbiting around the central mass, and a strong magnetic field.
When matter is emitted at speeds approaching the speed of light, these jets are called relativistic jets. These are normally the jets produced by supermassive black holes in active galaxies. These jets emit energy in the form of radio waves produced by electrons as they spiral around magnetic fields, a process called synchrotron emission. Extremely distant active galactic nuclei (AGN) have been mapped out in great detail using radio interferometers like the Very Large Array in New Mexico. These emissions can be used to estimate the direction and intensity of an AGN’s magnetic fields, but other basic information, such as the velocity and amount of mass loss, is not well known.
On the other hand, astronomers know a great deal about the polar jets emitted by young stars through the emission lines in their spectra. The density, temperature and radial velocity of nearby stellar jets can be measured very well. The only thing missing from the recipe is the strength of the magnetic field. Ironically, this is the one thing that we can measure well in distant AGN. It seemed unlikely that stellar jets would produce synchrotron emissions since the temperatures in these jets are usually only a few thousand degrees. The exciting news from Carrasco-Gonzalez et al is that jets from young stars do emit synchrotron radiation, which allowed them to measure the strength and direction of the magnetic field in the massive Herbig-Haro object, HH 80-81, a protostar 10 times as massive and 17,000 times more luminous than our Sun.
Finally obtaining data on the intensity and orientation of the magnetic field lines in YSOs, and finding their similarity to the characteristics of AGN, suggests we may be that much closer to understanding the common origin of all astrophysical jets. Yet another thing to be thankful for on this day.
Archimedes Screw - History of Archimedes Screw
The Archimedes screw is a machine that can raise water with much less effort than lifting buckets. It was invented by the Greek scientist Archimedes, though the year is not known. Archimedes lived in Syracuse, Sicily (now part of Italy) between the years 287 B.C. and 212 B.C. The Archimedes screw is an ancient invention that continues to be important in the modern world.
This tool had many historical uses. It was used to empty water out of leaking ships and flooded mines. Fields of crops were watered by using the screw to pull water from lakes and rivers. It was also used to reclaim flooded land, for instance in Holland where much of the land lies below sea level.
- There are a few different designs of the Archimedes screw, but the key feature is the angled spiral around a center shaft (the typical screw shape). The screw can sit in a half pipe (trough) or a full pipe.
- To use the Archimedes screw to lift water, the pipe must sit on an angle with one end in a body of water. Then, the screw must be turned with a hand crank or motor. As the bottom of the screw turns, it will scoop up water. The shape of the screw will trap it, the water will be carried up to the top of the pipe, and it will spill out. (A rough throughput estimate appears in the sketch after this list.)
- Today, there are many other uses for the Archimedes screw. Things like grain, sand, and sawdust flow in a similar way to water, and so the Archimedes screw can be used to move them as well.
- Archimedes screws appear in many unexpected places. Power drills, snow blowers, augers, crop harvesters, and many other machines operate on the same principle.
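As a rough illustration of the throughput such a screw can deliver, here is a minimal sketch; the pocket volume, pocket count, and rotation speed are made-up example figures, not measurements of any particular screw:

```python
def screw_flow_rate(pocket_volume_l, pockets_per_turn, rpm):
    """Approximate delivery rate in litres per minute: each full turn lifts
    every trapped pocket of water one pitch up the screw, so throughput is
    volume per turn multiplied by turns per minute."""
    return pocket_volume_l * pockets_per_turn * rpm

# e.g. 2-litre pockets, 3 pockets per turn, hand-cranked at 30 rpm
print(screw_flow_rate(2.0, 3, 30))  # 180 litres per minute
```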
Every few months, a news report will trumpet a new computer program with “living cyber organisms” that prove how life on earth evolved. These simulations often show how artificial life-forms reproduce, grow, and change over several generations. The algorithms behind these creatures can be quite complex in an effort to be as close to the “real world” as possible.
But what do such programs prove? For one, it is always important to remember that any computer program reflects the biases and assumptions of the programmer. In most cases, these programmers assume evolution to be true, and their artificial environments reflect this. Also, many programs have goals and waypoints, something that is not true of supposed Darwinian evolution. The programmers do not make a program without certain boundaries and guidelines that direct what the program can and cannot do. They make one with a purpose in mind.
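To see how a goal can be baked in, consider this minimal sketch of a selection program with an explicit, hard-coded target (in the spirit of the well-known "weasel" demonstration); the target string, population size, and mutation rate are all arbitrary choices by the programmer:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the goal, chosen by the programmer
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # "fitness" is defined purely as closeness to the predetermined target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=fitness)
    generation += 1
print(f"reached the built-in target in {generation} generations")
```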
Finally, the greatest irony of all is that these brilliant programmers, who are trying to prove that life evolved without intelligence, pour plenty of brain power into making these sophisticated artificial organisms. Keep that in mind when they declare this proves life arose by sheer brainless natural selection.
It is a staple of almost every biology book on the market: drawings of colored bones that show how evolution left its fingerprints on animals of common descent. These drawings point out how similar structure proves that we all come from one ancestor. The proof, they say, is as plain as the hand in front of our face.
Objectively, however, similar design and function can prove nothing. An iPod and an iPhone may have very similar parts, for example, but that certainly doesn’t mean the iPhone evolved from the iPod because of hardware glitches. Instead, because we have objective knowledge of history, we know that the same company designed both, which accounts for the similar design.
In the same way, similar structures in animals are just as strong an evidence for a common Designer leaving His mark on the works of His hand. Human designers often use similar solutions across a wide range of products. Why would we expect God not to do the same?
Darwin fretted over the lack of them, paleontologists are still looking for them, but they are often touted as the foundation of evolutionary theory. What are they? Transitional fossils. According to evolutionists, transitional fossils are sparse for a number of reasons: (1) fossils in general only give us a glimpse of the past, (2) punctuated equilibrium may cause geologically “rapid” changes in species, and (3) they aren’t easy to distinguish. However, many of us have seen the supposed fossils of the horse and whale series and the new “missing link” called Tiktaalik.
We must remember, however, that fossils do not come with tags telling us when and how the animal was buried, its lifestyle, and if or how it was related to another species. Scientists must make reasonable assumptions based on what they believe about the past and extrapolations from the data. Without an objective source of information, these assumptions are often tied to the subjective evolutionary worldview. Creation scientists, on the other hand, see the fossil record as evidence for both a global Flood and also the amazing diversity of the original created “kinds.”
Because there is a lack of transitional forms (and the ones found, including “walking whales” and fish, are contentious to say the least), evolutionists must resort to blurring the lines and claiming that since all species are in transition, we should not expect to find “missing links.” Perhaps the reason we do not find true transitional forms is because one created kind does not, cannot, and has never changed into another created kind.
The “slam dunk” proof for human evolution is, according to evolutionists, the claimed 98% similarity between human and chimp DNA and the evidence of chromosomal fusion. Textbooks tell us that this proves the common ancestry of humans and apes from ape-like beings that lived millions of years ago.
What makes this a myth, however, is that evolutionists forget to mention the problems with this claim. For one thing, the percentage of similarity may sound impressive (depending on which percentage you find), but this represents millions of letters of difference in the DNA. Factor in that many of the differences in the DNA are not represented in the “98% similarity” (such as deletions) and epigenetic differences and the chasm grows. Second, seeing the “history” of humans evolving from chimps in DNA and chromosomes requires a prior commitment to evolution. Evolutionists interpret the data to mean what they want it to mean in light of Darwin’s myth.
Though there are similarities between apes and humans, this too is strong evidence for a common Designer, who gave humanity characteristics unlike any other creature He made. But this doesn’t stop evolutionists, knowingly or not, from using flat-out propaganda as in myth #6.
The pervasive ape-to-human montage that shows an ape-like being on the left slowly becoming a human on the right is so much a part of culture that most anyone can recognize it. Natural history museums and TV shows give us supposed glimpses into the past and how human ancestors might have looked. Too bad it’s all a sham.
Fossil apes are difficult to come by, but several species have been found. However, a new ape fossil does not generate as much interest or prestige as one called a “human ancestor,” which is why there is so much focus on how ape fossils tie in to the evolution story. The desire to “fill in the gaps” leads to many false conclusions. For example, some of the supposed “bipedal” characteristics found in fossils are also found in living apes that are not bipedal.
In fact, imagination, wishful thinking, and presuppositions influence a great deal of the “reconstructions” we find in magazines, textbooks, and on TV. Enjoy the science, but don’t be taken in by the fiction.
If we look around us (and even in our own bodies), there are many structures that seem to show less-than-optimal design. What this means to some evolutionists is that this proves there is no creator. After all, a creator as intelligent as God would not have made imperfect designs.
Debunking this myth requires very little effort. First of all, how can humans judge what is optimal design? Some designs require a balance of efficiency and effectiveness, as we find in the human eye (a structure perfectly suited for human life). Also, we would hardly expect a universe that has been cursed with degeneration for over 6,000 years to maintain optimal design. The fact that we continue to survive, however, is evidence of how well the original design was. Finally, the broadening field of biomimetics (copying design from nature) shows us that God’s creation (even in its fallen state) offers a wealth of design potential—and good design at that.
While evolution does its dirty work, it leaves behind vestiges of its machinations, or so the argument goes. Evolutionists claim that humans and other animals have leftover organs and DNA that prove the power of mutations and natural selection. In fact, this is often touted as a powerful rebuttal to creationists.
But the myth stops here. If an organ loses function, this proves only that the organ has lost function. Often, however, reports of this kind are premature and based on evolutionary expectations. The appendix, for example, was once a bastion of vestigiality, but now we know its function. One must wonder, in fact, how much evolutionary thought has retarded science by claiming that things are no longer needed.
In the end, the loss of function (after all other possibilities have been eliminated) is better evidence for a world that is in decay, which is exactly what the Bible says about the universe we inhabit.
You may have heard this one a time or two. The development and spread of antibiotic-resistant bacteria (and pesticide-resistant plants and insects) is shouted from the rooftops as proof of evolution happening “right now.” Selection pressures push these organisms to evolve—at least, this is how evolutionists explain it.
Do bacteria develop resistance to antibiotics? Yes, this is documented science. Does this prove Darwinian evolution? No, not even close. Once again, evolutionists take the observations and pass them through their worldview filter. The problem (for evolutionists) is that the mutations that cause bacteria (and other organisms) to overcome environmental pressures are not the information-gaining mutations required for Darwin’s postulation. In fact, these mutations often come at a steep price to the organism—a price that doesn't show up until the environmental pressure is removed—and it often means the inability to compete with non-mutant bacteria.
Bacteria, in fact, show the amazing creativity of God in that they can swap DNA with other bacteria. This amazing feature reveals the provisions God made for them to survive in a fallen world and rapidly changing environments. However, they do not and cannot evolve into anything else. They have been and will always be bacteria.
Natural selection is the driving force behind evolution. This mantra has been repeated so often that people often conflate the two ideas. But are evolution and natural selection the same thing?
The short answer is that this is one of the most oft-repeated myths. Natural selection is an observable process that was certainly not first discovered by Charles Darwin. Species with certain characteristics survive better in a given environment. However, natural selection is nondirectional and does not lead anywhere. That is, if the environment changes, members of a species that were previously better adapted may no longer be. Evolution, on the other hand, is an unobservable process that requires direction (dinosaurs to birds, e.g.).
Natural selection can only act upon the information that already exists. When certain characteristics are selected, the overall genetic information decreases. Mutations have not been shown to reverse this process. This loss of information may make members of the same created kind unable to reproduce with each other, but this merely emphasizes how much loss can occur.
Many evolutionists would like to give natural selection powers that it does not have. Don’t let them swindle you.
When all is said and done, the ultimate “proof” of evolution is an appeal to human authority. We are often reminded by anti-creationists that virtually all “real” scientists agree that evolution happened.
When examining this myth, one must keep in mind that those who make this claim often rely on the belief that the only real scientists are those who accept evolution. The argument, then, essentially boils down to this: evolutionists agree that evolution happened. This, of course, is an absurd argument, and we could just as easily say that creationists agree that creation happened.
The main problem, however, is that even if every single person accepted an idea, that doesn’t make the idea correct. The history of science (and humanity) is filled with majority views being incorrect. Evolution is another such idea. Secondly, many scientists accept evolution because the only alternative is design, which is against their naturalistic beliefs. They have a prior commitment to keeping any miraculous interaction out of their worldviews, and they accept evolution by default.
Finally, there are a growing number of scientists, creationist and not, who do not find the supposed evidence for evolution to be valid or acceptable. The truth of the matter is that while some evolutionists would like creationists like us not to exist, we do, and it is past time for the myths of evolution—and the myth of evolution itself—to be dismissed once and for all. |
A verbal is a verb form used as some other part of speech. There are three kinds of verbals: gerunds, participles and infinitives.
A gerund always ends in ing and is used as a noun. Eating is fun.
A participle is used as an adjective and ends in various ways. A present participle always ends with ing, as does the gerund, but remember that it is an adjective. A past participle ends with ed, n, or irregularly. Examples of participles: played, broken, brought, sung, seeing, having seen, being seen, seen, having been seen.
An infinitive is to plus a verb form. It can be a noun, an adjective, or an adverb. Examples: to be, to see, to be seen, to be eaten.
Instructions: Find the gerunds, gerund phrases, participles, participial phrases, infinitives or infinitive phrases in these sentences, tell what kind of verbal they are, and how they are used.
1. Blaming others is not being honest with oneself.
2. We do not plan to change the rules.
3. Forgetting his promise, Jeff returned home late.
4. My dog is too old to learn new tricks.
5. One way to improve is regular practice.
–For answers scroll down.
1. blaming others is a gerund phrase used as the subject
2. to change the rules is a noun infinitive phrase used as the direct object
3. forgetting his promise is a participial phrase modifying the subject Jeff
4. to learn new tricks is an adverb infinitive phrase modifying the predicate adjective old
5. to improve is an adjective infinitive modifying the subject way
For your convenience, all of our lessons are available on our website in our lesson archive at http://ift.tt/1BHeG8C. Our lessons are also available to purchase in an eBook and a Workbook format.
Many tropical diseases such as malaria, Chagas disease and dengue are transmitted to humans via mosquitoes and other carriers known as vectors. These vector-borne diseases continue to have a major impact on human health in the developing world: each year, more than a billion people become infected and around a million people die. In addition, around one in six cases of illness and disability worldwide arise from these diseases.
Malaria arguably continues to attract the most attention of all the vector-borne diseases by virtue of causing the greatest global disease burden. However, others such as dengue are not only resurgent in some regions, but threaten a vast proportion of the world’s population.
Climate change remains a substantial threat to future human health and since the behaviour of disease carriers like mosquitoes is known to be extremely sensitive to temperature and rainfall, it seems unquestionable that climate change will affect many, if not all, of these diseases. What is less clear, however, is the extent to which climate increases the risk of becoming infected in certain regions compared to other factors such as poverty or fragile health systems.
In addition, although the number of new cases of diseases such as malaria appears to be declining worldwide, it is still increasing in many regions for a variety of reasons; the continued spread of insecticide resistance, changes in land use, and difficulties in maintaining political interest pose considerable challenges. Which of these factors will be most influential over the coming decades remains up for debate and one that was raised in a special edition of Philosophical Transactions B.
Changes in risk
The latest research, however, is clear and consistent in many of its findings. Different diseases, transmitted by different vectors, respond in different ways to changing weather and climate patterns. Climate change is very likely to favour an increase in the number of dengue cases worldwide, while many important mosquito populations that are able to transmit devastating diseases are changing in their distribution.
The latest maps show that many areas of Europe (including the UK) could become increasingly hospitable for mosquitoes that transmit dengue over the coming decades (the map below shows a projected change in suitable habitat for the Aedes albopictus mosquito). Similarly, other mosquito range expansions are likely to occur in the US and eastern Asia. If dengue and/or chikungunya are imported into these regions, there will be a considerable increase in the worldwide number of vulnerable individuals.
It is also clear that small changes in these so-called risk maps can have very large public health impacts. Tick-borne diseases (such as Lyme disease) are also predicted to expand in range as climate changes. Although, as before, plenty of other factors are likely to contribute, meaning that direct causation is very hard to attribute.
It is important to remember too that climate change is not just global warming; the latter refers to an increase in global mean temperatures, but there is also an overwhelming body of evidence demonstrating that rainfall is at least as important for many vector-borne diseases. Rainfall episodes have also been shown to provide a very good early-warning sign a few months in advance for outbreaks of West Nile Virus.
New research on African anti-malaria mosquito control programmes that involve spraying houses (to kill indoor mosquitoes) and distributing bed nets also shows that both temperature and rainfall can influence the degree to which programmes decrease new infections and, crucially, their cost-effectiveness. However, whether or not this is substantial enough to affect regional policy decisions about scaling up mosquito control programmes depends on factors such as how rapidly insecticide resistance emerges, the human immune response to malaria, and country-specific conditions.
In terms of malaria elimination in Africa, adopting the same approach in all affected regions is unlikely to be the best way forward. However, there is some new evidence to suggest that if efforts continue to be concentrated on scaling-up current intervention programmes in regions close to elimination, the longer-term effects of climate change will become far less important. Indeed, one of the most effective ways of protecting human health against climate change in the long-term is to further strengthen current disease control efforts.
As with the formulation of public health policies to deal with diseases such as Ebola, flu, and HIV, mathematical models are valuable tools that are widely used to make predictions about how different carrier-borne diseases are likely to respond to climatic changes. How reliable these predictions are is an important question and, like many areas of science, include unavoidable uncertainties. For example, people may change their behaviour and actions as climate change evolves – for example by migrating to other areas – which evidently makes forecasting more difficult.
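As a flavour of what these models look like, here is a minimal sketch of the classical Ross–Macdonald expression for the basic reproduction number R0 of a mosquito-borne disease; the parameter values are purely illustrative, and climate-driven models typically make several of them functions of temperature and rainfall:

```python
import math

def ross_macdonald_r0(m, a, b, c, mu, n, r):
    """Classical Ross-Macdonald R0 for a mosquito-borne disease.
    m: mosquitoes per human          a: bites per mosquito per day
    b, c: mosquito->human and human->mosquito transmission probabilities
    mu: mosquito death rate (1/day)  n: extrinsic incubation period (days)
    r: human recovery rate (1/day)"""
    return (m * a**2 * b * c * math.exp(-mu * n)) / (r * mu)

# Illustrative values only: warming that shortens incubation (smaller n)
# raises R0, while heat that kills mosquitoes faster (larger mu) lowers it.
print(ross_macdonald_r0(m=2, a=0.3, b=0.5, c=0.5, mu=0.12, n=10, r=0.01))
```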
New evidence has also shown that disease vectors may evolve in response to changes in temperature in under a decade, which conflicts with many current models that assume climate change affects only their ecology, not their evolution. Predictions that might be affected by climate change must therefore not only take account of these uncertainties, if they’re to be more reliable and useful, but also recognise that these predictions cannot strictly be disproved until the future arrives.
This remains a very active research field, but considerable progress in our understanding has been made over the last ten to 15 years. Better data on the links between vectors, diseases they carry and the environment is definitely required, as are better ways of quantifying disease risk for different populations and different diseases.
Many diseases have received very little attention so far on how climate change may affect future trends. One example is onchocerciasis (river blindness), for which tentative predictions suggest that we might expect substantial increases in the number of disease vectors in certain African regions over the coming decades.
Almost all models are currently based on single diseases, but many populations are unfortunately burdened with multiple diseases at any one time; understanding how climate change affects interactions between these diseases has attracted little attention to date.
One other important challenge for the field is the mismatch between the data current global climate models are able to provide and the information required by local public health officials to make more informed decisions; continued improvements in computing power are essential to progress. The predictions of our current models are not perfect and improvements in our understanding are certainly required.
To date, we have tended to react to disease outbreaks as they occur, but we need an increased focus on being more proactive; we cannot stop outbreaks of many of these diseases, but proactive risk management is less expensive (and more effective) than responding after a crisis. Ultimately, the challenge is not to address specific health risks due solely to climate change, but instead to ensure sustained progress is made towards decreasing the number of deaths and cases of these diseases for future generations.
Hard Evidence is a series of articles in which academics use research evidence to tackle the trickiest public policy questions. |
Manure can be used as an organic material to improve soils when applied as a mulch or soil amendment.
Apply fresh manure only in the fall so it has the winter to decompose. Applied in spring or summer, it could potentially transmit pathogens such as E. coli.
Manures can also be high in salts, which are harmful when added to the soil. Salts build up in the root zones of plants in heavy clay soils with poor drainage, causing seed-germination failure, stunted plants, or leaf burn.
When adding manure or compost made with manure, add no more than one inch per year and cultivate it into the soil six to eight inches deep. By improving drainage with organic matter, salts will be leached through the soil.
Other options for improving soil are to add compost, leaves, chopped straw and other low-salt organic materials instead of manure.
For more information, see the following Colorado State University Extension fact sheet(s).
- Choosing a Soil Amendment
- Managing Saline Soils
- Organic Fertilizers
- Using Manure in the Home Garden
- Vegetable garden: Soil Management and Fertilization |
Two groups of scientists, one based at Purdue University and the other drawn from several other American universities, have explained previously puzzling features of the topography of Pluto. The New Horizons spacecraft had earlier recorded polygonal cells on the surface of the dwarf planet. As it turned out, they were formed as a result of convection.
The unusual polygons measure tens of kilometres across, and it was immediately obvious to specialists that they were not the result of meteorite impacts. However, the true reason for their appearance remained unknown for some time. In two new studies published concurrently in the journal Nature, researchers presented evidence that the cause is the phenomenon called Rayleigh–Bénard convection. This means that the nitrogen-ice polygons take their shape under the influence of the internal heat of the dwarf planet.
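For reference, the onset of Rayleigh–Bénard convection is governed by a dimensionless Rayleigh number; the sketch below shows how the criterion is applied, with rough placeholder values for a nitrogen-ice layer on Pluto (only Pluto's surface gravity of about 0.62 m/s² is a real figure; the other numbers are stand-ins, not values from the Nature papers):

```python
def rayleigh_number(g, alpha, delta_t, depth, nu, kappa):
    """Ra = g * alpha * dT * L^3 / (nu * kappa); convection sets in once Ra
    exceeds a critical value of order 10^3 (about 1700 for rigid boundaries)."""
    return g * alpha * delta_t * depth**3 / (nu * kappa)

ra = rayleigh_number(g=0.62,        # Pluto's surface gravity, m/s^2
                     alpha=2e-3,    # thermal expansion coefficient, 1/K (placeholder)
                     delta_t=10.0,  # temperature contrast across the layer, K (placeholder)
                     depth=3000.0,  # layer thickness, m (placeholder)
                     nu=1e7,        # effective kinematic viscosity, m^2/s (placeholder)
                     kappa=1e-7)    # thermal diffusivity, m^2/s (placeholder)
print(f"Ra = {ra:.2e} ->", "convects" if ra > 1.7e3 else "conducts")
```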
The temperature on Pluto is much lower than on Earth, so the melting of water ice under the influence of internal heat hardly seems possible. At the same time, nitrogen becomes liquid at around -210 degrees Celsius, a temperature much more achievable in Pluto's conditions.
At the points where the nitrogen ice is heated by the decay of long-lived radioactive isotopes in Pluto's interior, it melts and moves toward the surface, where it refreezes and returns to a solid state. Such a cycle means, in particular, that the traces left on the surface of the polygons by falling meteors are gradually erased. A complete renewal of the surface of the ice structures happens, on a cosmic scale, very quickly — in about half a million years.
The results obtained also help explain why some Kuiper belt objects are unexpectedly bright — if similar processes occur on them as well, their surfaces are covered with fresh ice and reflect light better than old, dust-covered ice would.
The Cyrillic alphabet was traditionally one of the two scripts invented to write Slavic languages, the other being Glagolitic. However, Cyrillic emerged as the more widespread of the two, probably due to its similarity to the Greek alphabet. At its height during the Soviet Union, Cyrillic was used to write not only Slavic languages such as Russian, Byelorussian, Ukrainian, Serbian, Macedonian, Bulgarian, etc., but also languages from other families, such as Mongolian, Uzbek, Kazakh, Azeri, Tajik, and so on. After the breakup of the Soviet Union, many of these languages have started to move toward other alphabets, such as Arabic and Roman.
It is rather clear to see that a majority of the Cyrillic letters were derived from Greek. The backward N (И), which stands for /i/, comes from the Greek eta η. Some of the letters (such as the sibilants) were borrowed from the Hebrew and Syriac alphabets.
Quick note: Ъ and Ь used to represent a very short high back vowel and a very short high front vowel, respectively. They are now silent but mark the quality of the consonant preceding it. Ь means that the previous consonant is palatalized, while Ъ means that the preceding consonant is not palatalized even in a palatalizing environment. (Palatalization is the tendency of vowels like [i] and [e] to push the tongue toward the front of the mouth while pronouncing a preceding consonant, causing the consonant to change a little, like [t] to become [ts], [s] to [š], etc).
The following is the modern Cyrillic alphabet as adapted to write Russian. Each of the Cyrillic letters is actually a pair, the upper-case letter on the left and the lower-case letter on the right. Roman letters in blue represent the traditional transliteration of Cyrillic. The purple letters are the phonetic value of the Cyrillic letter.
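As a small illustration of such transliteration conventions, here is a minimal sketch covering a handful of letters (the mapping shown is one common scheme, not a complete table of all 33 letters; note the apostrophe conventionally used for the soft sign Ь):

```python
# A few Cyrillic -> Roman correspondences in one traditional scheme.
TRANSLIT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d",
    "и": "i", "к": "k", "н": "n", "ш": "sh", "ь": "'",
}

def transliterate(word):
    # fall back to the original character for anything unmapped
    return "".join(TRANSLIT.get(ch, ch) for ch in word.lower())

print(transliterate("книга"))  # kniga ("book")
```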
For children, the entire world and everything it contains is a new discovery. Though most of the activities that kids do are in some way related to their senses, it is equally important for you as a parent, to find an objective behind every play session.
Why Are Sensory Activities Required?
A typical toddler is raring to see, touch, feel and taste everything within his or her reach. Here are a few reasons why sensory activities for kids are essential and how they will help you bring your child’s senses to life:
- Exposing toddlers to well planned sensory activities will ensure that they learn about things in a safe environment.
- It will help develop motor control and balance.
- During group activities, your tot can learn about coordination and collaboration.
- It gives a sense of freedom and exploration to toddlers.
Learning With Fun Sensory Activities For Toddlers:
Here is a bucket of sensory activities for kids that you can teach in early childhood.
1. Visual Activities:
Painting and similar handcraft art are great activities that will teach your toddler about shape, texture and visual creativity. This way, they will get a feel of different materials and learn how to handle them:
a. Sponge Painting: Cut out pieces of sponge in various shapes and sizes to add variety to the activity. See how your tot smashes the sponge all over and create all sorts of cute designs.
b. Draw With Sand: For this one, you will need a drawing board with some glue spread over it. Help your darling to pour sand on the board and create shapes, alphabets or numbers.
c. Finger Painting: This is the simplest one of all, as toddlers love using their hands to paint. Make sure that the paint in use is non toxic and natural; in case they want to taste some!
d. Art With Ice: A perfect activity for hot days. You can make ice paint by adding some colored water to the ice tray. If you want to make it edible, freeze different colored juices mixed with water. When the ice cubes melt on the paper or paint board, see what beautiful and creative shapes emerge.
2. Tasting Activities:
The sense of taste can be developed by blindfolding your tot and then having him taste different things from small bowls. Let him guess what he is eating. Food helps develop the senses in the following ways:
a. Crunchy food or something with nuts or a grainy feeling will make your child alert towards his food and make him feel the texture. Try cereal bars, popcorn, cheese sticks, crackers, etc.
b. Sipping from a straw helps children in maintaining a certain flow of breath and makes them disciplined.
c. Fresh fruits are not just healthy; they also help toddlers learn different colors and understand textures.
d. You can use different materials like pasta, paper cuttings, ice cubes or cloth pieces to give a new experience to the visual activities listed above.
3. Water Activities:
A small tub can be a great play area for toddlers. Here are some ideas for tub based sensory activities:
a. Water Beads: Fill the tub with water beads of different colors. Then ask your tot to find beads of a certain color. This is fun, helps them learn about colors, and develops the sense of sight.
b. Farm Frenzy: Create a farm by filling the tub with grains of different sorts. Then hide plastic animals in the grains and shuffle them. Your toddler will learn animals’ names while finding the hidden toys.
c. Sound Off: You can add to the previous game by telling your tot what sound each animal makes. This will develop not just motor skills, but also speech and auditory senses, while adding to his or her vocabulary.
d. Bubble Tub: Fill the tub with water and put in a little soap. Let your tot splash in the water and make foam. This can get messy though.
Playing with dough and clay is also great for developing fine motor skills, so do stock these materials in your toddler's Play Box. It should always be fun while sharpening various senses.
Do share your favorite sensory activity for toddlers in the section below!
rancheria (ränchāˈrēä), type of communal settlement formerly characteristic of the Yaqui Indians of Sonora, Mexico, and of various small Native American groups of the SW United States, especially in California. These clusters of dwellings were less permanent than the pueblos (see Pueblo) but more so than the camps of the migratory Native Americans. Rancherias were small sedentary farming villages located on the flood plain; their location was frequently shifted due to flooding and changes in the river course. Houses were usually flat-roofed structures with woven cane or wattle and daub walls. House clusters shared ramadas, or unwalled, roofed patios that provided a ventilated and shaded space for food preparation, lounging, and sleeping.
Sandra Kolb, on leave from Central Kitsap School District, Silverdale, Washington (2000/2001 TEArctic) Rick Griffith, Fairview Junior High, Central Kitsap School District, Silverdale, Washington
Engagement and Exploration (Student Inquiry Activity)
On the first day, ask the students to record the temperature and other information they find interesting. Have the students acquire and record the local temperature from either the Internet or a school temperature station.
As a class, have each group share the Antarctic station they have chosen and the temperature for that day. Is the weather cold? Warm? Windy? Rainy? Snowy? How does it compare to the local weather? Is the weather at all the Antarctic stations the same? What is the range of temperatures? Of wind? Do the students think the weather will change over the next several days/weeks in Antarctica? At home? How much?
In what units are the temperature data presented? Most of the Antarctic temperature data are presented in Celsius. Often local data are presented in Fahrenheit. Guide students through the conversion of Celsius to Fahrenheit and Fahrenheit to Celsius. Help them to estimate the difference as a quick way to check their calculations. Students will need to record both Celsius and Fahrenheit in their temperature logs. How will the students organize their data as they collect it? What do they need to record (date, Antarctic temperature in Celsius and Fahrenheit, local temperature in Celsius and Fahrenheit, Antarctic and local wind and precipitation data, etc.)? Discuss the necessity for careful and neat recording and for keeping records in a safe place.
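For classrooms with computer access, a short program can double-check the students' hand conversions. The sketch below is illustrative only (it is not part of the original lesson materials), and the function names are our own:

```python
# Convert between Celsius and Fahrenheit to check student calculations.

def c_to_f(celsius):
    """Degrees Celsius to degrees Fahrenheit: multiply by 9/5, then add 32."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit):
    """Degrees Fahrenheit to degrees Celsius: subtract 32, then multiply by 5/9."""
    return (fahrenheit - 32) * 5 / 9

print(c_to_f(-40.0))           # -40.0 (the two scales meet at -40 degrees)
print(round(f_to_c(55.0), 1))  # 12.8
```

A quick estimate students can use to check their work: doubling the Celsius value and adding 30 lands close to the Fahrenheit value for everyday temperatures.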
Does the class get a good "picture" of what is happening? Probably not. Looking at data in a table does not show the changes effectively and talking about the trends loses even more details.
Can the students think of a better way to show the data? How are temperatures often shown? Suggest that a graph might be the best way to show the temperature changes. What kind of information is shown on a graph? Show the students graphs of histograms, x-y plots, and pie charts. Which might be best for temperature? As a class, discuss the attributes of each graph.
Provide the students with graph paper. In their groups, ask the students to show the local temperature changes as a graph. What information should the students show on their graphs (time and temperature)? How will they want to show their data?
Continue the class with students working in groups, but lead a discussion for the class. On an overhead, work through graphing the local data (Celsius) as a histogram with the students. What is the x axis? What data get plotted on this axis? What is the y axis? What data get plotted on this axis? What about scale? How long should their axis be? What values of the temperature did the students record? What number of days? What divisions should they make? What about a title?
Have the groups graph the local temperature data as the facilitator creates the graph on the overhead. Plot the data on the graph.
Ask the student groups to plot their temperature data from Antarctica. Remind them that they may wish to compare their Antarctic data with their local data. They may wish to compare data with other student groups. Should all groups use the same type of data (Celsius or Fahrenheit)? How about similar scales? As a class, determine the scale and data types to use.
When all the groups are finished, have the groups exchange plots and interpret another group's plot.
Ask the students to graph the local wind directional data. What kind of a graph might be appropriate? The students can show the data as a line graph or as a histogram. Often wind direction is plotted as a modified pie chart known as a rose diagram. With the students working in groups, guide them through the creation of the pie chart as a class. Ask the students to count the number of times the wind is from the northeast, northwest, southeast, and southwest. The data will not fall into these categories in every situation. The students may have to make some decisions. The pie chart should be divided into a number of pieces equal to the number of observations the students recorded. Students need to color the number of pieces that correspond to northeast wind readings one color, southeast readings another, etc. Have the students plot their Antarctic wind data on a pie chart. Ask the students to exchange their charts with another group and have each group interpret the other's plot.
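Older students, or the facilitator, may also produce the same chart with a few lines of code. This is a hypothetical sketch using the matplotlib plotting library; the observation list is invented for illustration:

```python
import matplotlib.pyplot as plt

# One recorded wind direction per day, binned into the four quadrants.
observations = ["NE", "NE", "SW", "SE", "NE", "NW", "SW", "SW"]

quadrants = ["NE", "SE", "SW", "NW"]
counts = [observations.count(q) for q in quadrants]

plt.pie(counts, labels=quadrants)
plt.title("Wind direction observations")
plt.show()
```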
Ask each student group to select another Antarctic data set to graph (e.g., precipitation, wind speed) and allow time for their work. Circulate among the groups to assist as necessary.
Elaboration (Polar Applications)
This activity can be extended to include discussion of:
- Weather trends related to seasonal changes, including the "opposite" seasons experienced by the Northern and Southern hemispheres at the same time of year
- The "severity" of the winter and summer at the poles, relative to the milder seasonal changes of the lower latitudes
Types of Flowers: Spiderwort
The Spiderwort (Tradescantia), is a genus of an estimated 71 species of perennial plants in the family Commelinaceae, native to the New World from southern Canada south to northern Argentina. They are weakly upright to scrambling plants, growing to 30–60 cm tall, and are commonly found individually or in clumps in wooded areas and fields. The leaves are long, thin and bladelike to lanceolate, from 3–45 cm long. The flowers can be white, pink, or purple, but are most commonly bright blue, with three petals and six yellow anthers. The sap is mucilaginous and clear. A number of the species flower in the morning and when the sun shines on the flowers in the afternoon they close, but can remain open on cloudy days until evening. Unlike most wildflowers of the United States and Canada (other than orchids and lilies), spiderworts are monocots and not dicots.
Though sometimes considered a weed, spiderwort is cultivated for borders and also used in containers. Where it appears as a volunteer, it is often welcomed and allowed to stay. The first species described, Virginia Spiderwort T. virginiana, is native to the eastern United States from Maine to Alabama, and Canada in southern Ontario. Virginia Spiderwort was introduced to Europe in 1629, where it is cultivated as a garden flower.
Some members of the genus Tradescantia may cause allergic reactions in pets (especially cats and dogs), characterised by red, itchy skin. Notable culprits include T. albiflora (Scurvy Weed); T. spathacea (Moses In The Cradle); and T. pallida (Purple Heart). The Western Spiderwort T. occidentalis is listed as an endangered species in Canada, where the northernmost populations of the species are found at a few sites in southern Saskatchewan, Manitoba and Alberta; it is however more common further south in the United States south to Texas and Arizona.
The three species of Wandering Jew, one native to eastern Mexico, also belong to the Tradescantia genus. Other names used for various species include Spider-lily, Cradle-lily, Oyster-plant and Flowering Inch Plant. The name of the genus honours the English naturalists John Tradescant the Elder (ca. 1570s – 1638) and John Tradescant the Younger (1608–1662).
Introduction to poetry
English is about communication. The most effective communication is communication that expresses its ideas simply. Better yet is communication that is easily remembered and readily passed on. Through your study of poetry you will come across, or already have come across, poems whose authors are anonymous. These poems, because they are easy to remember and often sung to popular tunes of the time, have been passed across the world and through generations.
Poetry was a particularly valuable tool for communication in times that predate literacy. This is because an illiterate person cannot revise words by reading them; instead, they must remember the words that they have been told. Because poetry uses rhyme and regular rhythm, this becomes a lot easier. Consider how difficult it would be to memorise this paragraph. If we were to turn it into a simple poem it would be much easier:
Poetry was valuable when literacy was scarce,
Rhythm and Rhyme was oft' used to share
Ideas and expressions from one to the other
'Cause those who can't read have to remember.
Think of all the childhood poems that you can remember. Often these teach you a lesson that you should never forget. Such an example is:
Sticks and stones might break my bones
But names will never hurt me.
Or, less informatively:
I am rubber you are glue
Whatever you say bounces off me and sticks to you.
Adults also remember important lessons through rhyme:
A stitch in time saves nine.
And through other techniques such as alliteration:
Don't make a mountain out of a molehill.
Over time, poetry has become much more than an easy way to remember ideas and songs. From these humble beginnings, poets from all ages and across all cultures have developed new ways in which to communicate their ideas concisely. A poem is often exactly this, a short, concise expression of complex and sometimes abstract ideas. Many people will tell you that "a picture paints a thousand words". I will tell you that a poem paints a thousand pictures.
A poem has a number of advantages over other forms of communication that make it possible to communicate widely across cultures and to appeal broadly to individuals - to paint a thousand pictures. These advantages include the ability to appeal to all five senses, the ability to make connections between words and thus ideas through language techniques, the ability to shape meaning through alterations in rhythm, meter and rhyme, and the ability to change mood and tone to affect the appreciation of their subject. The choice of combinations of these factors will have a wide reception for readers, dependent on their personal context and their individual reaction to particular words, metaphors and rhythms.
In this day and age we are fortunate to have the resources to be able to examine a wide variety of poems with different styles, structures and purposes from all different parts of the world. There are poems that aim to amuse, to evoke happy emotions, sad emotions and emotions of love, and to express political opinions. Poetry can also be informative.
We have poems that are written in strict structural formats, strict rhythms, strict rhymes and poems that use rhythm and rhyme 'freely'. We have 'epic' poetry thousands of lines long and shorter poetry such as haikus and limericks. We have poems from the distant past, poems from exotic countries and poems from our backyards written in our time.
Overwhelming? It needn't be. There are some simple rules that you can learn to be able to read, understand and enjoy poetry of all shapes and sizes. Inspiring, moving, uplifting? It can be. Just be sure to remember that when you analyse a poem and you are picking out language techniques and the features of the poem, you must always consider what effect any particular technique or feature has on the meaning, the purpose or your interpretation of the poem. English is about communication. When you are analysing another person's communication your fundamental question is:
How has this composer achieved their purpose?
Architectural models are created when a replica is needed of a building or site for demonstrating the look and layout of what the model represents. By building a model you can provide a three dimensional demonstration of your design, with details visible from every angle. Making an architectural model requires the same process as creating any other model; you design the model, choose a scale, and then build the model out of your chosen materials. If done well, you can create a model that brings life to your vision without the costs of building the actual structure.
- Skill level: Moderately challenging
Things you need
- Scale styrene pattern sheets
- Straight edge
- Hobby knife
- Plastic cement
- Model accessories
- Open cell spray foam
Create a design for your model, complete with all the details you wish to present to the viewers of your model. Design the immediate area surrounding the building as well, including landscaping, to help the viewer better visualise the site. Include the building materials in your design, as the material types can be incorporated into your architectural model increasing realism.
Choose a scale for your model. Residential models are commonly built at 1:48 scale (often loosely called 1:50), where 1/4 inch on the model represents an actual foot of space. If the model represents an entire site or you're modelling a larger commercial building, drop the scale to encompass more space while maintaining a manageable model base. A typical scale for these large models is anything smaller than 1:100, such as 1:96, where each 1/8 inch on the model equals a foot.
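If you prefer to let a computer do the conversion, a few lines of code will translate real-world dimensions into model dimensions. This is an illustrative sketch only; the function name and example dimensions are invented:

```python
def model_inches(real_feet, scale_ratio):
    """Convert a real-world dimension in feet to model inches.

    real_feet * 12 gives the real dimension in inches; dividing by
    the scale ratio shrinks it to model size.
    """
    return real_feet * 12 / scale_ratio

print(model_inches(32, 48))  # a 32 ft wide house at 1:48 -> 8.0 inches
print(model_inches(32, 96))  # the same house at 1:96 -> 4.0 inches
```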
Build the model using styrene plastic sheets. Patterned styrene sheets model the material to scale so it looks natural when used in a model. Cut the sheets according to your design using a straightedge to guide your cuts and a hobby knife to make them. Sand the edges smooth and glue the sheets together using plastic cement.
Place model accessories onto your model such as doors, windows, external staircases, light fixtures and everything else needed to add realism to your model. Paint the model if necessary where the patterned materials did not provide the proper colours.
Cut the roof for your house and place it onto the body of the house.
Create a base for your model using open cell spray foam, commonly used as household insulation. Cut the foam using a hot wire cutter to shape the texture of the land. After forming the land, paint the foam a grassy colour and lay out the landscaping for the model, including any pavement. Trees, vehicles, and pedestrians all serve to place emphasis to how the model building would appear in its surrounding environment.
Glue the model to the foam baseboard for viewing.
Tips and warnings
- Spray a layer of clear coat paint onto the model to provide protection from colour fading and ease of cleaning.
Nitrogen pumped into the environment by human activities such as driving cars and farming is fertilising tree growth and boosting the amount of carbon being stored in forests outside the tropics, say researchers.
Their study provides a surprising example of how one type of human pollution is helping to counter another. But the researchers caution that they do not yet know what proportion of carbon dioxide emissions are being offset by the anthropogenic release of nitrogen.
Nitrogen is an important plant nutrient, widely used as an agricultural fertiliser, and two studies in 2006 suggested that its availability in nature will ultimately limit the capacity of forests to soak up human CO2 (Nature, vol 440, p 922, and Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.0509038103). But until now, no one had quantified the effect that human deposits of nitrogen were having on forests.
Federico Magnani of the University of Bologna in Italy and his colleagues have now done just that for temperate and sub-Arctic (boreal) forests. They looked at 20 clusters of forests, from Alaska to Italy, and Siberia to New Zealand, to see how much carbon they are storing and what is driving the growth.
Young, rapidly growing trees take more carbon from the atmosphere than old trees, so the researchers accounted for this in their calculations.
"If you take away age effects, then despite the large variability between the forests in tree type and climate, you find a surprising correspondence between carbon storage and nitrogen deposition," says Magnani.
The researchers found that on average for every kilogram of nitrogen that is deposited on the forest floor (by rainfall, for example), an extra 400 kg of CO2 is absorbed.
If forests were an isolated system, the removal of CO2 from the atmosphere as the trees grow would be balanced over time by the release of the same gas as dead trees decompose.
"But if you have an input from outside you break the cycle and increase one side," explains Magnani. In this case, the outside input comes from human nitrogen deposits, which are making plants grow faster than they decompose, and the resulting imbalance leads to net carbon storage.
"Even bad things can have a positive side effect," Magnani told New Scientist, "If we want to manage our environment in the right way for the next decades we must acknowledge both positive and negative effects."
But he is not yet arguing that the solution to climate change is to spray forests with nitrogen. The researchers do not yet know how much CO2 is being removed by the nitrogen effect on forests, and Magnani is reluctant to make a back-of-the-envelope estimate.
He points out that the relationship they have come to - 400 kg of CO2 for every 1 kg of nitrogen - is an average, and accurate global calculations would have to take into account the age distribution of trees.
European plants are thought to absorb a significant amount of CO2 - between 7% and 12% of European emissions according to a 2003 estimate.
However the new results do not apply to tropical forests, which remain one of the world's most important land-based carbon sinks. Magnani says other nutrients, phosphorus for instance, could play more critical roles than nitrogen in the tropics.
Finally, in excess, nitrogen can be toxic to plants, causing them to suffer more from drought. "When you try and manipulate nature you have to be careful," says Magnani. "Just as I would not put dust into space to shelter air from incoming solar radiation, in the same way I would not pour nitrogen onto ecosystems to soak up carbon."
Journal reference: Nature (vol 447 p 848)
David Smith, Betsy Youngman, Earth Exploration Toolbook Chapter from TERC
Activity takes between four and six 45-minute class periods. Computer access is necessary.
Most suitable for upper high school and undergraduate. Can be modified to be used in upper middle school classes.
About Teaching Climate Literacy
2.4 Water stores and transfers energy.
Excellence in Environmental Education Guidelines: A) Processes that shape the Earth.
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- Teachers may want to create a summative assessment for each part of the activity since none is supplied.
About the Science
- Good background content for educators and students.
- Comment from scientist: The physical connections between the variables explored in the activity are not well explained and will need further explanation by the educator. Ideally an educator would encourage a hypothesis-based inquiry, which could then lead to an understanding of whether the analyzed variables are connected.
About the Pedagogy
- Student directions are very detailed and well sequenced.
- Screenshots aid student navigation through the software and website.
- Most of the assessment questions are embedded in the student directions. No summative assessment supplied.
- A thorough and complete guide is supplied for educators.
- Students who are not very tech-savvy will need guidance from educator.
- This resource engages students in using scientific data.
The phase velocity of a wave is the rate at which the phase of the wave propagates in space. This is the velocity at which the phase of any one frequency component of the wave travels. For such a component, any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength λ (lambda) and period T as vp = λ/T.
Equivalently, in terms of the wave's angular frequency ω, which specifies angular change per unit of time, and wavenumber (or angular wave number) k, which represents the proportionality between the angular frequency ω and the linear speed (speed of propagation) vp, the phase velocity is vp = ω/k.
To understand where this equation comes from, consider a basic sine wave, A cos (kx−ωt). After time t, the source has produced ωt/2π = ft oscillations. After the same time, the initial wave front has propagated away from the source through space to the distance x to fit the same number of oscillations, kx = ωt.
Thus the propagation velocity v is v = x/t = ω/k. The wave propagates faster when higher frequency oscillations are distributed less densely in space. Formally, Φ = kx−ωt is the phase. Since ω = −dΦ/dt and k = +dΦ/dx, the wave velocity is v = dx/dt = ω/k.
Relation to group velocity, refractive index and transmission speed
Since a pure sine wave cannot convey any information, some change in amplitude or frequency, known as modulation, is required. By combining two sines with slightly different frequencies and wavelengths, cos[(k + Δk)x − (ω + Δω)t] + cos(kx − ωt) = 2 cos(½(Δk·x − Δω·t)) cos((k + ½Δk)x − (ω + ½Δω)t),
the amplitude becomes a sinusoid with phase speed Δω/Δk. It is this modulation that represents the signal content. Since each amplitude envelope contains a group of internal waves, this speed is usually called the group velocity, vg.
In a given medium, the frequency is some function ω(k) of the wave number, so in general, the phase velocity vp = ω/k and the group velocity vg = dω/dk depend on the frequency and on the medium. The ratio between the speed of light c and the phase velocity vp is known as the refractive index, n = c/vp = ck/ω.
Taking the derivative of ω = ck/n with respect to k yields the group velocity: vg = dω/dk = (c/n)(1 − (k/n)(dn/dk)).
Noting that c/n = vp, this indicates that the group speed is equal to the phase speed only when the refractive index is constant (dn/dk = 0), and in this case the phase speed and group speed are independent of frequency: ω/k = dω/dk = c/n.
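The relationship can be checked numerically. The following sketch (not part of the original article) assumes a toy dispersion law and uses the numpy library to differentiate ω(k):

```python
import numpy as np

c = 3.0e8  # speed of light in vacuum, m/s

def n(k):
    # Toy refractive index; a constant n means no dispersion.
    return 1.5 + 0.0 * k

k = np.linspace(1e6, 2e6, 1000)   # wavenumbers, rad/m
omega = c * k / n(k)              # dispersion relation omega(k)

v_phase = omega / k               # phase velocity
v_group = np.gradient(omega, k)   # group velocity d(omega)/dk

print(v_phase[0], v_group[0])     # both ~2.0e8 m/s, i.e. c/n
```

Replacing the constant n with a k-dependent law makes the two printed velocities diverge, which is exactly the dispersive case described above.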
The phase velocity of electromagnetic radiation may – under certain circumstances (for example anomalous dispersion) – exceed the speed of light in a vacuum, but this does not indicate any superluminal information or energy transfer. It was theoretically described by physicists such as Arnold Sommerfeld and Léon Brillouin. See dispersion for a full discussion of wave velocities.
Why would a zombie just walk, when she could lurch? Or clomp? Or even trudge? Monsters Can Mosey–Understanding Shades of Meaning, story by Gillia M. Olson, illustrated by Ivica Stevanovic, is an excellent read-aloud choice for upper elementary students to demonstrate how vocabulary choices can make writing more exciting and vivid.
It presents 18 different words with similar but distinct meanings, as zombie child Frankie is encouraged by her zombie mother to select a signature way of walking.
The illustrations are cartoonishly ghoulish and will captivate a younger audience without frightening them. Characters have a gray-green pallor, unkempt hair, torn clothing, and a few stitches holding them together, yet their wide-eyed faces give them a cute, silly appearance.
What did we do after we read the book? Picture a library full of 3rd graders performing their best zombie walks, with arms outstretched and vacant expressions. These were the instructions, as we took 5 slow steps in each style:
- Lurch: an awkward staggering walk
- Trudge: walk like it is really hard work
- Lumber: walk clumsily and heavily
- Clomp: walk heavily and noisily
- Stomp: walk heavily, noisily and usually angrily
- Mosey: walk in an unhurried or aimless manner
- Stride: walk with large steps usually with purpose
You can guess how they walked out of the library, after their teacher lined them up. All monsters need a good walk.
Magnoliidae
The oldest definitive angiosperm fossils are from the Early Cretaceous (about 146 million to 100 million years ago). The most abundant fossils are pollen grains (this is because the outer coat, or exine, contains sporopollenin, a chemical that is extremely resistant to decay). Leaves, wood, and well-preserved flowers also have been recovered from Early Cretaceous sediments. At one time, angiosperms were thought to have appeared suddenly (“explosively”) and were so diverse in structure that it was theorized that they must have originated well before their earliest remains appeared in the fossil record. When the first definitive angiospermous pollen grains and leaves were discovered, however, they were in fact similar to each other and accounted for a small proportion of the fossil plant material. This would suggest that they had evolved from their ancestors not long before they first appeared as fossils. During the course of the Cretaceous (i.e., over a span of 80 million years) many families emerged, and significant structural variations became evident.
The earliest definitive angiospermous pollen grain is known as Clavatipollenites, which recent studies suggest is probably most closely related to the order Laurales, although it shows some links to the Magnoliales. It first appeared in the rocks of the Barremian (130 million to 125 million years ago), or in those of the slightly earlier Hauterivian (134 million to 130 million years ago), of the Early Cretaceous, about 130 million years ago, and in such diverse regions as England, Australia, and the United States. Clavatipollenites was the oldest known pollen to show a typical angiosperm construction of the outer exine into a perforated tectum (roof)—giving the surface of the grains a network (reticulate) appearance—columellae (pillars), and foot (floor). It had a single elongated aperture (monosulcate) and closely resembled the pollen of Ascarina (Chloranthaceae).
Other types of pollen appeared a little later in the Cretaceous, between 108.5 and 100.5 million years ago. Also appearing about this time were the oldest fossils of the Magnoliales so far discovered—pollen grains of the Winteraceae. Another monosulcate pollen type that arose early in the fossil record in some primitive Magnoliidae—including Degeneriaceae, Eupomatiaceae, and some Annonaceae—resembles that found in some gymnosperms, having a smooth unperforated surface and a more or less homogenous (structureless) exine. It is debatable which pollen type is more primitive (the tectate-columellate or homogenous type), but they are not fundamentally different from one another, because both have been found within Polyalthia (Annonaceae).
There are evolutionary advantages in the tectate-columellate type of pollen. These grains more easily expand and contract with changes in humidity, contributing to the longevity of the pollen. Incompatibility proteins operate via two basic methods to promote cross-pollination. In the most common method the proteins, which are stored beneath the tectum in the pollen of many plants, are “recognized” by matching proteins produced in the stigma or styles; this mechanism prevents self-pollination and contributes to greater genetic diversity. Triaperturate pollen, found among the other dicotyledon classes, began to appear later.
Leaves as well as rather inconspicuous flowers also appeared during the Cretaceous. The first angiosperm leaves evolved contemporaneously with the tectate-columellate pollen described above. They had irregular, basically pinnate venation with a midrib and a secondary vein. Secondary and smaller tertiary veins were poorly defined. The leaves were small and of a simple elliptical or ovate shape. Leaves with features characteristic of Magnoliales also appeared during this time in rock strata of the eastern United States.
In 1990 Aptian deposits (125 million years ago to 112 million years ago) in Australia revealed a small fossil with very thin herbaceous stems, leaves, and female inflorescences. Clavatipollenites pollen was the only angiospermous type found in the same strata. This new fossil has been linked with several extant angiospermous families; its leaves resemble those of Saururaceae, Piperaceae (Piperales), and Aristolochiaceae (Aristolochiales), and its reproductive organs resemble Chloranthaceae (Piperales). If the new fossil had also contained the Clavatipollenites pollen, further links with Chloranthaceae and Aristolochiaceae would have been suggested. An ancestor of such a plant, with a small, rhizomatous perennial form and diminutive reproductive organs, might represent the ancestral angiosperm from which the first monocotyledons and rhizomatous-herbaceous dicotyledons diverged. Furthermore, as the lack of pre-Albian (112 million to 100 million years ago) fossil angiosperm wood might indicate, weedy dicotyledon shrubs, which had been considered ancestral to other angiosperms, may have evolved from a plant similar to this Australian fossil.
Socratic Method Workshop
Transcript of Socratic Method Workshop
They find memorization fun and express interest in lots of new facts
At this age they naturally ask a lot of questions. They are less interested in facts than in asking "why?"
Students begin to apply logic to all academic subjects
Abstract thinking begins to mature at this age. The ability to communicate thoughts and ideas becomes more pronounced and clear. Grammar Stage + Logic Stage = Intelligent Communication.
If the Logic Stage has gone well, this is the time when the students learn to write and speak their thoughts with force and originality.
By this time they have the ability to probe deeper and refine their own questions through clear logical communication processes. If all 3 stages are properly executed,
the students' thinking should follow a clear, organized path. But instead, most of our kids seem to think in a scattered way. Purpose of the Socratic Method: "The Socratic professor aims for 'productive discomfort,' not panic and intimidation."
The aim is not to strike fear in the hearts of students just so that they come prepared to class;
the aim is to encourage students to clearly articulate the values that guide their lives, and to show that their values and beliefs should withstand scrutiny when challenged. It forces students to assess whether what they learn is true or false and to be accountable for their reasoning through facts and logic. As teachers we can't do the thinking for them, but we can guide and assist them. *What kinds of problems have you encountered with students getting frustrated about having to think for themselves? The Socratic Method vs. the "copy what I do" method: What is the Socratic Method? It keeps students' minds engaged by reaching a higher level of thinking. Examples range from a U.S. Government class to an elementary class: watch how this fourth-grade teacher uses a line of questioning in a structured, text-based discussion to help students recognize important elements and themes in a piece of art. It's your turn! Teacher:
1. Set conversational guidelines
2. Discuss content or topic
3. Promote the first questions. *Why is there such "disconnect" between what they learn and how they think? 35% of students dislike school. Which method would you prefer for your students? The teacher periodically summarizes ideas and thoughts to help guide students to dig deeper into topics. The method relies on thought-provoking questions (not interrogation or debate) and uses "asking and answering" to examine the values, principles and beliefs of students. Together the class looks for the best solution and, more importantly, the WHY. It holds students accountable for what they know rather than their ability to regurgitate.
NO rote memorization. Students practice respectful interaction and discussion on any topic maturely and openly. 1. Student responds with the first point.
2. The 2nd student will either add to or disagree with the first point and follow with an explanation. Teacher:
1. Summarize the first couple of points.
2. After summary, refine the questions or ideas. Do not attempt to answer their questions right away! Welcome the "crazy idea" that offers a new perspective on the topic.
Discourage those ideas which are not serious. Keep the discussion FOCUSED on the subject matter and intellectually responsible. Teacher:
Link the multiple concepts/points together.
Summarize what has been addressed and/or resolved. Students are too practiced at "doing school" and discovering what they need to know just to get by the next test.
Don't let them learn just "enough" to get by. This is IMPORTANT because Students:
Go home but feel challenged to continue pondering the points. The Socratic Method uses respect as a cornerstone and serves as a great foundation for essay writing and understanding texts. Instead, students have become lazy in their thinking and tend to manipulate the adult to do the thinking for them. 1. Don't talk over others.
2. Listen to each other.
3. Participation is required! (No silence)
4. It's a discussion for learning, not debate.
5. No single comment as a response. Prior to discussion, teacher should:
Know the information well.
Prepare intelligent questions before class.
Prepare a guide of where you want to lead the discussion and plan your questions accordingly. Don't be a sage on the stage, or guide on the side.
Be willing to say, "I don't know the answer to that question." NO speeches or long lectures. Brevity and short intervention from the teacher are most welcome.
Periodically summarize what has been and what has not been addressed and/or resolved. Students:
1. Respond or continue with more points and ideas
2. Begin to discuss
3. Propose questions
4. Explain confusing concepts to each other
5. Continue discussion and teach/learn from each other. Do not give correction even if the answer is wrong. Stimulate the discussion with probing questions.
Ask questions that begin with "Why?" "How?" or "What is the meaning of?"... Ask the questions in several different ways for comprehension.
Be comfortable with silence. Do NOT fill the silence with a conversational void; silence creates a kind of helpful tension --> "Productive Discomfort." Do not ask, "What would you do next?" Calling on someone in a non-threatening way tends to activate others who might otherwise remain silent.
Encourage students to support their ideas. Do NOT ask "yes" or "no" questions!!
If it is necessary to ask "yes or no" questions --> always use follow-up questions, because these kinds of prompts promote memorization rather than the logical thinking process. Follow-up questions:
"What's the reason you chose that answer?"
"Why does that make sense to you?"
"What does it mean to you?" It's YOUR turn!
Let's Practice :) How Socratic Method Works
and the important components? If the discussion is going off the topic, challenge more probing thoughts --> either more in depth or as a lead-in into your next lesson. *Do you have any questions or comments? Ask the students to prove or explain the answers that they came up with --> "WHY?" Do not let the students regurgitate answers from readings and lectures --> again, hold them accountable! So where did we get off track? What went wrong? How can we as educators effect the change we want and need in this country? Let's review :)
Student also begin to pay attention to the relationship between different fields of knowledge and to the way facts fit together in a logical framework The "AH HA" moments are absolutely priceless, because of their ability to connect ideas Wait for students to respond--> "10 sec wait" rule before attempt to re-phrase your questions! If student does not respond right away. Form into your group according to subjects
5 minutes to discuss the topics from handout
1 person is a teacher
1 ill-prepared student
2 well-prepared students
Or anyway you like it
Perform and evaluate by peers www.miriamchin.com Who is Socrates? Socrates was a Greek philosopher.
He was a teacher and a lover of wine and conversation.
His famous student, Plato, called him "the wisest, and justest, and best of all men whom I have ever known" (Phaedo). Philosophy, the love of wisdom, was for Socrates itself a sacred path, a holy quest -- not a game to be taken lightly. He believed -- or at least said he did in the dialog Meno -- we have unfortunately lose touch with that knowledge at every birth, and so we need to be reminded of what we already know (rather than learning something new). Socrates himself never wrote any of his ideas down, but rather engaged his students in endless conversation.
Plato, his famous students, reconstructed these discussions in a great set of writings known as the Dialogs.
It is difficult to distinguish what is Socrates and what is Plato in these dialogs. But the idea originated from Socrates. What was Socrates' philosophy? Why is Socratic method name after Socrates? 44% of dropouts under age 24 are jobless More than 1.2 million students drop out of school every year. -->That's one every 26 seconds. American students vs Students in other countries Even America's top math students rank 25th out of 30 countries when compared with top students elsewhere in the world. American students rank 25th in math and 21st in science out of 30 industrialized countries. By the end of 8th grade, U.S. students are two years behind in the math compare to peers in other countries. *Examples of "disconnect"? |
The goal of the curriculum is based on the idea that children learn through play and exploration. Achieving a balance between adult stimulation and independent exploration and discovery is how that goal is accomplished. Finding this balance revolves around the space the children occupy, the equipment and opportunities offered, observations of individual children, and the expertise of the teacher. Children learn best from the activity or toy they choose. Because of this, the curriculum should offer the flexibility of addressing and respecting children's immediate interests. An objective of the curriculum should be to create those "teachable moments" throughout the day, whether children are exploring and learning on their own or interacting with a teacher or peer.
Curriculum for infants should provide security, predictability, and the opportunity to explore safely and freely. Infants spend much of their first year testing their environment to see who they can trust to meet their needs. It is very important that these individuals, who infants learn to trust, remain available within a close proximity for that child. If this can happen, infants have the motivation to explore and experiment within their environment independently. The environment should offer experiences that challenge motor, social, cognitive, and language development. Sensory experiences are key at this early age for developing a sense of their environment and themselves.
Curriculum for toddlers should also provide predictability and security. Toddlers learn with their whole bodies. They like to explore with their hands, feet, mouth, and eyes. Toddlers need the opportunity to express their energy and curiosity in a positive and enriching environment. The emergence of self-awareness is critical during these years. Toddlers need a program that will promote a positive self-image and encourage a child’s natural desire to learn. They too need opportunities to enhance motor, social, cognitive, language and self help skills.
Curriculum for preschool and Kindergarten children is derived from many sources such as the knowledge base of various disciplines, society, culture, and children's and parents' desires. Since children learn best from the activity or theme they choose, it is most effective to let the children guide the curriculum. A talented teaching team uses careful observation and attention to pull the children's curiosities together in a cohesive and meaningful way. The daily schedule represents a balance of structure and freeplay comprised of developmentally appropriate activities designed to help each child toward independence and enhanced self-esteem. Our activities encourage children's awareness of the value of uniqueness and cultural diversity.
Curriculum is what happens with the child the entire time they are in out-of-home care. From diaper changes to a painting activity, the curriculum will address all areas of development and strive to meet the needs of the whole child.
An artificial kidney is a filtering component of a dialysis machine that works to clean the blood in individuals with kidneys that do not function properly, according to the National Kidney Foundation. The artificial kidney filters away waste products such as excess fluids, urea and potassium.
Dialysis procedures that utilize artificial kidneys are referred to as hemodialysis, explains Healthline. During a hemodialysis procedure, a catheter is first inserted in the leg, arm or neck. The catheter creates a pathway for waste products and chemicals to filter out of the blood. Individuals are usually required to undergo treatments up to three times per week.
While hemodialysis treatments are beneficial in improving kidney function, the procedures can also cause complications in certain individuals, notes Mayo Clinic. Low blood pressure is a common side effect of the treatment in individuals with diabetes and may also be accompanied by breathing difficulties and stomach upset. Muscle cramps may also occur; however, this symptom can often be remedied by altering intake of fluids and sodium between treatments. Additional side effects can include itchy skin, low iron levels in the blood and high blood pressure. Individuals who undergo hemodialysis treatments may also experience leg pain and sleep disturbances due to hindered breathing.
We spent the two to three weeks before Halloween exploring related themes. I began with a monster theme by reading Laura Numeroff's fun book about raising a pet monster, 10 Step Guide to Living With Your Monster. It is a very cute book and is not scary at all. The children then had an opportunity to draw and paint their own versions of a pet monster, name it, and tell or write a short story about it. We played the Guess Who deluxe version that includes monsters and we used a "build-a-monster" toy to create our own monsters again.
Here is a link to more children's books about Monsters and Halloween themes.
This week we focused on the pumpkin. Younger children engaged in painting a round orange pumpkin on a sheet of paper. When dried, we used sticky-backed foam sheets to cut out jack-o-lantern faces. The older children worked on carving their own jack-o-lanterns, with an appropriate amount of assistance as needed. For some children, this was a first time experience. This was no easy task for the therapists. It required extra prep time, lots of hands on work, and clean-up time. But it was well worth all of the effort.
Some parents may have questioned, silently, whether it was a productive use of therapy time. The answer I would give is a resounding, "Yes". Here are some of the lessons learned.
First of all, any experience broadens the child's knowledge of the world around him. It builds a larger framework of prior knowledge on which to draw for learning and developing in all areas: language, concepts, processes, motor, sensory, social, and emotional skills. "Activating prior knowledge" is a term one often sees in literature about helping children with learning difficulties.
Language, vocabulary, concept development: During the activity much vocabulary was explored from the physical descriptive words (round, sphere, smooth, ridges, slimy, gooey, slippery), the labeling of parts (stem, skin, rind, pulp, seeds), categorization of pumpkins (not as easy as you would think since pumpkins can be classified as either/both fruit and vegetable), and new vocabulary (carve/cut, pumpkin/jack-o-lantern, light/illuminate). Speech tends to flow much more easily from a child whose attention is captivated by a task and whose imagination is engaged. Also learning tends to "stick" when it occurs in a functional manner or with practical application.
Concepts and Processing: Children learned to process the sequence of the tasks; this might be difficult for someone who has not experienced the need to cut off the top first so that they are able to scoop out the insides. They had to listen to and follow directions to understand how to use the tools and what to do with each one. They had to process the idea that the toothed edge of the saw needed to be pointing the direction in which one was cutting. We also threw in some concept processing tasks during the carving time: Which is larger a pumpkin or an orange? Is a pumpkin harder or softer than a banana? ...
Motor and Sensory Processing: It goes without much explanation that this activity involves motor skills: holding, sawing, pushing, and manipulating tools. The feel of the pumpkin addresses sensory processing. Many children with or without Autism had difficulty touching the gooey insides and picking out seeds. But all of them experienced it, if only briefly. Some really conquered their difficulties and plunged into the task.
Emotional: One child in particular is a new client with an Emotionally Disturbed label. He has been standoffish and minimally engaged in the therapy process. Having come into the office reluctantly, his eyes seemed to light up when asked if he wanted to carve a real pumpkin. He was engaged during the entire process, he spoke more readily to the therapist, and he even smiled during the task. I think the next time he comes to speech therapy, he will not be dragging his heels. We made a real connection through the activity.
Additionally, many of the children had some fear of the strange jack-o-lantern faces. This activity took a very innocent and non-threatening pumpkin through the transformation process. Their hands made that process happen. The end result was what they fashioned it to be. Now they have full knowledge and understanding that the scary jack-o-lanterns are all just humble pumpkins decorated or carved by someone's hands. This knowledge can only serve to reduce, if not eliminate, the old fears.
Social: All of the above feeds into the social domain because to be social we need to develop our language skills. Additionally, such a task engages even the least socially developed child to at least look at you and what you are doing and to pay closer attention, thus increasing communicative interest. When two or more children are working at the same time, it fosters a desire to share the experience, to check out the other one's work, and to show off their own creations.
Expression: Now that the children have this experience, they will be willing to talk about it, if not initiate the discussions. They will engage in telling family members about the experience, explaining the process, and describing what it was like. Only reading a story about carving pumpkins would not build that kind of excitement for them. For the older children, they will be asked to write sentences or a short paragraph about the experience. They now will have greater understanding and increased vocabulary for these tasks.
Plastic pollution is a costly, destructive, and growing problem worldwide. In fact, at the current rate, the annual amount of plastic waste entering Earth’s ecosystems could almost triple by 2040, according to a study co-authored by researchers at The Pew Charitable Trusts and published in the peer-reviewed journal Science. The paper was published online July 23 and appeared in the journal’s Sept. 18 print edition.
The paper, “Evaluating Scenarios Toward Zero Plastic Pollution,” also found that annual flow rates of plastic pollution can be reduced by nearly 80% in the next 20 years if governments and industry around the world take immediate, ambitious, and coordinated action. The authors used first-of-its-kind modeling to examine stocks and flows of municipal solid waste and four sources of microplastics under five scenarios and across a range of potential solutions.
Achieving the 80% reduction would require the full suite of those actions, which the authors term the “system change scenario”: reducing plastic use, implementing reuse systems, substituting plastic with other materials where appropriate, improving recycling strategies and rates, expanding waste collection, and building better disposal facilities. However, even if the 80% reduction is achieved, 710 million metric tons of plastic waste could enter aquatic and terrestrial ecosystems between 2016 and 2040.
The study identifies several challenges to overcome if the world hopes to achieve the potential of the system change scenario. These include:
- Scaling waste collection to all households at a global level: This monumental task would require more than 1 million new households to be connected to collection services every week between 2020 and 2040.
- Overcoming mismanaged plastic waste: This category includes dumpsites, openly burned waste, and plastic released directly into aquatic or terrestrial environments. The export of waste from high-income to low- and middle-income countries also poses a comparatively small but growing problem to address.
- Filling data gaps: To fully and accurately understand the effectiveness of consumer, corporate, and policy actions, the researchers said they would need more empirical data, especially on waste management in middle- and low-income countries.
Realizing the potential of fundamental system change would also require cutting the production and consumption of newly produced plastic by 55% from 2016 levels to 2040 relative to business as usual. Although implementing system change would be daunting from a logistical and policy perspective, it would not be cost prohibitive, the study found. In fact, due to reduced plastic production, increased recyclability, and other changes, the researchers estimate that global net waste management costs would be about 18% lower over the study period than if no action were taken.
“Evaluating Scenarios Toward Zero Plastic Pollution” serves as the technical underpinning for a complementary report, “Breaking the Plastic Wave,” which was released by Pew and the London-based sustainability consultancy SYSTEMIQ on July 23rd.
Winnie Lau is a senior manager with The Pew Charitable Trusts’ preventing ocean plastics project. Jim Palardy is a project director with The Pew Charitable Trusts’ conservation science project.
Chip Design and ASIC Verification Basics in 2021
In a world dominated by apps and software, little or no attention is given to the actual hardware that allows apps and software to run, or to the work of checking that hardware, known as ASIC verification. The chip is the smallest part of any computer or laptop component, yet it is the most important element of any hardware system. Also known as an IC (Integrated Circuit) or ASIC (Application-Specific Integrated Circuit), each chip is designed to implement a specific function.
Despite looking tiny, a single ASIC can hold billions of transistors, which makes the fabrication process extremely costly. The same holds true for the verification process, which goes through several stages to ensure that the chip works as designed.
ASIC verification is a challenging task that requires hours of testing and numerous optimizations. Although no two projects are alike, several chip design verification steps can be found in any successful verification project.
The verification effort can follow several approaches. The top-down approach works from the system down to individual components, while the bottom-up approach builds from individual components up to the system. There is also a platform-based approach, in which the developer verifies IPs within an existing platform.
Finally, a system interface-based approach models each block at the interface level and is well suited to final integration verification.
Several technologies are used in ASIC verification. They break down into Functional Verification, Formal Verification, and Emulation & Acceleration:
1. Functional Verification: ensures the design works according to the original specs.
2. Formal Verification: uses formal mathematical methods to prove that the design requirements are met.
3. Emulation & Acceleration:
- In-system verification
- Highest performance
- Highest capacity
- Real system environment
Tools Used for Verification
To verify the chip, engineers can use various tools, ranging from pre-developed simulation models and emulation to ASIC test chips and FPGAs. With emulation, for instance, they can verify designs on real hardware or through interfaces with real HW components, while FPGA prototyping enables automatic partitioning and routing.
The Verification Process
Step 1. Specifications
The specification is the first step of ASIC design verification. It consists of a set of requirements that should be met and hold true across all operating conditions of process, voltage, and temperature, as well as across all mismatches for a particular circuit. How much detail a spec contains depends on the situation, but at a minimum it covers all the information needed for the design in an unambiguous manner.
In this first step, it is also paramount for the engineers to consult with the architect in order to fully understand the intended functionality of the design.
Step 2. Implement the Verification Environment
After the specs have been created and a verification plan is in place, engineers use a hardware verification language (HVL) to build the verification environment that checks the ASIC. This process requires advanced programming skills.
Step 3. Running Regressions
Next, the tests are run many times in order to hit as many potential scenarios as possible. At this point, random tests are preferred over directed (normal) tests, because they have a higher chance of exposing hidden weaknesses.
The HVL code is then compiled together with the HDL/RTL design code, and the result is simulated. The simulation produces waveforms, coverage results, and logs, which the verification engineer analyzes to ensure that the run has passed. A minimal sketch of such a regression loop follows.
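To make Step 3 concrete, here is a minimal, hypothetical regression runner in Python. Every name here is illustrative, and the `simulate` stub stands in for launching a real HDL simulation from a simulator-specific script:

```python
import random

def simulate(test: str, seed: int) -> bool:
    """Stand-in for launching a real HDL simulation with a given seed.
    A seeded RNG injects rare fake failures so the loop has something
    to report; a real flow would invoke the simulator instead."""
    return random.Random(seed).random() > 0.02  # ~2% injected failures

def run_regression(test: str, runs: int = 100) -> list[int]:
    """Run `test` with many random seeds and return the failing seeds,
    which the verification engineer would then rerun and debug."""
    failing = []
    for _ in range(runs):
        seed = random.randrange(2**31)
        if not simulate(test, seed):
            failing.append(seed)
    return failing

print("failing seeds:", run_regression("smoke_test"))
```

Keeping the failing seeds is the point of the exercise: a random test is only useful if a failure can be reproduced exactly by rerunning the same seed.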
Step 4. Sign Off
This is the final stage of the chip design verification process. At this point, the verification engineer has to decide whether the process is actually complete. While it's hard to guarantee that every internal transition and state was hit (in other words, that the final product works as intended), there are some mandatory checks you can perform to minimize the number of bugs; a small sketch of such a sign-off gate follows the list:
- All tests produce zero failures
- The collected coverage is at 100%
- No critical bugs have been found during the final testing phases
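As a small illustration, the criteria above can be expressed as a simple pass/fail gate over aggregated regression results. This is only a sketch; the field names are assumptions, not part of any real tool.

```python
from dataclasses import dataclass

@dataclass
class RegressionSummary:
    failures: int            # tests that produced failures
    coverage_pct: float      # collected coverage, as a percentage
    open_critical_bugs: int  # critical bugs still open after final testing

def ready_for_sign_off(s: RegressionSummary) -> bool:
    """Mirrors the three sign-off criteria listed above."""
    return (s.failures == 0
            and s.coverage_pct >= 100.0
            and s.open_critical_bugs == 0)

print(ready_for_sign_off(RegressionSummary(0, 100.0, 0)))  # True
```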
No matter your needs for chip design verification services, Tremend can help. With extensive expertise in the chip design verification market, we’re always up to date with the latest testing technologies. Discover our range of chip development services on our Chip Design Verification page.
This Y1 common exception word (CEW) pack allows pupils to practise reading and spelling a selection of the 45 tricky words for this cohort.
Each worksheet includes two words for pupils to read, spell and understand, with an opportunity to develop letter formation using handwriting line guides.
There are four PDF worksheets, plus an answer sheet, in this pack, covering the following common exception words:
- by, my
- one, once
- there, where
What are common exception words?
Common exception words are words which don’t follow the common rules of spelling, or which use letter combinations to represent sounds in an uncommon way.
Year 1 common exception words
National Curriculum English programme of study links
read common exception words, noting unusual correspondences between spelling and sound and where these occur in the word
spell common exception words
write from memory simple sentences dictated by the teacher that include words using the GPCs and common exception words taught so far
Hubble Discovers a Graveyard of Planets
The Hubble Space Telescope has discovered the rocky remains of planetary material scattered through the atmospheres of two white dwarfs in the Hyades star cluster. This suggests that these two stars may once have had their own planetary systems, which at some point met a dark fate.
Jay Farihi of the University of Cambridge and his team used Hubble’s Cosmic Origins Spectrograph (COS) to detect the faint signature of carbon and silicon in the two white dwarfs. The ratio of the two elements suggests that these “dead” stars are consuming rocky material of a similar chemical composition as Earth.
“The one thing the white dwarf pollution technique gives us that we won’t get with any other planet detection technique is the chemistry of solid planets,” said Farihi. “Based on the silicon-to-carbon ratio in our study, for example, we can actually say that this material is basically Earth-like.”
White dwarfs form after stars like our Sun have used up their fuel, expanded into red giants, and shed their outer layers as a planetary nebula. The white dwarf left behind can survive for billions more years. During the red giant stage, any planetary system that was once in orbit around the star will be severely disrupted, and the extreme tidal stresses of the newly formed white dwarf will rip apart any orbiting body, grinding it to dust.
The significance of this latest finding is that by analyzing the light from white dwarfs, we aren’t only able to see evidence for planetary systems around stars in star clusters, we’re also looking into the future of our solar system. In a few billion years’ time, our own Sun will become a white dwarf, and Earth, as well as the other planets of our solar system, will join their own planetary graveyard.
A supercomputer simulation suggests it formed quite rapidly after a larger body hit Earth
The Moon may have formed in hours, rather than in months or years as has been believed, according to an advanced astrophysical model devised by NASA scientists with the aid of supercomputers, and published last week in the Astrophysical Journal Letters.
The widely accepted theory of the Moon’s origin holds that a Mars-sized body called Theia hit a primitive Earth some 4.5 billion years ago, and that the resulting debris coalesced into the Moon over a period of months or years. Using a higher resolution than was available when that theory was devised, the new model shows the satellite forming much more rapidly, from material originating from both Earth and Theia.
The simulation shows Theia, a Mars-sized planet, colliding with a mini-Earth. The outer crust of the planets is thrown into orbit from the impact, quickly coalescing into two unstable satellites, the smaller of which stabilizes into the Moon, while the larger is reabsorbed into Earth.
The new theory helps explain why the Moon shares a similar mineral composition with Earth, particularly towards its crust – an attribute which is difficult to explain if it is supposed to be composed almost entirely of debris from Theia, as the prevailing theory holds.
Other existing theories that purport to explain the similarity in chemical composition between Earth and its satellite, like the synestia theory that suggests the Moon formed inside a swirl of vaporized rock from the collision of Theia with Earth, do not satisfactorily account for its orbit.
NASA hopes to use similarly advanced high-resolution modeling in conjunction with new samples brought back from its planned Artemis missions to test this and other theories of the Moon’s evolution. The astronauts of Artemis, the US space agency’s much-hyped return to manned space missions, will be tasked with taking specimens from deeper beneath the Moon’s surface, as well as from seldom-explored parts of the satellite, though the operation’s launch remains plagued by delays and technical malfunctions.
A limb (from the Old English lim), or extremity, is a jointed bodily appendage that humans and many other animals use for locomotion such as walking, running and swimming, or for prehensile grasping or climbing. In the human body, arms and legs are commonly called upper limbs and lower limbs, respectively. Arms are connected to the torso or trunk at the shoulder and legs are connected at the hip girdles. Many animals can use their forelimbs (which are homologous to arms in humans) to carry and manipulate objects, while some can use them to achieve flight. Some animals can also use hind limbs for manipulation.
Human legs and feet are specialized for two-legged locomotion – most other mammals walk and run on all four limbs. Human arms are weaker, but very mobile, allowing them to reach at a wide range of distances and angles, and end in specialized hands capable of grasping and fine manipulation of objects. Though human dexterity is relatively unique, grasping behavior is widespread among tetrapods.
The overall patterns of the forelimbs and hindlimbs are so similar ancestrally, and branch out in such similar ways, that their segments are given shared names. Limbs are attached to the pectoral girdle or pelvic girdle. The single bone of a limb's proximal segment is the stylopodium; the two bones of the middle segment form the zeugopodium. The distal portion of the limb, that is, the hand or foot, is known as the autopodium. Hands are technically known as the manus, and feet as the pes, each composed of carpals or tarsals plus digits. As metapodials, the metacarpals and metatarsals are analogous to each other.
Limb development is controlled by Hox genes. All jawed vertebrates surveyed so far organize their developing limb buds in a similar way, with growth proceeding from the proximal to the distal part of the limb. At the distal end, an apical ectodermal ridge (AER) directs outgrowth and the differentiation of skeletal elements, while a zone of polarizing activity (ZPA) at the posterior margin of the bud coordinates the differentiation of the digits.
- Anatomical terms of location
- Anatomical terms of motion
- Phantom limb
- Descending limb of loop of Henle
- Ascending limb of loop of Henle
- "Limb". medical-dictionary.thefreedictionary.com. Retrieved 16 June 2017.
- Sustaita, Diego; Pouydebat, Emmanuelle; Manzano, Adriana; Abdala, Virginia; Hertel, Fritz; Herrel, Anthony (2013-01-03). "Getting a grip on tetrapod grasping: Form, function, and evolution". Biological reviews of the Cambridge Philosophical Society. 88. doi:10.1111/brv.12010.
- "GEOL431 - Vertebrate Paleobiology". www.geol.umd.edu. Retrieved 2019-12-20. |
Extreme-Weather Winters Becoming More Common in U.S., Study Shows
This past July was Earth’s hottest month since record keeping began, but warming isn’t the only danger climate change holds in store. Recent years have seen a dramatic increase in the simultaneous occurrence of extremely cold winter days in the Eastern United States and extremely warm winter days in the Western U.S., according to a new study. Human-caused emissions of greenhouse gases are likely driving this trend, the study finds.
In the past three years alone, heat-related drought in the West and bitter cold spells in the East have pinched the national economy, costing several billion dollars in insured losses, government aid, and lost productivity. When such weather extremes occur at the same time, they threaten to stretch emergency responders’ disaster assistance abilities, strain resources such as inter-regional transportation, and burden taxpayer-funded disaster relief.
Understanding the physical factors driving extreme weather could provide decision-makers with more reliable information with which to prepare for weather disasters, while understanding the likelihood of droughts could help engineers better plan the development and management of infrastructure to provide reliable water supplies.
The new study, published in the Journal of Geophysical Research: Atmospheres, finds that the occurrence and severity of “warm-West/cold-East” winter events, which the authors call the North American winter temperature dipole, increased significantly between 1980 and 2015. This is partly because winter temperature has warmed more in the West than in the East, but the authors found that it also has been driven by the increasing frequency of a “ridge-trough” pattern, with high atmospheric pressure in the West and low atmospheric pressure in the East producing greater numbers of winter days with extreme temperatures in large areas of the West and East at the same time.
“What we’ve found is that this particular atmospheric configuration connects the cold extremes in the East to the occurrence of warm extremes ‘upstream’ in the West,” said lead author Deepti Singh, a post-doctoral research scientist at Columbia University’s Lamont-Doherty Earth Observatory.
Despite long-term warming across most of the globe, some regions can experience colder than normal temperatures associated with anomalous circulation patterns that drive cold air from the poles to the mid-latitudes. In fact, circulation patterns that facilitate such extremes are potentially a response to enhanced warming, the authors point out.
“Although the occurrence of cold extremes is often used as evidence to dismiss the existence of human-caused global warming, our work shows that the warm-West/cool-East trend is actually consistent with the influence of human activities that have modified Earth’s climate in recent decades,” Singh said.
Looking back at 35 years of temperature data, the scientists found that the winters of 2013-2014 and 2014-2015 had the greatest differences between the U.S. East and U.S. West. Much of the Western U.S. was exceptionally warm and dry, with record-low soil moisture and mountain snowpack, while the Eastern U.S. faced bitter cold spells and blizzards.
The simultaneous occurrence of extreme Western warmth and extreme Eastern cold will likely decrease over time as warming reduces the occurrence of cold winters in the East. Still, the researchers project that some extremely cold events will still occur even with high levels of global warming.
“We can absolutely expect further increases in hot events if global warming continues,” said co-author Noah Diffenbaugh, an associate professor in Earth System Science at the School of Earth, Energy & Environmental Sciences and a senior fellow at the Stanford Woods Institute for the Environment. “But our results also highlight how complex climate change can be. We should be prepared for both warm and cold extremes – sometimes simultaneously – now and in the future.”
Other coauthors of the study are Justin Mankin of Lamont-Doherty Earth Observatory and NASA Goddard Institute for Space Studies, Daniel Horton of Northwestern University, and Daniel Swain of the UCLA, who along with Singh are former members of Diffenbaugh’s research group; and Professors Leif Thomas and Bala Rajaratnam of Stanford.
This article is adapted from a release written by Rob Jordan of Stanford Woods Institute for the Environment.
Polar Bear Habitat: Life on the Ice
The polar bear, or Ursus maritimus (sea bear), probably evolved about 150,000 years ago from brown bears [source: Live Science]. A polar bear can actually mate successfully with a brown bear and the resulting offspring is fertile, which gives more evidence of a relationship between the two. There are a lot more brown bears out there than polar bears, though. Brown bears number more than 200,000 worldwide [source: WWF]. Polar bears only number about 23,000.
Polar bears live only in the Northern Hemisphere – you won't find them at the South Pole. The 23,000 live in 19 separate populations throughout the Arctic, in only five countries: the United States (Alaska), Canada, Russia, Greenland and Norway. About 60 percent of the population lives in Canada.
Life in the Arctic is harsh: The bears live in total darkness between October and February, and the temperature can drop as low as -50 F (-45 C) in winter [source: Polar Bears International]. And that's exactly how they like it.
Polar bears are built for extreme cold. They experience almost no heat loss: Two layers of fur and a blubber layer up to 4.5 inches (11.5 centimeters) thick keep them so well insulated, they'll overheat if they run. The areas that lack this insulation – ears, tail and muzzle – are especially small, minimizing non-insulated surface area.
Polar bears mostly walk slowly, following their favorite prey, the seal, from ice sheet to ice sheet. They need the ice to hunt. In warmer months, when ice sheets get smaller, the bears will walk hundreds of miles to find solid spreads of ice.
Polar bears can walk up to 20 miles (30 kilometers) per day, for several days in a row, relying on tiny bumps on the bottoms of their feet to keep them from slipping on the ice. They'll swim, too, both to cool off after a meal and to bridge the gap between ice sheets when they're following seals. Polar bears use their front paws to paddle and their hind legs to steer (imagine the most powerful doggie paddling ever). They'll go only slightly under the water when they swim, and their nostrils close up when they're submerged. As much as they thrive on the ice, they're strong swimmers. Polar bears have been tracked swimming up to 60 miles (100 kilometers) at a time, and at up to 6 miles per hour (10 kilometers per hour) [source: WWF].
Aside from their dependence on staggering cold, one of the biggest differences between polar bears and other bears is that polar bears don't hibernate. Females go into a sort of semi-hibernation toward the end of their pregnancy, but they don't experience the drop in heart rate and body temperature that characterizes real hibernation. They mostly just rest and sleep a lot in the months immediately before and after they give birth.
However, births are declining. Of the 19 polar bear populations, at least five are known to be shrinking dramatically. In at least one area, the creatures are reproducing at just 20 percent of the rate they were two decades ago [source: NBC News].
This serious drop in population is due to climate change, and it has a lot to do with the way polar bears hunt.
When we talk about home values for child development, we are not talking about the prices of property, nor any school district catchment area. Home values in the context we will write about here have to do with the behaviours that are deemed ‘important’ to a child’s family, while they are in their home environment. These practices, understandings and cherished behaviours can extend into a child’s life outside the home.
In this article, we’ll explain why home values are important for child development, and how they affect early childhood education.
How are home values developed and defined in childhood?
A child’s home environment is where they first learn basic human skills. This can happen by teaching, imitation, experimentation through play, and so on. For example, language is learned at home by listening, imitating and practicing.
In some cultures, sitting at the table when we eat, saying grace before eating, shaking hands with new people we meet, or kissing guests on both cheeks can also be learned behaviours. These are also values. That is, culturally, the family places significant importance on these rituals and habits. Without words being necessary, they communicate messages to others, such as acceptance, being grateful or respect for a social situation or person.
Being polite, respecting elders, cleaning up after play, helping with chores, saying things like ‘I love you,’ ‘please’ and ‘thank you’ are all values that children learn at home, before they enter society (so to speak), and go to daycare, preschool or kindergarten.
As we can see, these ‘home values’ have a lot to do with the way a child is raised. For example, if children are taught by caregivers about money skills early in life, or taught to solve problems, be independent, build self-confidence, get along with others, eat healthy, tell the truth and so on, these become a person’s values later in life.
Education is another example of a value. In many cultures and households, getting a good education is one of the topmost home values. Kids are encouraged to develop literacy in early childhood, and are told that they may be doctors or lawyers one day. When kids reach grade school, parents may require that homework is finished before they can play. That is a value. Meaning: education is more important than having fun.
Conversely, if a child is never taught to be thankful or polite, they may have problems forming relationships when they enter society. Of course, it's expected that kids will fight over toys, but at some point, kids should be learning to take turns, wait their turn, share, or otherwise place more value on the friendship than on the object.
There can also be problems with values when a child’s home environment is not a nurturing one. According to the UN, a child, “for the full and harmonious development of his or her personality, should grow up in a family environment, in an atmosphere of happiness, love and understanding.”
And, according to this article on the Unicef website, a child should be taught:
Socio-emotional and cognitive competencies and providing directions and guidance in daily life. Providing a safe and stimulating home environment, which allows children to play, explore and discover, is a critical piece of this process and can exponentially increase a child’s chances of flourishing, attaining an optimal level of development and later becoming a responsible and productive adult.
The Unicef article also mentions the following, which emphasize even more our point that home values are actively formed in early childhood:
Caregivers’ effective and responsive care in the first five years of life, includes daily parental guidance, responsive and adequate feeding practices, appropriate caregiver-child interactions (positive emotionality, sensitivity, and responsiveness towards the child, avoidance of harsh verbal or physical punishment). All these practices represent forms of family investments into children’ long term well-being.
And so, how caregivers act as models to their children, can also impact a child’s values, since they stem from their home environment.
How can early childhood educators use home values in their preschool curriculum?
On the one hand, early childhood educators can be thought of as teachers of facts. You know: colours, shapes, numbers, letters and the typical, age-appropriate school subjects for a young group of tots. However, early childhood education – and even education as a whole – can be thought of in a much wider sense.
The point of any educational system is to prepare pupils to be functioning members of society as adults. So, with that in mind, some educational theories emphasize that the job of a teacher is to build up the cognitive and emotional development of a child, too. In fact – even physical development and health is taught at school. Think of P.E. class, for instance.
In early childhood education, we have the important role of starting children off, and preparing them for the formal school environment. So at this stage, social skills, which encompass a person’s values, are really important (as are many other child development milestones!).
It is therefore beneficial for a preschool teacher to consider how they can reinforce home values, or bring positive values into the home. This of course takes coordination and open lines of communication with parents.
For example, at our daycare, we teach children about recycling. This gets told to the parents, who encourage children to bring in their recycling to the classroom. Parents can reinforce this value at home, by teaching children where the recycling goes.
We also host grandparents days. We let the parents know how to prepare for these events, and it is a mutual way for us to work together as a community to show children that we should show respect, love and admiration to our grandparents.
On the other hand, we may teach a child about cleaning up after eating. The child can use this behavioural value at home, even if they typically aren’t expected to do so in that environment.
Imagine how impressed a parent might be if they see their child throwing their snack wrapper in the garbage at home, without being asked! But in reality, these habits won't be as effective as when daycare, preschool or kindergarten are aligned with home values. It takes a village to raise a child, and to show kids that we all appreciate and expect these positive behaviours, while we discourage the negative ones.
The thing about values, though, is that we can’t always say they are right or wrong. Obviously, abusiveness and imbalanced parenting should not be considered valuable to our society. But in a lot of cases, values can be cultural. They can be based on religion practiced in the home, or traditions that are common in a country where parents may have originated from.
Educators need to be aware of this, since part of their job is to emphasize and reinforce the positive home values, which may or may not be different from the ones they grew up with.
So, some ways of being inclusive – as well as practical – when using home values in early childhood education curriculums are:
- Exposing children to different cultures and customs, by working them into the curriculum calendar. For example: show children not just what Christmas is like, but also Hanukkah and Diwali (especially if there are children from those cultures in your classroom, or your neighbourhood includes these cultures).
- Teaching children to say thank you and show appreciation for workers around the school who do important jobs, but who could be from a different socio-economic class or culture. For example, you can find many heartwarming, tear-jerking videos of classrooms thanking and surprising their school janitor. It's a great idea to pass on! Don't forget your garbage pickup and recycling guys, the firemen, the secretary, and whomever else does important work in the neighbourhood!
- Teach phrases and words in different languages. Canada is so multicultural, it’s hard to not find a language spoken in this country. However, especially if your classroom has pupils who speak languages other than English at home, you can teach words from those languages. Or, ask the kids who speak other languages to share them with the other kids. Learn to say hello in Chinese, Hebrew, Punjabi, Arabic, Japanese, Korean, French, Spanish, sign language and any other language group represented in the class. Then, move on to, ‘how are you?’ and ‘thank you’ and any other phrase the class wants to learn.
- Don’t force kids to do activities that may go against their home values. This can be an extremely sensitive topic, and one that needs to be determined case-by-case. But for example, some families don’t celebrate holidays like Christmas or Halloween for their own reasons. That’s ok: it doesn’t hurt anybody to not do these activities. If the child is ok with it, they can paint stars and planets instead of doing a Christmas craft. If a child’s home values go against eating meat, be accommodating, and avoid any ‘annoyed’ looks when it comes to snack prep. And so on. Kids should not feel like being different means that something is wrong with them. Teach the class to be accepting, too.
To conclude: home values are an important part of early childhood education and development
As we’ve seen above, home values can be important cultural learning points for how we behave in society. They are learned early in life, along with skills like language and physical and cognitive development. Children see and hear values modelled to them too – so parents and caregivers need to be attuned to how they act, since it affects how children act, too.
Home values can also change from household to household, but that doesn’t mean there is necessarily a right or wrong set of values we all need to follow, with the exception of the ones that are obviously inappropriate, and which our societies should protect children from.
Early childhood educators can reinforce common home values by including them in a preschool classroom schedule and learning objective. These can be as simple as learning to listen while a teacher or adult is talking, or cleaning up after eating. It can also be inclusive by incorporating traditions and language learning from other cultures. And, most importantly, teachers can demonstrate values of acceptance and accommodation towards many types of people, for their early learners.
- Letter case – Wikipedia: Letter case (or just case) is the distinction between the letters that are in larger upper case (also uppercase, capital letters, capitals, caps, large letters, or, more formally, majuscule) and smaller lower case (also lowercase, small letters, or, more formally, minuscule) in the written representation of certain languages.
- When to Use Capital Letters – dummies: In English grammar, you need to know when to capitalise words. Sometimes the capital letter signifies the part of a sentence or simply indicates someone's name (proper nouns). Use capital letters for specific names: the names of people, places, and brands (Bill, Mrs. Jones, River Dee, Burberry).
- Do CAPITAL LETTERS Matter in Email Addresses?: The suggestion from Zive and Kiwi for Gmail is to use capital letters sparingly in email addresses, stationery, business cards, and other print materials, to help human beings see the words inside your email address, such as "[email protected]".
- When do job titles need capital letters? – writing-skills.com
- How to Use Capital Letters – ESL Library
- Your Name In Capital Letters Exposed – Expose 1933: Cites a definition from Black's Law Dictionary, Sixth Edition, page 211, claiming that when a name is put in ALL CAPITAL LETTERS this signifies "the highest or most comprehensive loss of status. This occurred when a man's condition was changed from one of freedom to one of bondage, when he became a slave."
- How to Do Capital Letters When Logging In to PS3 & Netflix – Seth Amery, September 15, 2017: In addition to playing games on the PlayStation 3 console by Sony, you can make use of the television and video services available, such as the Netflix application.
- Avoid Capital Offenses When Using Job Titles – Daily Writing Tips, by Mark Nichol: When it comes to mechanical aspects of writing, few details seem to trip writers up as much as capitalization: when to use uppercase letters, and when to use lowercase letters.
- Use capital letters in the following ways: the first words of a sentence ("When he tells a joke, he sometimes forgets the punch line.") and the pronoun "I".
- How To Use Capital Letters – Lexico Dictionaries: Capitalization rules tend to vary by language and can be quite complicated. It is widely understood that the first word of a sentence and all proper nouns are ...
- Punctuating with capital letters – UNE: Students tend to use capital letters for everything that feels important to them. ... emphasis in your formal academic writing (use underlining or italics for that).
- Why do we use a capital for 'I' but not 'me'? – EF English Live: We have rules for using capital letters, but 'I' doesn't really follow them. It's easy to remember to use a capital letter at the beginning of a sentence, for proper ...
- When Do You Use Capital Letters in English? – Kaplan International
- Capitalization After Colons – Grammarly
- Where do I use capital letters? – YouTube
- Netiquette of Capitalization: How Caps Became Code for ...: 1) using CAPITAL LETTERS to make words look "louder", 2) using *asterisks* to put sparklers around emphasized words, and 3) s p a c i n g words o u t, possibly accompanied by 1) or 2).
- Do Capital Letters Matter in Email Addresses? – Wilson Media: The process of capitalizing letters at the beginning of words in email and website addresses is known as Camel Notation, and it is very useful in marketing materials: it provides a visual cue to the beginning of each word, which makes the email or website address easier to read.
- Chapter 10: Capitalization: Chapter Quiz – ClassZone: For more information on capitalization, see Language Network, Chapter 10, pages 228–247. ... Choose the answer that shows capital letters used correctly. (title of a ...
Enhancing learning through Sport at Boundary Oak School
Physical activity is essential for the balanced development of young people, fostering their physical, social and emotional health. The benefits of sport reach beyond the impact on physical well-being and the value of the educational benefits of sport should not be under-estimated.
Academic learning and sports education complement each other; they are two sides of the same coin. Through participation in sport and physical education, young people learn about the importance of key values such as: honesty, teamwork, fair play, respect for themselves and others, and adherence to the rules.
It also provides a forum for young people to learn how to deal with competition and how to cope with both winning and losing. The world of sport mirrors how one can play the game of life. Good athletes stay in the game and play their best even when they are losing. They know they will win some and lose some. Children must learn that winning and losing are both temporary, and that they can’t give up or quit. Education, life achievements, contributions to the arts, sciences, business, and government involve comparable determination and self-discipline.
To achieve broader goals in education and development, the Boundary Oak Sports Programme focuses on the development of the individual and not only on the development of technical sports skills.
Some children are natural athletes, while others are less physically coordinated. Sport and athletic activities build confidence in both types of child. For the well-coordinated, the discipline of developing skills gives a sense of improvement and accomplishment. Winning games and moving to higher levels of competition enable these children to sense their personal progress.
Children with less coordination can begin with less competitive sports at first, or with activities in which they can improve on their own past personal best.
Physical Education naturally builds healthy habits that encourage life-long participation in sport. This extends the impact of physical education beyond the schoolyard and highlights the potential impact of physical education on health in later life. At Boundary Oak School we believe that ‘a sound mind dwells only in a sound body’.
SCIENCE & TECH / ENVIRONMENT
Topic: General Studies 3:
- Science and Technology- developments and their applications and effects in everyday life.
- Conservation, environmental pollution and degradation
Technology and Conservation: Elephants counted from Space
Context: Scientists are using very high-resolution satellite imagery to count and detect wildlife species, including African elephants.
A team of researchers from the University of Oxford Wildlife Conservation Research Unit and Machine Learning Research Group detected elephants in South Africa from space using Artificial Intelligence, with an accuracy comparable to human detection capabilities.
So, how did scientists track the elephants?
- Earlier Methodology relied on manned aircrafts: Before researchers developed the new technique, one of the most common survey methods to keep a check on elephant populations in savannah environments involved aerial counts undertaken from manned aircraft.
- Limitations of earlier method: However, this method does not deliver accurate results, since observers on aircraft are prone to exhaustion, are sometimes hindered by poor visibility, and may even succumb to bias. Further, aerial surveys are costly and logistically challenging.
- Satellite Imagery Utilized: To test the new method, researchers chose the Addo Elephant National Park in South Africa, which has a high concentration of elephants. Researchers used the highest-resolution satellite imagery currently available, called WorldView-3.
- Leveraging Artificial Intelligence Technology: At first, the satellite images appear to be of grey blobs in a forest of green splotches – but, on closer inspection, those blobs are revealed as elephants wandering through the trees. All the laborious elephant counting is done via machine learning – a computer algorithm trained to identify elephants in a variety of backdrops (a minimal sketch of this tile-based idea follows below).
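To make the idea concrete, here is a minimal, hypothetical Python sketch of tile-based detection over a satellite scene. It is not the researchers' actual pipeline: a trained CNN would replace the toy brightness-threshold classifier, and the tile size, threshold, and synthetic scene are all assumptions for illustration.

```python
import numpy as np

TILE = 64  # tile size in pixels (assumption)

def classify_tile(tile: np.ndarray) -> bool:
    """Stand-in for a trained detector: flags unusually dark tiles,
    loosely mimicking the grey 'blobs' described above."""
    return tile.mean() < 80  # arbitrary demo threshold

def count_detections(scene: np.ndarray) -> int:
    """Slide a non-overlapping window over the scene, count flagged tiles."""
    hits = 0
    h, w = scene.shape
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            if classify_tile(scene[y:y + TILE, x:x + TILE]):
                hits += 1
    return hits

rng = np.random.default_rng(0)
scene = rng.integers(100, 200, size=(512, 512)).astype(float)  # bright background
scene[128:192, 256:320] = 40  # plant one dark 64x64 "elephant" blob
print(count_detections(scene))  # prints 1
```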
Significance of using Satellite & AI Technology in counting Elephants
- Accurate Counts Improve Conservation: In order to conserve the species, it is important for scientists to track elephant populations. Inaccurate counts can lead to misallocation of conservation resources, which are already limited, and to misreading of population trends.
- Helps arrest Declining Population: The population of African elephants has plummeted over the last century due to poaching, retaliatory killing from crop-raiding and habitat fragmentation. The scientists say better counting & monitoring could be used in anti-poaching work.
- Useful in International borders: This approach of using satellites and AI could vastly improve the monitoring of threatened elephant populations in habitats that span international borders, where it can be difficult to obtain permission for aircraft surveys.
- Cost effective: Scientists used satellite imagery that required no ground presence to monitor the elephants. The breakthrough could allow up to 5,000 sq km of elephant habitat to be surveyed on a single cloud-free day.
- Suited to Pandemic Situations: Since these images are captured from space, there is no need for anyone on the ground, which is particularly helpful during the coronavirus pandemic.
Did You Know?
- But, this is not the first study of its kind to initiate tracking of elephants using satellites. In 2002, Smithsonian scientists started using geographic information systems (GIS) technology to understand how they could conserve Asian elephants.
- At the time, scientists launched the first satellite-tracking project on Asian elephants in Myanmar.
Hashing is the process of converting data into a code that is beyond recognition. It is an irreversible process. To hash data, the input is passed through a hash function, which uses an algorithm to compute the corresponding hash value. Some of the most popular hashing algorithms are MD4, MD5, and SHA (though MD4 and MD5 are no longer considered secure for cryptographic use).
Real-world implementation of hashing:
Now you must be wondering why and how hashing is used in the real world. Data is hashed to protect it from being understood by anyone else. For example:
Most websites now hash passwords before storing them in their database. If a website's server gets hacked, or some sort of data breach happens, the users' passwords could otherwise be learned by the attackers. And even if no such thing happens, plaintext passwords could still be seen by the people administering the website, which creates a privacy risk. This is where password hashing plays its role: when a user signs up, the website hashes the plain password and stores only the hash in the database. Whether or not the server is ever compromised, neither hackers nor the website's administrators can learn the original plain password; they only get the hashed password. And since hashing is an irreversible process, no one can convert the hash back into the original password.

But what if many people keep the same password? Then their hashed passwords will obviously be the same as well. If a hacker then obtains one user's password, through social engineering or brute force, they can easily crack all the matching hashed passwords (of the people having the same password). To protect against this, a method named "Salting" was introduced.

Salting: Salting is a simple technique that adds an extra layer of security to password hashing by appending a different, unique value to each password, so that a different hash is generated for every user. A minimal sketch of both ideas in code follows.
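This is a minimal Python sketch of salted password hashing, using the standard library's PBKDF2. In production, a dedicated password hasher such as bcrypt, scrypt, or Argon2 is usually preferred; this is only a demonstration of the salting idea, and all names here are illustrative.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # work factor: slows down brute-force attempts

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). Each user gets a fresh random salt, so two
    users with the same password end up with different digests."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-hash the candidate with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)

salt1, h1 = hash_password("hunter2")
salt2, h2 = hash_password("hunter2")  # same password, different user
print(h1 != h2)                        # True: salting yields distinct hashes
print(verify_password("hunter2", salt1, h1))  # True
```

Note that the database stores the salt alongside the hash; the salt is not a secret, it only ensures that identical passwords do not share identical hashes.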
Antarctic krill population contracts southward as polar oceans warm
21 January 2019
The population of Antarctic krill, the favourite food of many whales, penguins, fish and seals, shifted southward during a recent period of warming in their key habitat, new research shows.
Antarctic krill are shrimp-like crustaceans which occur in enormous numbers in the cold Southern Ocean surrounding Antarctica. They have a major role in the food web and play a significant role in the transport of atmospheric carbon to the deep ocean.
Important krill habitats are under threat from climate change, and this latest research – published today in Nature Climate Change – has found that their distribution has contracted towards the Antarctic continent. This has major implications for the ecosystems that depend on krill.
An international team of scientists, led jointly by Dr Simeon Hill at the British Antarctic Survey and Dr Angus Atkinson at PML, analysed data on the amount of krill caught in nets during scientific surveys. The data covered the Scotia Sea and Antarctic Peninsula – the region where krill are most abundant. The team found that the centre of the krill distribution has shifted towards the Antarctic continent by about 440 km (4° latitude) over the last four decades.
The team took great care to account for background noise in the data. Many factors, in addition to long-term change, influence the amount of krill caught in any one net. Even after accounting for these factors the team found a consistent trend throughout the data, indicating a substantial change in the krill population over time.
The study provides support for a proposed mechanism behind these changes - an increasingly unfavourable climate leading to fewer young krill replenishing the population. This has led to a smaller population dominated by older and larger krill.
Simeon Hill said: “Our databases on abundance and population structure reveal a species facing increasing difficulty in replenishing itself and maintaining high numbers at the northern edge of the Southern Ocean”.
“These northern waters have warmed and conditions throughout the Scotia Sea have become more hostile, with stronger winds, warmer weather and less ice. This is bad news for young krill.”
Angus Atkinson added: “This is a nice example of international cooperation in Antarctica. It is only when we put all our data together that we can look at the large scales of space and time to learn how populations of key polar species are responding to rapid climate change.”
Simeon Hill continued: “The surveys which provided these data weren’t intended to monitor change over large spatial scales or over 90 years. The fact that we see a signal amongst all of this noise is an indication of how much the population has changed over time. These changes appear to be driven by the global climate. Continued precautionary management of the krill fishery is important, but is no substitute for global action on climate change.”
The field of Civil War history has produced more interpretative disputes than most historical events. Next to debates about the causes of the war, arguments about why the North won, or why the Confederacy lost (the difference in phraseology is significant), have generated some of the most heated but also most enlightening recent scholarship. The titles of four books reveal just some of the central themes of this argument: Why the North Won the Civil War (1960); How the North Won (1983); Why the South Lost the Civil War (1986); Why the Confederacy Lost (1992).
Answers to these why and how questions fall into two general categories: external and internal. External interpretations usually phrase the question as Why did the North win? They focus on a comparison of Northern and Southern population, resources, economic capacity, leadership, or strategy, and conclude that Northern superiority in one or more of these explains Union victory. Internal explanations tend to ask, Why did the South lose? They focus mainly or entirely on the Confederacy and argue that internal divisions, dissensions, or inadequacies account for Confederate defeat.
The most durable interpretation is an external one. It was offered by General Robert E. Lee himself in a farewell address to his army after its surrender at Appomattox: “The Army of Northern Virginia has been compelled to yield to overwhelming numbers and resources.”[1] This explanation enabled Southern whites to preserve their pride, to reconcile defeat with their sense of honor, even to maintain faith in the nobility of their cause while admitting that it had been lost. The Confederacy, in other words, was compelled to surrender not because its soldiers fought badly, or lacked courage, or suffered from poor leadership, or because its cause was wrong, but simply because the enemy had more men and guns. The South did not lose; Confederates wore themselves out whipping the Yankees and collapsed from glorious exhaustion. This interpretation became the mainstay of what has been called the Myth of the Lost Cause, which has sustained Southern pride in their Confederate forebears to this day. As one Virginian expressed it:
They never whipped us, Sir, unless they were four to one. If we had had anything like a fair chance, or less disparity of numbers, we should have won our cause and established our independence.[2]
In one form or another, this explanation has won support from scholars of Northern as well as Southern birth. In 1960 the historian Richard Current provided a succinct version of it. After reviewing the statistics of the North’s “overwhelming numbers and resources”—two and a half times the South’s population, three times its railroad capacity, nine times its industrial production, and so on—Current concluded that “surely, in view of the disparity of resources, it would have taken a miracle…to enable the South to win. As usual, God was on the side of the heaviest battalions.”[3]
In 1990 Shelby Foote expressed this thesis in his inimitable fashion. Noting that many aspects of life in the North went on much as usual during the Civil War, Foote told Ken Burns on camera in the PBS documentary The Civil War that “the North fought that war with one hand behind its back.” If necessary “the North simply would have brought that other arm out from behind its back. I don’t think the South ever had a chance to win that war.”[4]
At first glance, Current’s and Foote’s statements seem plausible. But upon reflection, a good many historians have questioned their explicit assertions that overwhelming numbers and resources made Northern victory inevitable. If that is true, the Confederate leaders who took their people to war in 1861 were guilty of criminal folly or colossal arrogance. They had read the census returns. They knew as much about the North’s superiority in men, resources, and economic capacity as any modern historian. Yet they went to war confident of victory. Southern leaders were students of history. They could cite many examples of small nations that won or defended their independence against much more powerful enemies: Switzerland against the Hapsburg Empire; the Netherlands against Spain; Greece against the Ottomans. Their own ancestors had won independence from mighty Britain in 1783. The relative resources of the Confederacy vis-à-vis the Union in 1861 were greater than those of these other successful rebels.
The Confederacy waged a strategically defensive war to protect from conquest territory it already controlled and to preserve its armies from annihilation. To “win” that kind of war, the Confederacy did not need to invade and conquer the North or destroy its army and infrastructure; it needed only to hold out long enough to compel the North to the conclusion that the price of conquering the South and annihilating its armies was too great, as Britain had concluded with respect to the United States in 1781—or, for that matter, as the United States concluded with respect to Vietnam in 1972. Until 1865, cold-eyed military experts in Europe were almost unanimous in their conviction that Union armies could never conquer and subdue the 750,000 square miles of the Confederacy, as large as all of Western Europe. “No war of independence ever terminated unsuccessfully except where the disparity of force was far greater than it is in this case,” pronounced the military analyst of the London Times in 1862. “Just as England during the revolution had to give up conquering the colonies so the North will have to give up conquering the South.”[5]
Even after losing the war, many ex-Confederates stuck to this belief. General Joseph E. Johnston, one of the highest-ranking Confederate officers, insisted in 1874 that the Southern people had not been “guilty of the high crime of undertaking a war without the means of waging it successfully.”[6] A decade later General Pierre G.T. Beauregard, who ranked just below Johnston, made the same point: “No people ever warred for independence with more relative advantages than the Confederates.”[7]
If so, why did they lose the war? In thinly veiled terms, Johnston and Beauregard blamed the inept leadership of Jefferson Davis. That harried gentleman responded in kind; as far as he was concerned, the erratic and inadequate generalship of Beauregard and especially Johnston was responsible for Confederate defeat. In the eyes of many contemporaries—and historians—there was plenty of blame to go around. William C. Davis’s Look Away! is the most recent “internal” study of the Confederacy that, by implication at least, attributes Confederate defeat to poor leadership at several levels, both military and civilian, as well as factionalism, dissension, and bickering between men with outsize egos and thin skins. In this version of Confederate history, only Robert E. Lee and Stonewall Jackson remain unstained.
For any believer in the Myth of the Lost Cause, any admirer of heroic Confederate resistance to overwhelming odds, the story told by Davis (no relation to the Confederate president) makes depressing reading. It is a story of conflicts not on the battlefields of Manassas or Shiloh or Gettysburg or Chickamauga or the Wilderness—they are here, but offstage, as it were—but conflicts between state governors and the Confederate government in Richmond, between quarreling Cabinet officers, between Jefferson Davis and prominent generals or senators or newspaper editors and even his vice-president, Alexander Stephens. Davis chronicles different examples of internal breakdown under the stresses not only of enemy invasion but also of slave defections to the Yankees, of Unionist disloyalty in the upcountry, particularly in such states as Tennessee, of galloping inflation and the inability of an unbalanced agricultural society under siege to control it, of shortages and hunger and a growing bitterness and alienation among large elements of the population.
These problems seemed more than sufficient to ensure Confederate failure, but they were greatly exacerbated by the jealousies and rivalries of Confederate politicians, which remain Davis’s principal focus. He does not explicitly address the question of why the Confederacy lost, but his implicit answer lies in the assertion that “the fundamental flaw in too many of the big men of the Confederacy… [was] ‘big-man-me-ism.'”
There are, however, two problems with this interpretation. In two senses it is too “internal.” First, by concentrating only on the Confederacy it tends to leave the reader with the impression that only the Confederacy suffered from these corrosive rivalries, jealousies, and dissensions. But a history of the North during the Civil War would reveal similar problems, mitigated only by Lincoln’s skill in holding together a diverse coalition of Republicans and War Democrats, Yankees and border states, abolitionists and slaveholders—which perhaps suggests that Lincoln was the principal reason for Union victory. In any event, Look Away! is also too “internal” because the author is too deeply dependent on his sources. It is the nature of newspaper editorials, private correspondence, congressional debates, partisan speeches, and the like to emphasize conflict, criticism, argument, complaint. It is the squeaky wheel that squeaks. The historian needs to step back and gain some perspective on these sources, to recognize that the well-greased wheel that turns smoothly also turns quietly, leaving less evidence of its existence available to the historian.
Look Away! falls within one tradition of internal explanations for Confederate defeat. More prevalent, especially in recent years, have been studies that emphasize divisions and conflicts of race, class, and even gender in the South. Two fifths of the Confederate population were slaves, and two thirds of the whites did not belong to slaveholding families. What stake did they have in an independent Confederate nation whose original raison d’être was the protection of slavery? Not much stake at all, according to many historians, especially for the slaves and, as the war took an increasing toll on non-slaveholding white families, very little stake for them either. Even among slaveholding families, the women who willingly subscribed to an ethic of sacrifice in the war’s early years became disillusioned as the lengthening war robbed them of husbands, sons, lovers, and brothers. Many white women turned against the war and spread this disaffection among their menfolk in the army; in the end, according to Drew Gilpin Faust, “it may well have been because of its women that the South lost the Civil War.”[8]
If all this is true—if the slaves and some nonslaveholding whites opposed the Confederate war effort from the outset and others including women of slaveholding families eventually turned against it, one need look no further to explain Confederate defeat. In The South vs. the South, however, William W. Freehling does not go this far. He says almost nothing about women as a separate category, and he acknowledges that many nonslaveholding whites had a racial, cultural, and even economic stake in the preservation of slavery and remained loyal Confederates to the end. But he maintains that, properly defined, half of all Southerners opposed the Confederacy and that this fact provides a sufficient explanation for Confederate failure.
Freehling defines the South as all fifteen slave states and Southerners as all people—slave as well as free—who lived in those states. This distinction between “the South” and the eleven slave states that formed the Confederacy is important but too often disregarded by those who casually conflate the South and the Confederacy. Admittedly, some 90,000 white men from the four Union slave states (Kentucky, Missouri, Maryland, and Delaware) fought for the Confederacy, but this number was offset by a similar number of whites from Confederate states (chiefly Tennessee and the part of Virginia that became West Virginia) who fought for the Union.
But Freehling’s central thesis that “white Confederates were only half the Southerners” raises problems. This arithmetic works only if virtually all black Southerners are counted against the Confederacy. At times Freehling seems to argue that they should be so counted. At other times he is more cautious, maintaining that “the vast majority” of Southern blacks “either opposed the rebel cause or cared not whether it lived or died.” Freehling does not make clear how important he considers that qualifying “or cared not.” In any event, let us assume that all three million slaves who remained in the Confederacy (as well as the one million in the border states and in conquered Confederate regions) sympathized with the Union cause that would bring them freedom. Nevertheless, their unwilling labor as slaves was crucial to the Confederate economy and war effort, just as their unwilling labor and that of their forebears had been crucial to building the antebellum Southern economy. These Confederate slaves worked less efficiently than before the war because so many masters and overseers were absent at the front. Unwilling or not, however, they must be counted on the Confederate side of the equation, which significantly alters Freehling’s 50/50 split of pro- and anti-Confederates in the South to something like 75/25.
Freehling draws on previous scholarship to offer a succinct narrative of the political and military course of the war, organized around Lincoln’s slow but inexorable steps toward emancipation, “hard war,” and the eventual mobilization of 300,000 black laborers and soldiers to work and fight for the Union. This narrative is marred by several errors, including the repeated confusion of General Charles F. Smith with General William F. “Baldy” Smith, the conflation of combat casualties with combat mortality, the mislabeling of a photograph of Confederate trenches at Fredericksburg as Petersburg, and the acceptance at face value of Alexander Stephens’s absurd claim, made five years after Lincoln’s death, that the Union president had urged him in 1865 to persuade Southern states to ratify the Thirteenth Amendment “prospectively,” thereby delaying the abolition of slavery five years. Nevertheless, Freehling has made a strong case for the vital contribution of the two million whites and one million blacks in the South who definitely did support the Union cause. Without them, “the North” could not have prevailed, as Lincoln readily acknowledged.
Freehling does not take a clear stand on the question of whether Union victory was inevitable. At times he seems to imply that it was, because the half of all Southerners whom he claims supported the Union (actively or passively) doomed the Confederacy. But at other times he suggests that this support was contingent on the outcome of military campaigns and political decisions. No such ambiguity characterizes the essays in Gary Gallagher’s Lee and His Army in Confederate History. In this book and in his earlier The Confederate War, Gallagher has argued forcefully and convincingly that Confederate nationalism bound most Southern whites together in determined support for the Confederate cause, that the brilliant though costly victories of Robert E. Lee’s Army of Northern Virginia reinforced this determination, and that morale even in the face of defeat and the destruction of resources in 1864–1865 remained high until almost the end.
Gallagher does not slight the problems of slave defections to the Yankees, class tensions among whites, personal rivalries and jealousies among Confederate leaders, and other internal divisions that have occupied historians who see these problems as preordaining defeat. But he emphasizes the degree of white unity and strength of purpose despite these faultlines. Plenty of evidence exists to support this emphasis. A Union officer who was captured at the Battle of Atlanta on July 22, 1864, and spent the rest of the war in Southern prisons wrote in his diary on October 4 that from what he had seen in the South “the End of the War…is some time hence as the Idea of the Rebs giving up until they are completely subdued is all Moonshine they submit to privatations that would not be believed unless seen.”9
“Until they are completely subdued.” That point came in April 1865, when the large and well-equipped Union armies finally brought the starving, barefoot, and decimated ranks of Confederates to bay. Gallagher revives the overwhelming numbers and resources explanation for Confederate defeat, shorn of its false aura of inevitability. Numbers and resources do not prevail in war without the will and skill to use them. The Northern will wavered several times, most notably in response to Lee’s victories in the summer of 1862 and winter–spring of 1863 and the success of Lee’s resistance to Grant’s offensives in the spring and summer of 1864. Yet Union leaders and armies were learning the skills needed to win, and each time the Confederacy seemed on the edge of triumph, Northern victories blunted the Southern momentum: at Sharpsburg, Maryland, and Perryville, Kentucky, in the fall of 1862; at Gettysburg and Vicksburg in July 1863; and at Atlanta and in Virginia’s Shenandoah Valley in September 1864. Better than any other historian of the Confederacy, Gallagher understands the importance of these contingent turning points that eventually made it possible for superior numbers and resources to prevail. He understands as well that the Confederate story cannot be written except in counterpoint with the Union story, and that because of the multiple contingencies in these stories, Northern victory was anything but inevitable.
Much of the best scholarship on the Civil War during the past decade has concentrated on the local or regional impact of the war. A fine example is Brian Steel Wills’s The War Hits Home, a fascinating account of the home front and battle front in southeastern Virginia, especially the town of Suffolk and its hinterland just inland from Norfolk. No great battles took place here, but there was plenty of skirmishing and raids by combatants on both sides. Confederates controlled this region until May 1862, when they were compelled to pull back their defenses to Richmond. Union forces occupied Suffolk for the next year, staving off a halfhearted Confederate effort to recapture it in the spring of 1863. The Yankees subsequently fell back to a more defensible line nearer Norfolk, leaving the Suffolk region a sort of no man’s land subject to raids and plundering by the cavalry of both armies.
Through it all most white inhabitants remained committed Confederates, while many of the slaves who were not removed by their owners to safer territory absconded to the Yankees, adding their weight to the Union side of the scales in the balance of power discussed by Freehling. White men from this region fought in several of Lee’s regiments, suffering casualties that left many a household bereft of sons, husbands, fathers. Yet their Confederate loyalties scarcely wavered.
Northern occupation forces at first tried a policy of conciliation, hoping to win the Southern whites back to the Union. When this failed, they moved toward a harsher policy here as they did elsewhere, confiscating the property and liberating the slaves of people they now perceived as enemies to be crushed rather than deluded victims of secession conspirators to be converted.
Wills does not make a big point of it, but his findings stand “in sharp rebuttal” to the arguments of historians who portray a weak or divided white commitment to the Confederate cause as the reason for defeat. “These people sought to secure victory until there was no victory left to win.” In the end the North did have greater numbers and resources, wielded with a skill and determination that by 1864–1865 matched the Confederacy’s skills and determination; and these explain why the North won the Civil War.
June 13, 2002
1. The Wartime Papers of R.E. Lee, edited by Clifford Dowdey and Louis H. Manarin (Little, Brown, 1961), p. 934.
2. Quoted in Why the North Won the Civil War, edited by David Donald (Louisiana State University Press, 1960), p. ix.
3. Richard N. Current, "God and the Strongest Battalions," in Why the North Won the Civil War, p. 22.
4. "Men at War: An Interview with Shelby Foote," in Geoffrey C. Ward with Ric Burns and Ken Burns, The Civil War (Knopf, 1990), p. 272.
5. London Times, August 29, 1862.
6. Joseph E. Johnston, Narrative of Military Operations (Appleton, 1874), p. 421.
7. Pierre G.T. Beauregard, "The First Battle of Bull Run," in Battles and Leaders of the Civil War, 4 volumes, edited by Robert U. Johnson and Clarence C. Buel (Century, 1887), Vol. 1, p. 222.
8. Drew Gilpin Faust, "Altars of Sacrifice: Confederate Women and the Narratives of War," The Journal of American History, Vol. 76, No. 4 (March 1990), p. 1228.
9. "The Civil War Diary of Colonel John Henry Smith," edited by David M. Smith, Iowa Journal of History, Vol. 47 (April 1949), p. 164.
The British Empire annexed modern-day Myanmar in three stages over a six-decade span (1824–1885). It administered Myanmar as a province of British India until 1937, and as a separate colony until 1948. During the British colonial period, English was the medium of instruction in higher education, although it did not replace Burmese as the vernacular. English was the medium of instruction in universities and two types of secondary schools: English schools and Anglo-Vernacular schools (where English was taught as a second language). Burmese English resembles Indian English to a degree because of historical ties to India during British colonization.
On 1 June 1950, a new education policy was implemented, replacing English with Burmese as the medium of instruction at all state schools, although universities, which continued to use English as the medium of instruction, were unaffected. English began to be taught as a second language from the Fifth Standard. Until 1965, English was the language of instruction at Burmese universities. In 1965, Burmese replaced English as the medium of instruction at the university level, with the passing of the New University Education Law the previous year. English language education was reintroduced in 1982. Currently, English is taught from Standard 0 (kindergarten) as a second language. Since 1991, in the 9th and 10th Standards, English and Burmese have both been used as the medium of instruction, particularly in science and math subjects, which use English-language textbooks. Because of the emphasis placed on reading and writing, many Burmese are better able to communicate in written English than in spoken English.
The preferred system of spelling is based on that of the British, although American English spellings have become increasingly popular. Because Adoniram Judson, an American, created the first Burmese-English dictionary, many American English spellings are common (e.g. color, check, encyclopedia). The ⟨-ize⟩ spelling is more commonly used than the ⟨-ise⟩ spelling.
Burmese English is often characterised by its unaspirated consonants, similar to Indian English. It also borrows words from standard English and uses them in a slightly different context. For instance, "pavement" (British English) or "sidewalk" (US English) is commonly called "platform" in Burmese English. "Stage show" is also preferred over "concert."
For units of measurement, Burmese English uses both Imperial units and SI units interchangeably, but the values correspond to the SI system. Burmese English continues to use Indian numerical units such as lakh and crore.
Burmese names represented in English often include various honorifics, most commonly "U", "Daw", and "Sayadaw". For older Burmese who have only one or two syllables in their names, these honorifics may be an integral part of the name.
In Burmese English, the k, p, and t consonants are, as a general rule, unaspirated (pronounced /k/, /p/, /t/), as in Indian English. The following are commonly seen pronunciation differences between Standard English and Burmese English:
| Standard English | Burmese English | Remarks |
| --- | --- | --- |
| /ɜː/ (e.g. further, Burma) | /á/ | Pronounced with a high tone (drawn-out vowel), as in Burmese |
| Word-final /aʊ/ (e.g. now, brow) | /áuɴ/ | Pronounced with a nasal final instead of an open vowel |
| Word-final /aɪ/ (e.g. pie, lie) | /aiɴ/ | Pronounced with a nasal final instead of an open vowel |
| /tj/ (e.g. tuba) | /tɕu/ | e.g. "tuition," commonly pronounced [tɕùʃìɴ] |
| /sk/ (e.g. ski) | /sək-/ | Pronounced as 2 syllables |
| /st/ (e.g. star) | /sət-/ | Pronounced as 2 syllables |
| /pl/ (e.g. plug) | /pəl/ | Pronounced as 2 syllables |
| /sp/ (e.g. spoon) | /səp/ | Pronounced as 2 syllables |
| /v/ (e.g. vine) | /b/ | |
| /ɪŋk/ (e.g. think) | /ḭɴ/ | Pronounced with a short, creaky tone (short vowel) |
| /ɪŋ/ (e.g. thing) | /iɴ/ | Pronounced as a nasal final |
| Consonantal finals (e.g. stop) | /-ʔ/ | Pronounced as a glottal stop (as in written Burmese, where consonantal finals are pronounced as a stop) |
COVID-19 symptoms can sometimes persist for months. The virus can damage the lungs, heart, and brain, which increases the risk of long-term health problems. Most people who have COVID-19 recover completely within a few weeks. Older people and people with many serious medical conditions are the most likely to experience lingering COVID-19 symptoms, but even young, otherwise healthy people can feel unwell for weeks to months after infection. People sometimes describe themselves as "long haulers," and the condition has been called post-COVID-19 syndrome or "long COVID-19."

The most common signs and symptoms that linger over time include:

- Fatigue
- Shortness of breath
- Cough
- Joint pain
- Chest pain

Other long-term signs and symptoms may include:

- Muscle pain or headache
- Fast or pounding heartbeat
- Loss of smell or taste
- Memory, concentration or sleep problems
- Rash or hair loss
What Is an Essential Question?
Here are the seven defining characteristics of a good essential question:
1. Is open-ended; that is, it typically will not have a single, final, and correct answer.
2. Is thought-provoking and intellectually engaging, often sparking discussion and debate.
3. Calls for higher-order thinking, such as analysis, inference, evaluation, prediction. It cannot be effectively answered by recall alone.
4. Points toward important, transferable ideas within (and sometimes across) disciplines.
5. Raises additional questions and sparks further inquiry.
6. Requires support and justification, not just an answer.
7. Recurs over time; that is, the question can and should be revisited again and again.
Resources to assist with planning
Stage One - Desired Results
- Big Ideas by Grade
- Combined Grades Big Ideas
- Concept-based Teaching Brief
- Concepts List by Subject
- Transfer Goals Handout
- Essential Questions Explained
- First People's Principles of Learning
Stage Two - Planning for Assessment |
A census tract is, in United States practice, a geographic region defined in the course of carrying out a census and used to direct the process of the survey.

Census tract maps are provided to the employees with authority over this function, who in the United States work for the U.S. Census Bureau. Census tracts are used for this purpose by representatives of the U.S. government on a timeline of every 10 years. Outside of the United States, the territories that the U.S. Census Bureau would refer to as census tracts are instead called census areas or census districts.

Census tract maps pertaining to the U.S. show that census tract lines commonly coincide with those of other local divisions of land, such as towns, cities, or other kinds of local political entities; in terms of the size of a census tract, the census tract maps for one whole county will generally indicate several such political territories.

Regarding the place of the census tract and census tract maps in U.S. political practice, the census tract is generally considered to have originated as an American concept. Census tract divisions first began to be used in 1906 as a method for measuring statistical variations among local New York City neighborhoods, and were thus used in the census carried out four years later.
Free census records have been made available online for the research that people may wish to carry out on the past populations of the United States. To this end, online census search engines have proved particularly popular with people who are doing research into their own genealogy, or "family tree," as well as that of others.

Free census websites provide access to particular years of the census, of which there have been, in all, 22 in the entirety of U.S. history. Free census records, in addition to being commonly made available for inquiries within the United States, may also be accessed for other English-language systems such as those of Canada or the U.K.

For free census searches concerning the United States in particular, people can access online archives of stored records such as those for the 1850, 1920, and 1930 censuses. With older online census archives, people are often interested in the names and backgrounds of their ancestors during historically significant epochs.

For instance, free census records are commonly used to research the members of a person's family who might have been involved in a historical event like the Civil War. Online census records can show what a person's legally registered name was, where he or she was born, and, in some cases, the date of his or her death.
Indian Classical Music has been divided into two sub-genres: Hindustani Shastriya Sangeet, popular in North India, and Carnatic Music, practiced in the southern part of India.

Most forms of music have at least three main elements: melody, rhythm and harmony. Because of its contemplative, spiritual nature, Hindustani (north Indian) classical music is a solitary pursuit that focuses mainly on melodic development. In performance, rhythm also plays an important role, giving texture, sensuality and a sense of purpose to melody. Instruments like the tabla and pakhavaj are used to provide rhythm, while instruments like the tanpura accompany the performance to provide harmony.
In Hindustani classical music, once one has a command over the basic notes, one is introduced to ragas (which are like musical themes) and then encouraged to start improvising and making one's own melodies. The main thing Hindustani classical music does is to explore the melodic and emotional potential of different sets of notes. About five hundred ragas (including historical ragas) are known today. In Carnatic music, by contrast, there are 72 melakartas on which most compositions are based.
Because not everyone can master the rigorous training essential to appreciating Hindustani classical music, many semi-classical and lighter forms arose. These styles are less rigid, so that anyone can practice them and compose songs. Later, light music was adopted in movies, and many singers composed in this style. Due to the influence of films and television, these compositions came into the limelight and gained popularity with the masses. Folk music, on the other hand, is diverse because of India's vast cultural diversity. Though it has been weakened by the arrival of movies and Western pop culture, saints and poets have large musical libraries and traditions to their name.
Here are some notes from my study of Hindustani Classical Music.
Naveen Venkat (2017, December 9). Forms and Styles of Hindustani Vocal Music. Retrieved from http://www.naveenvenkat.com |
Physicists establish 'spooky' quantum communication
Physicists at the University of Michigan have coaxed two separate atoms to communicate with a sort of quantum intuition that Albert Einstein called "spooky."
In doing so, the researchers have made an advance toward super-fast quantum computing. The research could also be a building block for a quantum internet.
Scientists used light to establish what's called "entanglement" between two atoms, which were trapped a meter apart in separate enclosures (think of entangling like controlling the outcome of one coin flip with the outcome of a separate coin flip).
A paper on the findings appears in the Sept. 6 edition of the journal Nature.
"This linkage between remote atoms could be the fundamental piece of a radically new quantum computer architecture," said Professor Christopher Monroe, the principal investigator who did this research while at U-M, but is now at the University of Maryland. "Now that the technique has been demonstrated, it should be possible to scale it up to networks of many interconnected components that will eventually be necessary for quantum information processing."
David Moehring, the lead author of the paper who did this research as a U-M graduate student, says the most important feature of this experiment is the distance between the two atoms. Moehring graduated and now has a position at the Max-Planck-Institute for Quantum Optics in Germany.
"The separation of the qubits in our entangled state is the most important feature," Moehring said. "Localized entanglement has been performed in ion trap qubits in the past, but if one desires to build a scalable quantum computer network (or a quantum internet), the creation of entanglement schemes between remotely entangled qubit memories is necessary."
In this experiment, the researchers used two atoms to function as qubits, or quantum bits, storing a piece of information in their electron configuration. They then excited each atom, inducing electrons to fall into a lower energy state and emit one photon, or one particle of light, in the process.
The atoms, which were actually ions of the rare-earth element ytterbium, are capable of emitting two different types of photon of different wavelengths. The type of photon released by each atom indicates the particular state of the atom. Because of this, each photon was entangled with its atom.
By manipulating the photons emitted from each of the two atoms and guiding them to interact along a fiber optic thread, the researchers were able to detect the resulting photon clicks and entangle the atoms. Monroe says the fiber optic thread was necessary to establish entanglement of the atoms, but then the fiber could be severed and the two atoms would remain entangled, even if one were "(carefully) taken to Jupiter."
Each qubit's information is like a single bit of information in a conventional computer, which is represented as a 0 or a 1. Things get weird on the quantum scale, though, and a qubit can be either a 0, a 1, or both at the same time, Monroe says. Scientists call this phenomenon "superposition." Even weirder, scientists can't directly observe superposition, because the act of measuring the qubit affects it and forces it to become either a 0 or a 1.
Entangled particles can default to the same position once measured, for example always ending in 0,0 or 1,1.
"When entangled objects are measured, they always result in some sort of correlation, like always getting two coins to come up the same, even though they may be very far apart," Monroe said. "Einstein called this 'spooky action-at-a-distance,' and it was the basis for his nonbelief in quantum mechanics. But entanglement exists, and although very difficult to control, it is actually the basis for quantum computers."
Scientists could set the position of one qubit and know that its entangled mate will follow suit.
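To make the coin-flip analogy concrete, here is a minimal, purely classical Python sketch of what the measurement statistics of such an entangled pair look like. It is a toy illustration only: it mimics the perfect correlations described above, not the full quantum behavior, which no local classical model can reproduce in general.

```python
import random

def measure_bell_pair():
    """Simulate the measurement statistics of two qubits prepared in the
    entangled Bell state (|00> + |11>)/sqrt(2), measured in the standard
    basis: each outcome is 0 or 1 with equal probability, but the two
    qubits always agree."""
    outcome = random.choice([0, 1])
    return outcome, outcome  # both "coins" land the same way

# Flip 10 entangled "coins": each individual result is unpredictable,
# yet every pair is perfectly correlated.
for _ in range(10):
    a, b = measure_bell_pair()
    assert a == b
    print(a, b)
```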
Entanglement provides extra wiring between quantum circuits, Monroe says. And it allows quantum computers to perform tasks impossible with conventional computers. Quantum computers could transmit provably secure encrypted data, for example. And they could factor numbers incredibly faster than today's machines, making most current encryption technology obsolete (most encryption today is based on the inability for man or machine to factor large numbers efficiently).
Source: University of Michigan |
CUTTLEFISH can change their color and camouflage themselves, becoming almost invisible to the human eye. According to one report, cuttlefish “are known to have a diverse range of body patterns and they can switch between them almost instantaneously.” How do cuttlefish do it?
Consider: The cuttlefish changes color by using the chromatophore, a special kind of cell found under its skin. Chromatophores contain sacs that are full of colored pigment and that are surrounded by tiny muscles. When the cuttlefish needs to camouflage itself, its brain sends a signal to contract the muscles around the sacs. Then the sacs and the pigment within them expand, and the cuttlefish quickly changes its color and pattern. The cuttlefish may use this skill not only for camouflage but also to impress potential mates and perhaps communicate.
Engineers at the University of Bristol, England, built an artificial cuttlefish skin. They sandwiched disks of black rubber between small devices that function like cuttlefish muscles. When the researchers applied electricity to the skin, the devices flattened and expanded the black disks, darkening and changing the color of the artificial skin.
What do you think? Did the ability of cuttlefish to change color come about by evolution? Or was it designed? |
Heat capacity is the amount of heat that must be added to 1 mole of a substance to increase its temperature by 1 kelvin. The same quantity can be expressed in terms of mass, as the heat that must be added to 1 gram of a substance to increase its temperature by 1 kelvin. The former has units of J/(mol·K) and the latter J/(g·K). Heat capacity can be calculated in the following calculation tables.
Heat capacity depends on temperature; that is, for the same substance the heat capacity differs at different temperatures. To capture this, heat capacity is expressed as a function of temperature (T), usually valid over a given temperature range. These heat capacity equations take various forms and are provided in the literature.
In the following calculation sheet, we have considered three common equations for computing heat capacity. The constants in the equations are to be obtained from textbooks, literature, etc. An example calculation for each type of equation is carried out. You can enter the constants directly and get heat capacity values in J/(mol·K) and J/(g·K).
Equation type 1: Cp = A + B·T − C·T⁻²

Equation type 2: Cp/R = A + B·T + C·T² + D·T⁻²

Equation type 3: Cp = C₁ + C₂·T + C₃·T² + C₄·T³ + C₅·T⁴
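As a rough sketch of how such a calculation sheet can be reproduced in code (a minimal illustration, not the actual sheet; the example constants are of the sort tabulated for nitrogen gas in the type-2 form and should be treated as assumptions for demonstration), in Python:

```python
R = 8.314  # universal gas constant, J/(mol·K)

def cp_type1(T, A, B, C):
    """Equation type 1: Cp = A + B*T - C*T^-2, in J/(mol·K)."""
    return A + B * T - C / T**2

def cp_type2(T, A, B, C, D):
    """Equation type 2: Cp/R = A + B*T + C*T^2 + D*T^-2; returns Cp in J/(mol·K)."""
    return R * (A + B * T + C * T**2 + D / T**2)

def cp_type3(T, C1, C2, C3, C4, C5):
    """Equation type 3: Cp = C1 + C2*T + C3*T^2 + C4*T^3 + C5*T^4."""
    return C1 + C2 * T + C3 * T**2 + C4 * T**3 + C5 * T**4

# Illustrative constants only -- real values and their valid temperature
# range must come from textbooks or the literature.
T = 400.0          # temperature in kelvin
molar_mass = 28.0  # g/mol, roughly N2
cp = cp_type2(T, A=3.280, B=0.593e-3, C=0.0, D=0.040e5)
print(f"Cp = {cp:.2f} J/(mol·K) = {cp / molar_mass:.4f} J/(g·K)")
```

Dividing the molar value by the molar mass converts J/(mol·K) to J/(g·K), which is how the sheet reports both sets of units.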
To get a copy of Heat capacity calculation sheet, contact us here. |
Solid to Gas
The process by which a solid is converted to a gas is known as sublimation. The solid converts into gas without passing through the liquid stage. Only some compounds, like solid carbon dioxide, can go through the process at normal atmospheric pressure: it changes from the solid state to the gaseous state without ever being a liquid. Most substances require low atmospheric pressure to go through sublimation. Iodine can also be converted into gas directly from the solid, without melting, when heated.
Fast Facts: –
- The process of sublimation is most commonly used to describe the process of snow and ice changing into water vapor.
- It is not easy to see the process occurring, especially with the ice.
- Sublimation is a type of phase transition as the substance is being changed from one state to another.
- Sublimation happens because the substance absorbs energy from its surroundings so quickly that it never melts.
- Freezer burns are the result of sublimation of ice into water vapor.
- The element arsenic will also sublimate from the solid into the gaseous state.
- It is the process which is the main reason behind wearing down of glaciers.
- This process is also used for purifying compounds and especially for organic compounds.
- Latent fingerprints on paper can be revealed with the help of sublimation of iodine.
- Dry ice is used to create fog effects because it sublimates very quickly. |
Any heart problem can seem overwhelming at first, but there are many treatment options available.
Many heart valve problems are first identified by the presence of a murmur, or sound that can be heard by listening to the heartbeat with a stethoscope. A murmur may sound like a “whooshing” noise as blood flows from one chamber to the next, or it may sound like an extra click when a valve allows back flow.
Some murmurs are harmless. Others can indicate an underlying problem with the valve. If a murmur is detected, here are some possible causes.
Murmurs may indicate valve problems including:
- Stenosis: a narrowing or stiffening of the valve that prevents enough blood from flowing through
- Regurgitation: when valves allow blood to flow backward into the chamber
- Prolapse: a valve that has improperly closing leaflets
- Atresia: a valve that is improperly formed or missing
Causes of Valve Problems
Congenital defects (abnormalities present at birth):
Aging and age-related valve disease include:
- Degenerative valve disease – Over time valves can slowly degenerate. This most commonly affects the mitral valve. For example, mitral valve prolapse, a condition that affects 2% to 3% of the population, may eventually lead to mitral valve regurgitation and require treatment.
- Calcification due to aging – Sometimes calcium can accumulate on the heart's valves, most commonly affecting the aortic valve, and can lead to aortic stenosis.
- Mediastinal radiation therapy (radiation to the chest) – Studies have shown childhood cancer survivors who had radiation therapy have an increased chance of valve disease later in life.
Related illnesses and conditions that can cause valve problems:
These conditions can cause one or more of the heart valves to leak blood backward into the heart chambers or fail to open fully. This makes your heart work harder and lessens its ability to pump blood. Although valve problems can potentially be severe and life-threatening, most valve conditions are also highly treatable. |
Pseudoscience ("false science") is an idea that looks like science, but is not. Pseudoscience may fail one or more of the requirements of science. Sometimes pseudosciences are ideas that are thought to be wrong, like scientific racism.
Essentially, pseudoscience is any idea about how nature works that is generally not accepted as true by the mainstream scientific community. An idea can be considered pseudoscientific for any number of reasons. The word pseudoscience literally means "false science". Creationism and astrology are both well known pseudosciences.
Pseudoscience is often considered immoral by scientists not because its claims are undemonstrated, but because they are sometimes presented as facts and/or real. An average person might not recognize the difference in credibility between a television program about psychics supposedly reading people's thoughts and one that presents evidence for and against global warming.
Differences between pseudoscience and science
- Pseudoscientific ideas are not tested, or cannot be tested (i.e. they are not testable). Scientific ideas are tested, and are testable.
- Pseudoscientific ideas are not given to scientists to read before they go into a paper (called "peer review"). Science papers are peer reviewed.
- Pseudoscientific ideas are not based on facts. Science is based on facts and observations.
Types of pseudoscience
Ideas (more properly "hypotheses") about how nature works may be considered pseudoscientific for many reasons. Sometimes the hypothesis is simply wrong, and can be demonstrated to be wrong. An example of this is the belief that the Earth is flat, or the belief that human female skeletons have one more rib than men do. Ideas such as these are considered pseudoscientific because they are simply wrong.
Sometimes, scientists agree that a certain idea may be true, but could never be demonstrated to be true, even in principle. For example, some people believe that the Earth and the universe came into existence last Thursday. They believe that when the universe came into existence last Thursday, it was created with the appearance of being many thousands or even millions of years old. According to these believers, even our memories of two weeks ago are actually just the false memories that came along with the creation of the universe, which took place last Thursday. Such a belief is considered pseudoscientific because it is not falsifiable—scientists cannot even imagine an experiment that could shed light on whether this belief is true or false.
Other types of pseudoscience are considered pseudoscientific because they are based on deception, even though the idea being used is not impossible. Examples are people who claim to have made time travel devices, antigravity devices, or teleporters. Scientists simply do not have the technology to build such things in modern times, even though they may be able to someday.
Some ideas are arguably pseudoscientific. This means that some mainstream scientists consider the idea pseudoscientific and some do not. Certain ideas about how the stock market behaves fall into this category.
Pseudoscience is not exactly the same thing as biased research, where the scientist has some bad motive (such as personal gain, fame, or financial profit) for promoting their findings. It is also not the same as an untested hypothesis, which is an idea that scientists cannot test yet because they do not have the money or technology to do so. The theories of quantum gravity are untested hypotheses: scientists can easily imagine experiments to test them, but they just do not quite have the technology to do so at this time.
When it comes to language learning, songs are extra special. They have a dimension that makes them more than language. Using songs to teach Spanish is incredibly effective. I think of music as language glue, because it sticks language to the human brain in a way nothing else can. Just think of all the lyrics you can sing and how easily you can do it!
Many songs introduce kids to culture. They can teach everything from geography to food. Values and history are part of many songs, too. You can tap into this cultural component using a variety of activities.
To use songs to teach Spanish, incorporate visuals, movement and text. Below are 50 ideas using songs to teach Spanish to kids of different ages and levels. You can do many of the activities as listening activities first. Once kids know the song, they will sing along.
Check out our favorite Spanish songs for kids grouped by theme!
Movement with Songs to Teach Spanish
- Do gestures that represent the lyrics.
- Teach sign language for key vocabulary.
- Do actions like jumping or turning around that let kids use their whole body. You can associate an action with a word, or do them to the rhythm of the song.
- Have kids move toy animals to make them do the actions in a song.
- Do traditional finger plays or make up your own.
- Learn dance songs like La mascota by Pim Pau, La Yenka, or Chuchuá.
- Draw in rhythm to music. Cantoalegre has amazing activities for dibujo rítmico.
- Toss balls in rhythm to the song.
- Do traditional hand clapping games, group clapping games or make up your own. Here are a few posts with clapping games to get you started.
- Learn the cup rhythm. It can be done with many songs, but the most popular is Si me voy.
- Sing songs for daily routines, like A guardar for picking up or Al agua pato for bath time.
- Invent songs using familiar tunes to sing about what kids are doing. You can sing things like Ponte, ponte, ponte los zapatos to the tune of 10 little fingers. Yo canto esta has a version of La bamba for washing hands.
Using Pictures and Objects with Songs to Teach Spanish
- Point to pictures cards or parts of a scene when words are mentioned. All the songs to teach Spanish from Rockalingua have lyric sheets with pictures.
- Hold up pictures or words when you sing them.
- Make a caja de canciones with figures for different songs. This wonderful idea is from a blog called Rejuega.
- Act out the song with puppets as you sing.
- Tell a song as a story with puppets. Add dialog and incorporate lines from the song as narration. Try it with Los pollitos. It is so fun!
- Represent the lyrics with picture cards and have kids put them in order as they sing.
- Use clip art or draw a scene to make a coloring page to represent the song.
- Use a felt board to tell and sing the song.
- Let children bounce a stuffed animal to the rhythm of a song as you sing.
Reading and Writing Using Songs to Teach Spanish
- Create cloze exercises (fill-in-the-blank) with key vocabulary using the lyrics.
- Put the lyrics on cards and have kids put them in order as they sing.
- Have kids write in rhyming words as they listen.
- Ask reading comprehension questions about the lyrics of a song.
- Find synonyms and antonyms for words in the lyrics.
- Have kids write a new verse.
- Let kids change one line to change the meaning of the song.
- Let kids substitute one word for another to change the meaning of the song.
- Illustrate the song in one scene and label the words.
- Illustrate the song line by line or verse by verse and make a mini book.
- For little ones, represent key vocabulary with pictograms. Read and sing together.
- Use the lyrics of a song as shared reading.
- Treat songs as literature. They are poems put to music, so you can talk about stanzas, literacy elements and themes. Consider the rhyme scheme and figurative language. If the song tells a story, talk about the elements of plot structure.
Culture Activities for Songs to Teach Spanish
- Talk about places, names, food, or culturally relevant objects mentioned in songs.
- Use songs to talk about regional variations in vocabulary.
- Teach songs related to holidays and other celebrations.
- Talk about, or better yet, make and taste foods mentioned in songs. José-Luis Orozco has recently released ¡Come Bien! Eat Right! It has wonderful songs about food! For older kids, if you want to go to the extreme, try Lupita’s Taco Shop, which lists a staggering number of Mexican foods.
- Learn about instruments from different countries. Daria’s World Music has lots of great information and materials. You can find crafts and coloring pages here.
Book Activities for Songs to Teach Spanish
- Sing the words of a simple picture book. Here is a good example with Luna, a book published by Kalandraka.
- Read picture books that come with songs on CD. Here are a few favorites:
- Barefoot Books publishes wonderful picture books with CDs. The CDs have a song version of the story. Try Algarabia en la granja or Vivamos la granja.
- There are many lovely collections of traditional songs for children. Many of these include the music in the book or have CDs.
- Arrorro, Mi Niño is a bilingual collection of traditional baby games and lullabies from fourteen Spanish-speaking countries. Tortillitas para mamá and Arroz con leche are also favorite collections of traditional nursery rhymes and songs.
- Jose-Luis Orozco has two wonderful collections of traditional songs in Spanish: Diez deditos and Other Play Rhymes and Action Songs from Latin America and De colores and Other Latin American Folk Songs for Children. You can purchase CDs of the songs in these books.
Finally, as you are working with kids and music, follow their lead. I have had preschoolers spontaneously sort Legos to color songs and older kids sing duets with themselves using their phones. Kids are imaginative by nature and music inspires them. There are countless ways to use songs to teach Spanish to children! |
The traditional museum display is constructed around objects, thus making material culture a key constituent of most museum interpretation narratives. The origin of this model can be traced in some part to private collections maintained by prominent individuals during the Renaissance. Many of the significant museums in the world opened during the 18th century, an era when the trend of collecting reached a climax. Private collections of art, objects, rare books and curiosities functioned as symbols of social prestige, and it was through the collection and consumption of objects that one acquired knowledge and superiority. With a concern for the continuity of collections, as well as a development in thinking which prioritised public education, many of these private collections were left to the state.
That museums act as repositories of genuine articles is in many ways their strength. In our modern world of reproduced images and mass production, museum artefacts inspire a sense of wonder, reality and nostalgia. However, museums are being put under increasing amounts of pressure to remain relevant to today’s visitor, and not simply to function as dusty storage facilities.
In more recent history there has been a trend for ‘Living Museums’ such as Beamish, the North of England Open Air Museum in County Durham, and the Black Country Living Museum near Dudley. At the Black Country Living Museum, staff and volunteers dress in authentic clothes and interpret the lives of past residents of their open air village. The museum creates a lifelike experience for the visitor, engaging all the senses, even offering traditional food from the bakery, sweet shop and fish and chip shop. These living museums tap into a unique consumption of historical knowledge, quite literally bringing the past into our present-day culture. However, whilst in many ways living museums engage a younger audience in a practical and easily digestible fashion, some authorities have challenged this style of museum for its lack of authenticity. Critics say that no matter how closely one attempts to remain authentic, it is impossible to recreate a complete society from another time.
Next year will mark the 400th anniversary of William Shakespeare’s death, and will see the opening of a new visitor attraction, Shakespeare’s Schoolroom and Guildhall. These were the formative learning places for a young Shakespeare, who was taught in the school and saw some of his first theatre performances at the Guildhall. The project will enable the public and tourists from around the world to sit where Shakespeare was first taught, and to experience the place that inspired Shakespeare to become the world’s greatest playwright. The King Edward VI School was established in the 13th century and is still open as a working boys’ grammar school and academy today. Unlike living museums which aim to recreate the past, this functioning schoolroom and Guildhall create a unique visitor experience, where one can experience history, but also understand its impact on the present.
Similarly, the bringing together of past and present is perhaps one of the key strengths of the Mary Rose Museum. The Mary Rose was raised from the sea bed in 1982 by the Mary Rose Trust, together with thousands of artefacts. The finds included weapons, sailing equipment, naval supplies and a wide range of personal objects. More unusual finds included a feeding bottle, a backgammon set and peppercorns, as well as the full remains of the ship’s dog ‘Hatch’. Since being raised, the remarkable history of the Mary Rose and her objects has continued to be revealed through ground-breaking research. The STEM lab, a programme of outreach and education activities for school children, offers expert-led classroom and laboratory sessions designed to enrich teaching. The museum thus functions not only as a visitor attraction but as an educational facility and a centre of academic research. The success of this model is clear, as the Mary Rose Museum has welcomed over one million visitors since opening in May 2013.
In a time when museums are under increasing pressure to remain relevant, to serve a wider audience and promote social change through learning, they must learn to adapt and redefine themselves. It is perhaps those museums that effectively combine our past and present which will flourish. |
Imagine you're a child intently working away on a painting, when the child sitting next to you knocks your elbow, spilling paint and spoiling the drawing. How should you react?
Was it intentional, or just a careless accident? Young children rely on their parents and teachers to guide them to make sense of their social worlds. A new study shows how parents can help inoculate their children against hostility and aggression, just by talking to their children.
Dutch parents were shown picture books that told four different stories, each in two pictures. The stories were about provocative situations, such as a child being skipped over when candy was handed out, or prevented from sitting down at a table with other children. There was no explanation as to why the provocation was occurring.
When parents talked to their children, describing the harm as accidental or unintentional, the children were less likely to interpret a story, old or new, as an act of aggression.
“Young children may feel physically hurt, left out, or frustrated by their peers' actions, the intent of which, at this age, is frequently unclear,” explained Anouk van Dijk, a postdoctoral researcher in psychology at Utrecht University who led the research, in a statement.
“While most children interpret ambiguous slights as accidental, some feel they are hostile. By framing social situations in a positive way, parents can help their children perceive less hostility in their social worlds and thus, reduce their likelihood of behaving aggressively.”
Aggression tends to inspire more aggression. And a hostile perspective tends to grow stronger over time — the proverbial chip on the shoulder. Everyone has days when the world seems to be against them. The earlier that children learn not to take it personally, the better off they'll be.
The study appears in Child Development and is open access. |
Child nutrition is tricky, especially when getting your child to eat healthy is always an uphill battle. So, here are some easy tips and tricks you can use to encourage your child to eat healthier, without fighting them on it!
1. Set a good example.
Young people are most influenced by what they see and experience, not by what they are told. Leading by example is the best way to shape children’s behaviour and their diets. Remember, the habits children form when they are young often stay with them for life!
2. Feed your child(ren) a balanced diet.
Natural tastes for food develop early on. Babies are born with a palate inclined to sweet foods, so it is of utmost importance to broaden children’s palates and encourage savory foods from a young age. If a child becomes accustomed to the flavors that nature provides and has few experiences with the sensory overload that is refined/processed food, the child will be less likely to crave these foods later in life (and will have a reduced risk of health problems related to said foods).
3. Ensure plenty of fluids, especially good quality drinking water.
Children should be encouraged to drink water before any other liquids, especially sugary drinks. Try your best to avoid juices, even those that say ‘no added sugar’ or ‘reduced sugar’; these are still PACKED with sugar (check the nutrition facts) and have relatively no other beneficial nutrients. Products like these can cause mood swings, among other problems. If your child(ren) absolutely must have juice, try diluting it with water to reduce the impact of the sugar.
4. Do not bribe your child with sugar.
It is all too easy to give children sweet treats whenever they need attention or a distraction. However, this conditions the reward/pleasure systems of the brain to demand sugar in response to stressors, and this remains ingrained in their brains as a reward for the rest of their lives. This conditioning is extremely hard to reverse and can lead to many problems later in life, such as emotional or stress eating, overeating, and undereating. Instead, encourage children with healthy snacks, with small toys, or with experiences.
5. Have healthy snacks available around the house.
Often the problem with healthy snacks is that they take a little more time to prepare. By having healthy snacks pre-made and ready to go, the convenience is the same come snack time. These snacks can then be readily available for whenever they need, like when they get home from school.
6. Involve your child(ren) in shopping and preparing foods.
Shopping with young children can be daunting. However, teaching them good shopping habits from a young age will teach them not only how to shop, but how to make good choices when surrounded by temptation. Teaching children how to prepare the foods they like will also give them the skills they need to prepare healthy options later in life, and can allow them to be creative with food.
7. Plant a garden.
Planting a garden with your child(ren) can be more than just a bonding experience. It can teach them the value of good, clean, quality produce and allow them to experience eating right off the vine. It can be magical for children to watch and cultivate something that they planted. The produce you plant can then turn into a meal that they helped make from beginning to end! If you don’t have a garden, suggest a community garden at their school or in your town, or join with friends/neighbours to plant a garden for everyone to enjoy.
8. Organize your refrigerator/pantry in a way that allows your young ones to get what you want them to have.
Making healthy foods readily available for children will encourage them to choose those foods over anything else. If there are treats in the house, keep these out of reach/out of sight, and allow them only on occasion. Even if they share their lunches and end up having a little more junk food at school, teaching them to eat healthy at home will make a world of a difference!
9. Help your children to limit or avoid their intake of unhealthy additives.
The basic additives to watch out for are artificial colors and flavours, excess sugar, MSG, aspartame, sodium nitrite (often in cured/lunch meats), salt, excess sulfites in dried fruits and preserved foods, and hydrogenated (trans) fats. The best way to avoid additives is to eat a diet consisting of whole foods (fruits, vegetables, whole unprocessed grains, high-quality protein) and to avoid anything in the middle aisles of the grocery store (stuff that has a long shelf life, processed foods, pre-packaged foods, instant meals, treats, etc.).
10. Watch out for food allergies and delayed sensitivity reactions, which are very common in children.
When you limit or avoid foods that cause reactions in your child(ren), you will notice immediate differences in behaviour and health. Sensitivity reactions can occur even 3 days after consuming a product and can cause fatigue, brain fog, mood swings, digestive problems, hyperactivity symptoms, frequent illness; the list goes on. One example is chronic ear fluid in children, which is often treated as chronic ear infections with strong antibiotics. Ear fluid can be an indicator of a dairy sensitivity, and often removing all dairy from the diet will reduce the fluid in the ear. Common sensitivities include dairy and gluten, as well as some preservatives and artificial colors. Food sensitivities have a vast and expansive range and can involve almost any food: pineapple, garbanzo beans, beef, or garlic. Be sure to look for reactions your child(ren) may be having and do your best to eliminate the foods causing them.
Skin cancers are named for the type of cells that become malignant (cancer). The three most common types are:
Basal cell skin cancer: Basal cell skin cancer begins in the basal cell layer of the skin. It usually occurs in places that have been in the sun. For example, the face is the most common place to find basal cell skin cancer. In people with fair skin, basal cell skin cancer is the most common type of skin cancer.
Unlike moles, skin cancer can invade the normal tissue nearby. Also, skin cancer can spread throughout the body. Melanoma is more likely than other skin cancers to spread to other parts of the body. Squamous cell skin cancer sometimes spreads to other parts of the body, but basal cell skin cancer rarely does.
When skin cancer cells do spread, they break away from the original growth and enter blood vessels or lymph vessels. The cancer cells may be found in nearby lymph nodes. The cancer cells can also spread to other tissues and attach there to form new tumors that may damage those tissues.
The spread of cancer is called metastasis. |
Stroke Order Rules Group 4
In this Chinese character lesson we will learn Group 4 of Chinese Stroke Order Rules, and see some example Characters that apply these two rules.
- Stroke Order Rule 7: Middle Before the Two Sides.
- Stroke Order Rule 8: Stroke Order of Dot.
- What would be an example Character that applies Stroke Order Rule 7, Middle Before the Two Sides?
- What would be an example Character that applies Stroke Order Rule 8, Stroke Order of Dot?
Stroke Order Rule 7: Middle Before Two Sides
Middle Before Two Sides, 先中间后两边.
For characters that are split into two sides by a central divider, we’ll write the divider first and then the two sides, always starting with the left side followed by the right.
小, 永, 乖, 水
Stroke Order Rule 8: Stroke Order of Dot
The Stroke Order of the Stroke Dot 点 (Diǎn), in a character, is decided by the orientation or location of the Dot.
There are four different locations where a single Dot can be placed in a character: top center, top left, top right, or inside a frame. In situations 1 and 2, we write the Dot first, before anything else. In situations 3 and 4, we write the Dot last.
Situation 1. Top Middle
When the 点 sits at the top center of a character, we’ll write 点 first and then the rest of the character.
Situation 2. Top Left
When 点 is at the top left, we’ll write 点 first and then the rest of the character.
Situation 3. Top Right
When 点 is at the top right of a character, we will write 点 last.
Situation 4. Inside A Frame
When 点 sits inside a “frame”, we’ll write the frame first, and then finish the character with 点. |
Creation, development, and evolution of a curriculum can only occur in a classroom. We have spent thousands of hours in classrooms across the country, working directly with thousands of teachers and students, creating and developing the LabLearner curriculum. It’s this combination of careful research and the ability to listen to teachers and school administrators that has led to the well conceived and thoughtful curriculum that you see today.
By 2004, rather than thinking in isolated categories like Biology, Chemistry, Earth Science and Physics, our LabLearner science curriculum began to revolve around nine Core Concepts™ which we believed offered a much more useful way to organize elementary and middle school science instruction. It also results in a spiraling curriculum where scientific concepts build upon each other, year after year. This is now precisely the recommendation of the Next Generation Science Standards (NGSS).
Over 60 CELLs (Core Experience Learning Labs) are strategically inserted throughout the PreK-8 educational experience for a truly spiraling curriculum (see below). At each grade level, developmental and academic skills are accounted for, including mathematical, reading, writing, critical thinking, and fine motor skills, among others.
In addition, LabLearner CELLs have been designed so that essential scientific themes spiral throughout the curriculum from PreK through eighth grade, taking into account the neurocognitive processing mechanisms of elementary and middle school students, while remaining perfectly correlated with academic standards.
In 2010, the National Research Council released what they called “Crosscutting Concepts” as a means of integrating the K-12 science curriculum into a unified, spiraling curriculum of related concepts. These concepts now serve as the basis of the newly formulated Next Generation Science Standards (NGSS).
Beyond the cross-cutting concepts, the LabLearner Program is infused with the Common Core English Language Arts (ELA) and Math Standards.
Information processing is important in learning and memory. In fact, if new information is not processed it will not lead to permanent new memories, and learning will not occur. Conversely, the more extensively new information is processed, the better it will be remembered. The steps involved in the input and processing of new information are summarized in the Information Processing Model.

Let's consider this model in overview. The Input arrow indicates new information entering the brain. Most information that we are presented with never makes it through the sensory register and is lost, as indicated by the downward arrow on the left (leading to "Forgotten"). We receive an enormous amount of information, thousands of stimuli per second, and we cannot process so much information at one time.

Consider, for example, the "feel" of your right foot at this very moment. If you choose to concentrate on it, you can actually sense information being sent from your foot to your brain. Obviously, it would be difficult to concentrate on anything else if we spent all of our time dealing with information coming from our right foot, let alone the rest of our body and environment. Therefore, we routinely filter out almost all information delivered to our brain through the sensory register.
Below are samples of the LabLearner curriculum aimed at giving you a glimpse into the teaching, application and absorption of the curriculum material.
- Each academic year is divided into a series of Core Experience Learning Labs (CELLs)
- Each CELL consists of three to six Investigations, each of which takes approximately a week to complete.
- Each Investigation consists of three lessons: PreLab, Lab, PostLab
Grade Three: Investigation Four: Exploring Electricity
Exploring Electricity Sample Lesson Plan includes introductory material for the entire CELL and information to teach the pre-lab, lab, and post-lab lesson of Investigation Four and the Performance Assessment. Performance Assessments are in-lab lessons in which students must solve a problem using concepts from previous Investigations by designing their own experiment.
Exploring Electricity Sample Student Data Record (SDR) serves as the students’ workbook/science journals. This sample includes key vocabulary, entries for data collection, pre and post-lab activities, and data analysis for Investigation Four and the Performance Assessment. The teacher key for all SDR questions is located within the Exploring Electricity Sample Lesson Plan.
Exploring Electricity Sample Pre-Post Assessments provide an individualized assessment tool for student evaluation. Pre-tests are designed to elicit information about a student’s background knowledge of concepts prior to beginning a CELL. Post-tests capture student understanding of concepts at the completion of a CELL. Both student and teacher keys are included.
Exploring Electricity Sample CELL Summary provides a way to review experiments conducted in the LabLearner Lab. The summary is broken down into the experiments students performed in the Investigation Four lab AND the results of the experiments.
Exploring Electricity Sample Videos: Videos for both teachers and students accompany each Investigation. Teacher videos focus on inside tips, logistics suggestions, and useful theoretical background for each Investigation. Student videos are viewed during the PreLab so students know exactly what to expect and do when they get to the Lab.
- Each academic year is divided into a series of Core Experience Learning Labs (CELLs)
- Each CELL is divided into a theoretical introduction for the teacher and three to four Investigations.
- Each Investigation consists of four lessons: Concepts, PreLab, Lab, PostLab
Grade Seven: Investigation Two: Chemical Reactions
Chemical Reactions Sample Introduction provides a summary of concepts introduced in the CELL, key terms and materials needed to teach the entire CELL.
Chemical Reactions Investigation Two Sample Concepts: introduces and explains the scientific concepts that will be introduced in Investigation Two Lab.
Chemical Reactions Investigation Two Sample Pre-Lab reinforces the scientific concepts within the CELL and their real-world connections. Included in the lesson are key vocabulary, Focus Questions relevant to the upcoming lab and a short video highlighting key steps and manipulations that will be encountered in Lab.
Chemical Reactions Investigation Two Sample Lab includes important teacher preparation for the Lab, objectives of the Lab, lab materials, and lab procedures.
Chemical Reactions Investigation Two Sample Post-Lab includes analysis questions and suggested responses, suggested data to use as a reference, and the opportunity to compare lab experiences, discuss data trends, and summarize conclusions.
Chemical Reactions Performance Assessment lists the objectives of the assessment, provides the teacher with instructions for preparing assessment materials, lists materials needed for each student lab group, and includes the Student Data Record (SDR) Background(s) for the assessment and suggested data to use as a reference. Performance Assessments are in-lab lessons in which students must solve a problem using concepts from previous Investigations by designing their own experiment.
Chemical Reactions Sample Performance Assessment Rubric serves as a guide for teachers to assess students’ work in the Performance Assessment. Rubric goals align with the goals provided to students in their copy of the Student Data Record (SDR) of the Performance Assessment.
Chemical Reactions Sample Student Data Record (SDR) serves as the students’ workbook/science journals. This sample includes key vocabulary, background information, entries for data collection, and data analysis for Investigation Two and the Performance Assessment.
Chemical Reactions Sample Pre-Post Assessments provide an individualized assessment tool for student evaluation. Pre-tests are designed to elicit information about a student’s background knowledge of concepts prior to beginning a CELL. Post-tests capture student understanding of concepts at the completion of a CELL. Both student and teacher keys are included.
Chemical Reactions Sample CELL Summary provides a way to review experiments conducted in the LabLearner Lab.
Spiraling Curriculum Example: Developing a Concept of Heat through Hands-on Experiments
As an example of how the LabLearner curriculum spirals concepts from primary grades through middle school, consider the concept of heat. The concept of heat is discussed with students before they can read. They perform experiments with thermometers at this time as well.
As the years progress, students learn more and more about heat: how it can be used to do mechanical work, how heat is involved in chemical reactions and even how photons are produced in the Sun’s interior by nuclear fusion.
Kindergarten: Exploring Time and Sequence
Students investigate the effect of heat by comparing two ice cubes: one ice cube that sits under a lamp and another that remains “shaded.” They contrast the appearance of each over a period of time. In addition, students drop food coloring into cold and hot water and observe differences in the time it takes for the dye to spread in each temperature water.
Grade 1: Weather Changes
Students perform experiments and construct models that explore weather and changes in weather. Among other fundamental concepts, students explore the characteristics of the four seasons and the changes in temperature that occur during each season. Students measure the temperature in different school locations after learning to use thermometers.
Grade 3: Our Solar System
Students are introduced to basic physical principles through investigations related to our solar system. A central focus is their investigation of three forms of radiant energy released by the Sun: visible light, infrared energy, and ultraviolet energy. As a part of their investigations, students explore how radiant energy produces temperature changes in the Earth’s atmosphere.
Grade 4: Forms of Energy
Through experimentation, students explore the concept of energy. Students learn to identify and understand the different forms of energy, including thermal energy and heat, and their uses in everyday life. As a focus, students examine the Law of Conservation of Energy as it applies to chemical energy and heat by conducting experiments with endothermic and exothermic reactions.
Grade 5: Investigating Heat
Students explore heat, temperature, and the transfer of heat through conduction, convection, and radiation. They investigate the relationship between kinetic energy and temperature, and the relationship of heat to kinetic energy and temperature. In addition, they learn to calculate the rate of heat transfer and use this formula to draw conclusions as they explore different types and processes of heat transfer. Investigations also focus on heat conductivity and the understanding that the chemical composition of matter determines its ability to transfer heat.
Grade 6: Kinetic and Potential Energy
Through experimentation and data collection, students explore the following types of questions: How does the transfer of potential energy to kinetic energy relate to the Law of Conservation of Energy? Can one form of energy be converted to another? An area of focus is the transfer of energy from light (kinetic energy) to another form of kinetic energy: thermal energy, or heat.
Grade 6: Atmosphere
Students explore Charles’ Law, the relationship between the temperature and density of air, and the relationship between the density of air and atmospheric pressure. Students investigate how changes in the temperature of the atmosphere produce changes in its density. Through their experiments, students gain a better understanding of atmospheric events, weather phenomena and the scientific principles that govern them.
Grade 7: Chemical Reactions
Students investigate chemical reactions and the principles that govern the consumption of reactants, the production of products, and the dependence of reaction rate on reactant concentration. Through experimentation and data collection, students study endothermic and exothermic chemical reactions and build a calorimeter to follow the heat production as chemical reactions take place. They tie their observations to the Law of Conservation of Matter, the relationship among the reactants, the products and the rate of the reaction.
Grade 8: Heat Transfer
Students investigate the relationship between kinetic energy, temperature, and heat transfer during physical changes of state. Further, the effect of solutes on freezing point and boiling point is explored. Finally, students explore specific heat capacity by quantitating the amount of heat that is absorbed by a solid and calculating the amount of heat that the solid can transfer.
Cross-Section of Assessments
Most educators and researchers agree that science should primarily be taught in a hands-on, experiential fashion. Agreeing on that approach is easier, of course, than measuring its success. Thus, the baseline question for educators and researchers alike in the field of science education today is: What constitutes a good or successful science experience for students? We're able to make this assessment in a number of ways.
Assessments and Testing – Are We Doing It Correctly?
What Can We Learn From the Research and Clinical Sciences Models? Several years ago, my colleague, Dr. Paul Eslinger, and I were asked to write a short piece about "high-stakes" testing in K-12 systems for The Education Policy and Leadership Center. At the time, Paul was a neuropsychologist who worked with patients suffering from various forms of […]
The preschool portion of the LabLearner program was assessed over a five-month period in eleven preschool classrooms, including Head Start and STEP classrooms, in three different states: Pennsylvania, Florida, and Virginia. The goal of the assessments was to determine whether the LabLearner Preschool Program fostered the development of critical thinking, problem solving, numeracy, fine motor control, […]
The preK-8 LabLearner Program consists of some 60-plus individual science units called Core Experience Learning Labs (CELLs). Each CELL takes approximately four or five weeks for a class to complete, working in teams of four to six students. In grades one through eight, students complete a pretest, taken before the CELL begins and a posttest, […]
Blue Ribbon Awards
The U.S. Department of Education, National Blue Ribbon Award is one way to assess the academic impact of a curriculum or program. The Blue Ribbon is widely considered the “highest honor a school can achieve”. This is because the Blue Ribbon is not just a measure of minimal compliance or simple standardized test scores. It is […]
Within the U.S. there is a large diversity of standardized tests that public and private schools choose to assess their students in science. LabLearner schools across the U.S. utilize a variety of standardized tests such as the Virginia Standards of Learning (SOLs), the Michigan Educational Assessment Program (MEAPs), the Stanford Achievement Test, 10th edition (SAT […]
Perhaps the best evaluation of all is to simply talk to students to find out what they really know about science. Teachers try to do this as much as possible because rather than a static, one way “report” of information from student to teacher, a back and forth exchange of thoughts occurs. Students can ask […]
LabLearner Video Overview |
Students with learning difficulties often have trouble at school because they don’t have effective strategies for working through challenges.
These students can benefit from help to take charge of their own learning, monitor their behaviour and progress and make adjustments along the way.
1. Setting Goals
When done in the right way, goal setting gives students power over their own learning and opportunities to look at their own behaviour and identify ways they can improve. Setting goals helps students identify what they need to do, lets them see how they are progressing, and motivates them to act productively.
The goals students set for themselves should be specific and challenging, but not too hard. The student should be able to reach their goal quickly so they can feel good and move on to the next goal. As every student is different, every student's goals will be different. One student might identify that they don't get their homework done because they aren't managing their time, so they might decide to cut out a recreational activity to achieve the goal of getting their homework done before dinnertime. Another student might identify that he struggles with homework because he forgets to bring the homework instructions home, so he might realise he needs to bring his notes home so he can reach his goal of completing his homework each day.
2. Self-Monitoring
Self-monitoring involves a student asking himself whether he has engaged in a specific, desired behaviour. A student might ask himself, Am I using my time in the right way to complete my homework by dinnertime? Or, Did I put all of my assignments in my backpack to take home? Students may also self-monitor for behaviours like paying attention, staying on task, and meeting performance expectations such as completing all homework problems or spelling 8 of 10 spelling words correctly.
3. Self-Talk
Self-talk is part of normal development for many younger children and can be effective at any age when used to self-monitor and direct learning behaviour. For example, a student who is having trouble understanding a challenging text might think, I need to look up the definitions of these unfamiliar words and read this page again.
Students can use self-talk to remind themselves to focus their attention, to take positive steps when faced with difficulties, and to reinforce positive behaviours. Teachers and parents can model effective self-talk, but should allow each student to create and use her own statements. Taking some time to write out some useful statements before starting a new project or beginning a homework assignment can help students get themselves out of a tight spot.
4. Self-Reinforcement
Self-reinforcement occurs when a student chooses a motivating reward and then awards it to himself when he achieves a milestone. Self-reinforcement can be short or long term and can relate back to goals that have been set. The student who has identified time-management as an issue, for example, might decide, I can go to the movies on Sunday because I finished all of my homework before dinnertime every night this week.
Self-reinforcement can also work well in the classroom. Teachers and students can select rewards together and teachers can let students know how to earn them. Once a student has met the criteria for a reward, she can award it to herself – say, by selecting a sticker for her journal after completing the day’s writing assignment and getting her teacher’s approval.
Source: Self-Regulation Strategies for Students With Learning Disabilities by Carrie Gajowski |
Holistic, Multi-Disciplinary, Globally Cooperative Approach
Since each generation must start from a blank slate of knowledge of the world they are born into, education plays a critical role in the future of our species. The fundamental job of education is to prepare the next generation for the challenges of their era, using the knowledge and wisdom that has been gained from all previous generations. This job is rapidly becoming more difficult in recent decades due to many factors, but principal among these is the nearly explosive growth in global population.
Throughout most of recorded history, the global population remained below one billion. However, as shown in the figure above, in just the last few decades it has grown to seven billion, and it is expected to reach nine billion before the midpoint of this century. This rate of population growth is astounding and unprecedented. Never before has our planet been called upon to provide a suitable habitat for so many people. The projected demands for food, clean water, sustainable energy, affordable healthcare, security, and the joy of living are not only unprecedented—they are much more complex than in any previous generation. There is hardly any aspect of life in the next generation that will not be dominated by man-made or engineered objects and environmental factors. This explosion in human population and influence was largely enabled by technological innovations resulting from the rapid recent expansion of our understanding of the world we inherited (often including unintended consequences).
Technology has proven to be an amplifier on human activity. In each generation, a smaller and smaller number of people are able to influence a larger and larger group of others through the amplifying effects of technology. This influence may be beneficial, or it may not. It may be intentional, or it may be unintentional. However, in order to succeed in managing the Grand Challenges of the 21st century, the intentional and coordinated application of technological innovation is essential and must become widespread.
Furthermore, societal expectations continue to expand as we become more globally aware of the possibilities for the quality of life. As a result, we should anticipate the expectation that the quality of life afforded to each generation will continually advance, in spite of the challenges ahead.
In addition, the complexity of the challenges faced by the next generation is far greater than anything we have faced in the past. The basic problems they will face are inherently global, transcending time zones, political boundaries, and academic disciplines. It isn’t obvious that the knowledge obtained in far simpler times will even be especially useful in meeting the challenges of the 21st century. Today, we are unable to predict the major events of our world even a few days or months in advance, not to mention 40 years in advance—yet we must educate the next generation for the conditions they will face decades from now.
While it is implicit that knowledge will continue to play an important role in any education, a good case could be made that creativity and adaptability may become more important than knowledge in the 21st century—at least knowledge as we know it today. We must be open to a complete rethinking of the purpose of education to provide the best preparation for these future challenges.
Because we have been conditioned to think of education as the accumulation of knowledge, we are naturally inclined to ask the question “what does every person need to know in the 21st century?” This is because in simpler times, the rate of change of life was much lower and the world was a much more predictable place. The rate of change of knowledge was sufficiently low that it could be treated as a static quantity that could be passed on from one generation to the next for the purpose of both understanding and predicting the world around us. However, in the 21st century, things we thought were true only a few years ago are routinely shown to be completely wrong or inadequate every day. So, a better question to guide the redesign of education in the 21st century may be “what should every person be able to do in the 21st century?”
In considering the basic goals of a reconceived education, it is important to address first and foremost the most basic of human needs. It is implicit that the basic physical necessities (air, water, food, shelter, security, etc.) will be provided before education is attempted. Then, every human has a basic need to know that s/he is the most important person on the planet to at least one other person. Humans are inherently social and the need for a degree of unconditional love and social acceptance is universal. In addition, every person has a fundamental need to be able to make sense of the world they live in; that is, they need a framework for understanding the events around them that is sufficient to predict the most obvious and basic features of life that they must deal with. Finally, every person has a basic need to feel significant and capable of making a difference in their world. The need for self-expression and creation are universal. Unpacking these needs in an educational context is the basic task of framing the redesign of education.
What Does Every Person Need to be Able to Do?
It is hard to see how a public education can fully meet the need for personal, unconditional love. It is implicit that this level of emotional support must come from parents and a home life that provides the foundation that frees one’s mind as a prerequisite for learning. Assuming this family support to be in place, every person then needs the ability to feel empathy for others, express not only ideas but feelings, and tell their story in a way that draws to them the kind of attention that leads to lasting and meaningful relationships within a social group. This should be the basic purpose of learning in the humanities and social sciences. Obtaining a much greater ability to work together with others will be fundamental to developing the necessary global cooperation in addressing the complex challenges of the future, elevating the need for effective learning in this area to much higher levels than is present today.
Next, in order to make sense of the world around them and construct a conceptual framework that provides useful predictions about their world, they need a combination of knowledge, attitudes (including values), behaviors (including skills), and motivations. It is important to note that education here is not strictly about knowledge, and it must involve learning how to independently construct understanding from direct observation as well as recorded information from the past. The physical world is certain to present more challenges in the next century than in the past. As a result, a more robust and authentic understanding of the STEM subjects is essential. However, instead of framing these as “natural sciences” we need to emphasize the integrated system of man-and-nature and the role of man-made or engineered devices and systems, since most people will have far more experience of the human engineered world than the pristine natural world. In order to achieve this, we simply must embrace experiential learning at an early age and expect higher levels of reasoning than memorization in math and science.
Learning how to learn will be critical in the 21st century. This requires a firm foundation of attitudes, behaviors and motivations. But motivations may be the most important of all. To make a positive difference in the world, people will need to have a positive outlook on life and believe that they can each make a difference. Without hope and a sense of self-efficacy, it will not be possible to be creative and innovative in developing new pathways to meet the challenges ahead. Research in innovation and creativity indicates that play is essential in young children to build the creative potential later in life. When successful, this eventually leads to the discovery of a personal passion that fuels an increase in intensity of play to motivate the development of personal vision and goals for achievement. This intrinsic motivation, coupled with the tools of independent learning skills, provides the perseverance needed to endure the arduous path to mastery or expertise in pursuit of the personal goals. Finally, with maturity and experience and a desire to make a positive difference in the world, this mastery and intrinsic motivation are directed toward purpose. Purpose-driven expertise is what it will take to solve the problems of the 21st century.
There are many pedagogies that may be successful in scaffolding this educational development. However, in order to nurture the attitudes, behaviors and motivations required for this type of education, it is clear that rote learning will be of limited value. Many forms of knowledge will become a commodity that is available online, freeing teachers to play a much more personal role in customizing the educational experience for each student. Working together in teams on projects that help build intrinsic motivation, together with the skill of learning how to learn through experimentation and experience, will be much more important than in past generations. Finally, learning how to develop meaningful relationships and eventually leadership through influence with groups of people that are from very different cultural/language/religious/geographical/economic backgrounds is essential. Without the ability to address the Grand Challenges of the 21st Century in a holistic, multi-disciplinary, globally cooperative approach, our species may not survive.
Copyright – Richard K. Miller |
In the early 1800s, the British government, motivated by profit and security, marched into the Southeast Asian nation of Burma, also known today as Myanmar. A Buddhist country rich in natural resources, Burma was an expansionist power that bordered India, one of Great Britain’s most prized colonies. Three Anglo-Burmese Wars were fought over a period of 60 years and Burmese territories were annexed as provinces of British India before the British government allowed Burma to be administered separately in 1937 (Harvey, 1946). In 1948, Burma finally gained its independence but the presence of the British colonists had inevitably transformed the nation, its government, society, and institutions. The education system in Burma was one of the areas in which profound changes had taken place. How did British colonization transform the Burmese education system during the mid-19th to early 20th centuries and how did nationalists respond to these foreign influences?
In the pre-colonial period, education and religion were inextricably linked as the Theravada Buddhist monastic order, or the Sangha, served as the main educational institution for the natives. After Burma was colonized, the British attempted to reform the existing system, initially by working to incorporate more secular subjects into the monastic curriculum and later by setting up a system of secular schools that could supply them with local administrators and civil servants and enable them to “civilize” the Burmese people. With the rise of the nationalistic spirit in the 20th century, the educated Burmese demanded education reforms and created national schools that endeavored to rebuild a sense of national identity.
Prior to the arrival of the British, few private schools existed except those established by Christian missionaries, and the local monasteries in the self-contained agricultural villages were the center of culture and served as schools for the Burmese boys. Due to religious restrictions set against women, girls were educated at home by parents who taught them basic literacy skills along with other skills related to efficacy in home duties and at the marketplace needed for business activities (Cady, 1958). The emphasis of monastic education was placed largely on learning and reciting religious Pali scriptures that would help the boys develop the skills required to eventually become monks (Fuqua, 1992). The strong connection between religion and schooling is reflected by the fact that the Burmese word for school (kyaung) is the same word used to refer to the monastery. Though the education was of a religious nature, the monastic schools ensured that Burma had a high literacy rate of about 60%, as the majority of Burmese men were at least able to read and write their basic letters (Harvey, 1946).
The monastery schools were completely independent from government control, and Buddhist monks, in addition to carrying out the duties of their office, acted as the schoolmasters, teaching the basics of reading, writing, and arithmetic. Their work was supported by voluntary gifts, donations, and alms from the villagers, allowing the monks to provide education free of charge to all boys in the village regardless of their class or religious background (Octennial Report, 1956). Some boys attended the school in the day and went home at night, while others temporarily became novices for a period of time and lived at the monastery. Rather than using exams or grades to categorize the boys, the monks grouped them by the lessons they had completed (Octennial Report, 1956). The main issue in these village school systems, however, was that attendance was irregular and some students dropped out after just acquiring basic literacy skills, as working in the fields took precedence over going to school.
In addition to teaching students the basics needed for literacy, the monastic education aimed to transmit the traditional cultural, moral, and religious values of the community and society (Cady, 1958). The monastic education system also contributed to the leveling of classes in society, as entrance was open to all; regardless of whether one was a prince or the son of a poor farmer, everyone enjoyed the same status and was subject to the same discipline (Cady, 1958). A more advanced level of education that addressed a wider array of subjects, such as Buddhist Studies, Classical Burmese literature, court protocol, engineering, construction, and manufacturing operations (Cady, 1958), could also be attained at some monastic centers in urban locations. This education enabled those who would become monks to build pagodas and monasteries, and those who did not to fill roles in the Burmese court.
Aside from monastic schools, a few other avenues of education were also available to Burmese males. Vocational education was learned in a hands-on manner, with students taking up actual apprenticeships (Harvey, 1946). The Burmese kings also sent men to Calcutta to attend higher learning institutions and acquire other sorts of training in the medical or technical fields (Furnivall, 1948).
Prior to 1854, the British had a laissez-faire policy in regard to education. The British had followed a policy of conciliation since the early 19th century, and to avoid confrontation with the local population they did not try to change the education system, which was linked to religion (Fuqua, 1992). However, as mentioned before, western education schools had already been established by Christian missionaries in the rural areas populated by non-Burman ethnic tribes, such as the Karens, Kachins, and Chins. While the missionaries' efforts with the Burmans and Shans, who were devout Buddhists, were met with resistance, they successfully educated some of the minority groups and also converted them to Christianity. These schools educated both male and female students. The efforts of the American Baptist mission schools, for instance, were so successful with the Karen that they eventually established a college for them in the city of Rangoon (Cady, 1958). These mission schools were an effective means of educating rural populations living in areas that were hard to reach due to geographical barriers, even after a formal school system began to emerge later.
Starting in 1854, the British authorities extended their influence into the education system. Their aim was to "convey useful and practical knowledge suited to every station in life to the great masses of people" as well as to "spread civilization" and remove superstitious prejudices (Fuqua, 1992). Aside from their liberal and humanitarian sentiments, they also hoped to use education to attach subjects more closely to British rule (Furnivall, 1948) and needed natives who were literate and fluent in English to fill positions as local administrators and subordinate civil servants (Hillman, 1946). Though they had already opened three Anglo-vernacular schools between 1885 and 1844 to educate English-speaking clerks, there was little demand for these schools, because fewer positions in government work were available at that time and most people continued going to monastic schools.
Initially, the British attempted to use the existing monastic system to fashion a rudimentary system of western-style primary education. As this was prior to the separation of Burma from India, this simply resulted in the imposition of educational policies in India on the Burmese system. It fell upon Sir Arthur Phayre, the Commissioner of British Burma, to combine the best of both worlds and incorporate secular subjects into the monastic system to create a westernized system similar to what was established in India (Fuqua, 1992). However, Phayre's attempts failed because they were resisted by the monks and because they failed to take into account the differences between the Indian and Burmese populations. Although a few monasteries were receptive to the idea of improving their curriculum and accepted secular textbooks from the British, most monasteries resisted the change. Monks refused to teach subjects like geography and science, which they considered to be evil, and "refused to play the layman, to be supervised by the layman, to keep lay attendance registers, to exercise lay discipline, and to use lay books" (Campbell, 1946). Although their resistance was in part due to religious reasons, it is also likely that they were reacting to being "systematically disenfranchised by the colonial state through its demolition of the pre-existing Buddhist political order" that was closely associated with the Burmese monarchy in the pre-colonial era (Cheesman, 2003). Phayre's efforts also failed because, unlike in India, where there was "no comprehensive egalitarian schooling managed by a single agency" and access to schooling was dependent on one's wealth, gender, and social status in the caste system, in Burma a system of monastic schooling that had a degree of independence already existed (Cheesman, 2003). This impeded the efforts of the British, who had no means of unifying and reaching out to the hundreds of monastic schools that were not under a central authority. Additionally, the British failed to consider the problem of fluctuations and irregularities in school attendance that was prevalent in monastic schools.
Given the disappointing results of trying to influence the Sangha and merge monastic education with western secular notions of schooling, the British administration changed its strategy. By 1871, the British authorities set up a system of lay schools under the control of a director of public instruction and his inspectors (Furnivall, 1948). Three main types of schools were established: vernacular schools, Anglo-vernacular schools, and English schools. These schools taught the 3 Rs as well as subjects in science, British history, and the British constitution. Grades 1 to 4 were designated as elementary school, grades 5 to 7 as middle school, and grades 8 to 10 as high school (Tinker, 1967). The language of instruction was Burmese in the vernacular schools and English in the English schools, while Anglo-vernacular schools used both languages for instruction until the 8th standard, after which English became the sole language of instruction (Tinker, 1967). Students had to pay a fee to attend these schools, and those who could not pay continued to attend monastic schools (Cheesman, 2003). Those who displayed high academic ability in vernacular schools were given financial aid, and other "bridge" program provisions were made for them to transfer into Anglo-vernacular schools (Campbell, 1946). By 1891, there were over 6000 lay schools opened in Burma (Fuqua, 1992). The opening of Rangoon University, the first higher education institution, in 1885 by the government was quickly followed by the opening of other universities (Hillman, 1946).
While the aim of the Sangha was to "teach the boys how to live but not merely how to make a living" (Furnivall, 1948), the modern schooling system based on western ideologies taught students skills that had market value and led them to contribute to the economy to the benefit of their colonizers. Students were trained for vocational or low-skill jobs so that they could enter the workforce and help the British maximize their economic profits, and the best of them attained higher education that allowed them to work in the colonial administration (Fuqua, 1992). The British education system did, however, have a positive effect on female education and increased female literacy, because women were permitted to enroll in these lay schools (Furnivall, 1948).
Later, in the 1870s, the opening of the Suez Canal accelerated Burma's economic growth, which consequently led to administrative expansion and a rise in demand for English schools (Hillman, 1946). The majority of the schools that were opened were vernacular schools, which only led to careers as vernacular school teachers or other low-paying manual jobs. The Burmese began to realize that they needed to enroll in Anglo-vernacular or English schools that would allow them to attend university and secure jobs in the administration and other government office jobs in the education, health, forestry, and agricultural sectors (Tinker, 1967). As the number of students in Anglo-vernacular and English schools increased, so did the enrollment in universities, leading to a new class of educated Burmese citizens. The desire for social advantage led to the rise in demand and popularity of the state-managed modern education system and to the decline of the monastic school system.
At the turn of the 20th century, the rise in the number of educated Burmese led to a nationalist movement that was inspired by a number of concurrent events. The reforms being implemented by the British in neighboring India and the Japanese victories against Russia opened up the possibility of successfully resisting their oppressors. Education received from the schooling system established by the British ironically contributed to the nationalist movement in two ways. First, while more people had earned higher degrees to enter government posts, most available posts for the Burmese had been filled by 1930. The frustration of university students was manifested in strikes and protests, contributing to the conditions of political unrest and economic decline in Burma (Hillman, 1946). In 1920, the university students began a national strike to protest against educational policies set by the British, who had raised the bar for university entrance requirements, marking the entry of students into national politics.
Second, education empowered the Burmese people to fight for liberation from their western colonizers; the Burmese who had gone abroad for further studies returned with both a realization of how they had been second-class citizens in their own country that was being exploited by the colonizers as well as new ideas about government and politics (Cady, 1958). A revival of interest in Burmese history, arts, and literature followed John Furnivall's organization of the Burma Research Society in 1909 (Cady, 1958). A nationalist group called the Young Men's Buddhist Association (YMBA), modeled after the Young Men's Christian Association, began to organize and was later joined by other university student-led groups such as the Thakin group and the We Burmans Association. They began to use print materials to mobilize nationalist sentiments across the nation (Cheesman, 2003).
The agendas of these groups were centered largely on issues of education (Schober, 2007). The Burmese began to realize that knowledge of Burmese literature had almost died out and that, aside from rural areas, English had become the main spoken language, as Burmese language and literacy were not sufficiently taught in schools attended by the majority of students (Schober, 2007). The YMBA, based on its model and actions, implicitly seemed to acknowledge that modernization was necessary and did not completely discard modern education. But at the same time, concerned about the influence of western education on the national identity, it tried to support schools that had Buddhism in the curriculum (Schober, 2007). The nationalists also supported monastic schools, petitioning the government to exempt these schools from taxation and discouraging costly religious rituals in these schools.
In the 1920s, Burmese nationalists began to open private schools, independent of government control, that fostered nationalistic ideals (Hillman, 1946). Despite popular support, these schools did not have sufficient funding and had to receive government support (Hillman, 1946). The YMBA agitated the government further to establish more national schools, independent of the British education system, where Burmese was the language of instruction (Fuqua, 1992). Their aim was to establish a system that could compete with the British system and eventually supplant it (Fuqua, 1992).
Please note that within the borders of Burma there exist a number of different ethnic minority groups who are identified separately from the main population of Burmans. In this paper, I refer to the entire population that lives in Burma as "Burmese" and to those of the majority ethnic group as "Burmans".
Cady, J. F. (1958). A history of modern Burma. Ithaca, NY: Cornell University Press.
Campbell, A. (1946). Education in Burma. Journal of the Royal Society of Arts, 49, 438-448.
Cheesman, N. (2003). School, State and Sangha in Burma. Comparative Education, 39(1), 45-63. Retrieved from http://www.jstor.org/stable/3099630
Furnivall, J. S. (1948). Colonial policy and practice a comparative study of Burma and Netherlands India,. Cambridge England: Cambridge University Press.
Fuqua, J. (1992). A comparison of Japanese and British colonial policy in Asia and their effect on indigenous educational systems through 1930 (Master's thesis). Retrieved from DTIC database. (Accession Order No. ADA2544560)
Harvey, G. E. (1946). British Rule in Burma, 1824-1942. London: Faber and Faber.
Hillman, O. (1946). Education in Burma. Journal of Negro Education, 15, 526-533. Retrieved from http://www.jstor.org/stable/2966118
Octennial Report on Education in Burma, 1947-48 to 1954-55. (1956). Rangoon: Supdt., Govt. Printing and Staty., Union of Burma.
Schober, J. (2007). Colonial knowledge and Buddhist education in Burma. In I. Harris (Ed.), Buddhism, power, and political order (pp. 52-70). London: Routledge.
Tinker, H. (1967). The Union of Burma: a study of the first years of independence. (4th ed.). London: issued under the auspices of the Royal Institute of International Affairs by Oxford U.P. |
Copper's properties are used to make pennies, as well as a great deal of electrical wiring. Copper is also the originating mineral from which malachite and azurite form.
Mining of copper ores is carried out using one of two methods. On ship hulls, copper's antifouling property prevented the heavy drag caused by the growth of weed.
This KS3 education resource looks at copper mining and extraction; copper is ductile and can prevent bacterial growth (see copper properties and applications).
Copper is found in many minerals that occur in deposits large enough to mine. These include azurite, malachite, chalcocite, acanthite, chalcopyrite, and bornite.
Learn about the geologic properties of copper: how much exists, how it is mined, and where to find copper deposits on Earth.
Copper is opaque, bright, and metallic salmon pink on freshly broken surfaces, but soon turns dull brown. Copper crystals are uncommon.
The name comes from the Greek "kyprios", of Cyprus, the location of ancient copper mines, via the Latin "cuprum". Copper belongs to the copper group of minerals.
Pure copper is soft, and an exposed surface develops a reddish-orange tarnish. Pure copper is rarely found in nature.
Copper is a chemical element with the symbol Cu (from the Latin cuprum) and atomic number 29. The concentration of copper in ores averages only 0.6%. Many electrical devices rely on copper wiring because of its multitude of inherent beneficial properties, such as its high electrical conductivity.
The rapid rise of farm mechanization that took place in the late 19th and early 20th centuries led to significant changes in farming. The introduction and refinement of machines such as tractors, together with synthetic fertilizers and pesticides, marked the beginning of intensive farm practices. As the global population continued to rise rapidly, global food demands increased significantly as well. Consequently, intensive farming became the response to the rising need to feed the global population.
Intensification of farming is the process of increasing the use of capital and labor in order to increase yields and farm profitability. Because of this, intensive farm practices are often presented as a solution for feeding the world. In an attempt to accommodate worldwide needs and increase farm productivity and profitability, intensive farmers rely heavily on the use of machinery, pesticides, synthetic fertilizers, and large-scale irrigation systems.
Intensive Farming as a Productive Farm Practice
With the optimal use of inputs, intensive farmers manage to produce more crops per land unit, making intensive farm practices a simple way to increase farm productivity. Intensive farming implies a monoculture system, meaning farmers simplify their farm management by growing the same crop over several years on the same field, enabling them to become highly skilled. The labor cost associated with this type of farming is also lower than that of other farming types.
The Other Side of Intensive Farming
While intensive farming was introduced as a solution to feeding the world, and farmers' productivity has indeed increased, its sustainability has become questionable. Scientists are becoming increasingly concerned about the environmental consequences of intensive farming. For instance, heavy use of pesticides reduces biodiversity and may have negative effects on helpful organisms. Possible negative effects on human health are also often questioned. Furthermore, pesticides and synthetic fertilizers cause pollution and poisoning of soil and water.
Intensive farm management practices are one of the main contributors to global deforestation. Slash and burn agriculture is one example of how the clearing of tropical forests in order to increase crop production can cause deforestation and soil erosion.
In aiming to achieve higher productivity, intensive farm practices contribute to climate change, leaving a dramatic footprint on the environment. Clearing land in order to make room for the growing of crops, intensive cattle raising, and overuse of fertilizer significantly contribute to global greenhouse gas emissions.
Sustainability of Intensive Farm Practices
There is much discussion about the sustainability of intensive farming. Sustainable crop production aims to increase yields while at the same time protecting valuable natural resources. With all of the negative consequences of intensive farm practices, it's obvious that this practice is not sustainable long term.
However, by transforming farm management intensive practices into a conservation type of agriculture, intensive farmers will be able to develop more sustainable crop production on their land. Furthermore, many intensive conventional farmers are beginning to recognize the benefits of organic farming and are introducing some organic practices into their farm management.
In order to reach Zero hunger, food production must be oriented towards sustainability. By protecting the environment, farmers are able to protect the most valuable production resources, thus protecting the global food supply. |
Networking is a way for devices to communicate. A network is made up of various technologies, such as computers, switches, and routers, that are interconnected and share data among themselves.
The two common network protocols are:
TCP/IP stands for Transmission Control Protocol/Internet Protocol, a protocol suite that allows reliable communication between two applications. TCP/IP governs end-to-end communication and specifies how data should be broken into packets, addressed, transmitted, routed, and received at the destination. TCP/IP is compatible with all operating systems. It is a highly scalable and routable protocol suite that searches for the most efficient path through the network. The two main protocols in the TCP/IP protocol suite serve specific functions:
- TCP: Lets applications create channels of communication across the network. It also manages how a message is broken into smaller packets that are transmitted over the Internet and re-assembled in the right order at the destination address (see the sketch after this list).
- IP: It defines how to address and route each packet to make sure it reaches the right destination.
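As a minimal sketch of what a reliable TCP channel looks like in code (assuming Python's standard socket module; the loopback address and port 9000 are hypothetical), the following opens a connection, sends a message, and receives it intact on the other side:

```python
import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical local endpoint

# Server side: listen for a connection. SOCK_STREAM selects TCP.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

# Client side: connect() performs the TCP handshake and opens a
# reliable, ordered byte channel to the server.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))
client.sendall(b"hello over tcp")

conn, addr = server.accept()   # pick up the queued connection
data = conn.recv(1024)         # bytes arrive intact and in order
print("server received:", data)

conn.close(); client.close(); server.close()
```

This works in a single script on localhost because the kernel completes the handshake for a listening socket even before accept() is called.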
UDP stands for User Datagram Protocol, a connectionless protocol that allows packets of data to be transmitted between applications. Because it is unreliable and connectionless, there is no need to establish a connection before transferring data. UDP permits packet drops instead of processing delayed packets, and it performs no error checking. UDP is more efficient than TCP in terms of both latency and bandwidth. It is useful when computer resources are limited, for the transmission of real-time packets (mainly in multimedia applications), and for simple request-response communication when the size of the data is small. It is a suitable protocol for multicasting, as UDP supports packet switching, and it is used by some routing update protocols. It is usually used for real-time applications that cannot tolerate uneven delays between sections of a received message.
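For contrast, here is a minimal UDP sketch (again assuming Python's standard socket module; port 9001 is hypothetical). Note there is no handshake: the sender simply fires a datagram at an address, and delivery is not guaranteed in general:

```python
import socket

HOST, PORT = "127.0.0.1", 9001  # hypothetical local endpoint

# SOCK_DGRAM selects UDP. The receiver just binds; no listen/accept.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

# The sender transmits without establishing a connection first.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over udp", (HOST, PORT))

data, addr = receiver.recvfrom(1024)  # one whole datagram, or nothing
print("received:", data, "from", addr)

sender.close(); receiver.close()
```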
Networking has the following modules:
- Clients and Servers.
- IP Addresses.
- A network hub, switches, and cables.
- Routers and firewalls.
Client and Server concept
Client-server is an essential architecture for computers to communicate with each other over the network.
- The server is the computer that hosts services and all the information related to clients, such as media files, databases, and websites (for example, the Google search page).
- It provides services in the form of webpages and sends them out when requested by the client.
- The client is a different computer that requests to view, download, or search content from the server, such as requesting the Google web page.
- A client connects to the server and other clients over a network to exchange the information and data.
Mechanism of a working client-server architecture
In a client-server architecture, the client computer sends a request for data to the server; the server accepts the request, processes it, and delivers the requested data packets back to the client. One unique feature is that the server computer has the potential to manage numerous clients at the same time.
On the network, the server can be located anywhere, and if the client knows its IP address, it can access its services efficiently. The client asks for services, such as a website, from the server computer. The respective site is delivered by the server and displayed in the client's web browser in the form of webpages.
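As a sketch of how one server process can manage numerous clients at the same time (assuming Python's socket and threading modules; the port and the canned response are hypothetical), each accepted connection can be handed to its own thread:

```python
import socket
import threading

HOST, PORT = "0.0.0.0", 8080  # hypothetical listening endpoint

def handle_client(conn, addr):
    # Serve one client: read its request, reply, and hang up.
    request = conn.recv(4096)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

def serve_forever():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        # Each client is served on its own thread, so a slow client
        # does not block the server from accepting new connections.
        threading.Thread(target=handle_client, args=(conn, addr),
                         daemon=True).start()

# serve_forever() would run until the process is interrupted.
```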
An IP address is a unique address that identifies a computer on the network. To direct data across a network, computers need to identify their destinations and origins; the IP address is this identification.
Two types of IP Address:
The IPv4 protocol provides 2^32 addresses. Addresses in IPv4 are 32 bits long, so there is a maximum of 4,294,967,296 unique addresses. An IPv4 address is a 32-bit binary number that contains two sub-addresses, the network and the host, with an imaginary boundary separating them. An IPv4 address is represented in dotted-decimal notation, with every eight bits (an octet) represented by a number from 0 to 255, each separated by a dot. An example of an IPv4 address is 192.168.17.43.
An IPv4 address contains two parts. The first part of the address specifies the network, and the second part specifies the host. A host represents a server offering resource information, services, and applications to users or other nodes on the network. A network uses various protocols and attaches various hosts for their inter-communication.
To avoid the address space issue and to increase the address size from 32 bits in IPv4 to 128 bits, IPv6 specifies 3.4×10^38 (2^128) unique addresses. These are reportedly enough addresses to assign one to every single host on the Internet. IPv6 addresses are shown in the form of eight sets of four hexadecimal digits, with each set separated by a colon. It looks like 2DAB:FFFF:0000:3EAE:01AA:00FF:DD72:2C4A. There are no defined classes in the IPv6 version.
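A quick way to see the network/host split and both address families is Python's standard ipaddress module (a minimal sketch; the /24 prefix is an assumed subnet mask for the example address):

```python
import ipaddress

# IPv4: split the example address into its network and host parts.
v4 = ipaddress.ip_interface("192.168.17.43/24")
print(v4.network)   # 192.168.17.0/24 -> the network portion
print(v4.ip)        # 192.168.17.43   -> the host on that network
print(ipaddress.IPv4Address("192.168.17.43").packed.hex())  # raw 32 bits

# IPv6: the same example address in full and compressed notation.
v6 = ipaddress.IPv6Address("2DAB:FFFF:0000:3EAE:01AA:00FF:DD72:2C4A")
print(v6.exploded)    # all eight 16-bit groups written out
print(v6.compressed)  # shorthand with zero groups collapsed
```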
A repeater operates at the physical layer. It is used to regenerate the signal before it becomes too weak or corrupted, so that the signal can be transmitted over the network. Repeaters do not amplify the signal; they copy the signal bit by bit and regenerate it at the original strength when the signal becomes weak.
Computers are connected via cables to create a network. But individual computers and cables alone were not able to make a good system, so hubs came into existence to overcome this issue. A cable is the medium that carries the communication signals.
There are three types of cables:
- Twisted-pair cable: A high-speed cable that transmits data at 1 Gbps or more.
- Coaxial cable: Coaxial cable resembles a TV installation cable. It is more expensive than twisted-pair cable and provides a high data transmission speed.
- Fiber optic cable: A high-speed cable that transmits data using a beam of light. It has a higher data transmission speed than the other cables but is also more expensive.
A hub connects multiple cables coming from different branches. Hubs do not filter data; data packets are sent to all connected devices, and the collision domain of all hosts connected through a hub remains one. A hub does not have the intelligence to find the best path for data packets, which leads to inefficiencies and wastage. It is a way to interconnect LANs. The hub amplifies the signal. If a device wants to communicate a message to many computers, it sends the message to the hub, and it is the hub's responsibility to resend the message to all the computers. A hub can slow down if many computers try to send messages again and again at the same time, which can overwhelm it.
- Hub works on the physical layer (Layer 1 of the OSI model).
- Hub works internally in bus topology.
- It has 2, 4, or 8 ports to establish communication between devices, allowing communication among a maximum of 8 devices.
- A hub is not an intelligent device. If computer 1 wants to communicate with computer 2, the hub transfers the message to all the computers attached to its ports.
- The broadcast message contains the sender and receiver addresses. Other devices can read it, but only the intended receiver accepts it.
- A hub forms a single collision domain, with a single shared path for communicating with other computers; if more than one computer sends packets at the same time, the packets collide.
- A hub can only be used in a LAN.
- A hub does not keep a table of the computers' MAC or IP addresses, which is why it broadcasts messages to all of them.
There are two types of Hub:
- Active hub: it requires electricity to run and acts as a repeater, amplifying the analog signal or regenerating the digital signal so that it can travel a long way.
- Passive hub: it does not require electricity. It does not amplify the signal; it only receives and forwards it.
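To make the flooding behavior concrete, here is a minimal, hypothetical Python sketch; the Hub class and its attach/receive methods are inventions for illustration, not an API from any real library.

```python
class Hub:
    """Layer-1 device: no address table, every frame is flooded."""

    def __init__(self):
        self.devices = {}  # port number -> attached device

    def attach(self, port, device):
        self.devices[port] = device

    def receive(self, in_port, frame):
        # A hub keeps no MAC table, so it repeats the frame out of
        # every port except the one it arrived on. All ports share a
        # single collision domain, so simultaneous senders collide.
        for port, device in self.devices.items():
            if port != in_port:
                device.receive(frame)
```

Every attached device sees every frame, which is exactly why hubs waste bandwidth and slow down under heavy load.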
A bridge operates at the data link layer. A bridge functions as a repeater, with the added ability to filter content by reading the MAC addresses of the source and destination. It is used to interconnect two LANs working on the same protocol.
Bridges are full-fledged packet switches that forward and filter frames using the LAN destination addresses. When a frame reaches a bridge interface, the bridge does not copy the frame onto all of the other interfaces; it examines the destination address of the frame and attempts to forward the frame toward that destination.
The bridge determines the 48-bit destination address of the packet and directs the packet only to the cable where the recipient resides, so each packet causes less congestion. A bridge may store (buffer) just enough bits to interpret the destination address, or it may buffer a whole packet and queue it for the correct outgoing link.
- It is an intelligent device. A bridge inspects incoming traffic and decides whether to forward or reject it.
- It checks the source and destination MAC addresses, not the IP addresses, since IP addresses are not used at the data link layer.
- A frame for an unknown destination is broadcast to all devices at first; afterwards, frames are forwarded only to the segment on the correct side, once the bridge has learned the MAC addresses of the devices on each port (a minimal sketch of this decision follows this list).
- It is used to connect multiple network segments or LAN segments.
- It can filter data traffic.
- Bridges decrease the traffic on the LAN by dividing it into two segments.
- It creates two collision domains, one per segment.
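A minimal sketch of that forward-or-filter decision for a two-segment bridge, with hypothetical method and field names, might look like this:

```python
class Bridge:
    """Layer-2 device joining two LAN segments, filtering by MAC address."""

    def __init__(self):
        self.mac_to_segment = {}  # learned: MAC address -> segment (0 or 1)

    def frame_arrived(self, segment, src_mac, dst_mac):
        # Learn which segment the sender lives on.
        self.mac_to_segment[src_mac] = segment

        known = self.mac_to_segment.get(dst_mac)
        if known is None:
            return "flood"    # unknown destination: send to the other segment
        if known == segment:
            return "filter"   # destination is local: drop, nothing to forward
        return "forward"      # destination is across the bridge
```

Filtering frames whose destination is local is what keeps each segment a separate collision domain.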
A switch is a networking device that works at the data link layer. It makes communication efficient: it does not forward packets that have errors, and it forwards good packets selectively to the correct port only. A network switch is used to connect multiple computers inside a LAN (Local Area Network). Network switches operate at the second layer (Data Link Layer) of the OSI model; their basic function is to forward layer 2 packets (Ethernet frames) from a source device to a destination device.
- It is an intelligent device and also works on the Data Link layer.
- The switch has multiple collision domains, so there is less chance of message collisions.
- It has full-duplex communication.
- It maintains a CAM table keyed by MAC address and communicates among all the devices on the basis of this table (see the sketch after this list).
- For an unknown destination it first broadcasts; once addresses are learned, it unicasts (or multicasts) instead.
- Every port of the switch is a separate collision domain.
- The switch has one Broadcasting domain.
- It has a maximum of 48 switch ports.
- Its lookup and forwarding decisions add some processing delay compared with a simple repeating device such as a hub.
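The CAM-table behavior can be sketched as follows; this is a simplified illustration (real switches also age out entries, handle VLANs, and so on), and the device interface is an assumption, not a standard API.

```python
class Switch:
    """Layer-2 device: learns a CAM table and forwards frames per port."""

    def __init__(self):
        self.cam = {}    # CAM table: MAC address -> port
        self.ports = {}  # port number -> attached device

    def attach(self, port, device):
        self.ports[port] = device

    def receive(self, in_port, src_mac, dst_mac, frame):
        self.cam[src_mac] = in_port  # learn the sender's port

        out_port = self.cam.get(dst_mac)
        if out_port is None:
            # Unknown destination: flood like a hub ("first broadcast").
            for port, device in self.ports.items():
                if port != in_port:
                    device.receive(frame)
        else:
            # Known destination: unicast to exactly one port, which is
            # why every switch port is its own collision domain.
            self.ports[out_port].receive(frame)
```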
Types of switches
- Store-and-forward switch: the switch buffers and verifies each frame before forwarding it. It is a little slower but very reliable.
- Cut-through switch: the switch reads only up to the frame's hardware (destination MAC) address before it starts forwarding. There is no error checking.
- Fragment-free switch: a method that attempts to retain the benefits of both store-and-forward and cut-through by checking the first 64 bytes of the frame, where most errors appear.
Routers and Firewalls
A router is a network device that routes data packets based on their IP addresses. The router is mainly a network layer device. A router connects WANs and LANs and maintains a dynamically updated routing table on which it bases its routing decisions. Routers are the first devices in your network that give you internet connectivity.
- It is a layer 3 device.
- It is a WAN device.
- The router is an internetworking device; it can communicate between two or more different networks.
- The router has a routing table and maintains network IDs and port numbers in this table. It forwards messages based on the network ID (see the sketch after this list).
- It is mostly used in WANs rather than LANs.
- In a router, every port has its own broadcast domain for broadcasting messages.
- It has 2, 4, or 8 ports.
- It is fast compared with the other devices above.
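A routing-table lookup amounts to choosing the most specific matching network. The sketch below shows longest-prefix matching with Python's ipaddress module; the table entries and port names are invented for illustration.

```python
import ipaddress

# Toy routing table: destination network -> outgoing port.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "port1",
    ipaddress.ip_network("10.1.0.0/16"): "port2",
    ipaddress.ip_network("0.0.0.0/0"): "port0",  # default route
}

def route(dst_ip):
    """Return the port of the matching network with the longest prefix."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in routing_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(route("10.1.2.3"))  # port2: the /16 is more specific than the /8
print(route("8.8.8.8"))   # port0: only the default route matches
```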
Firewalls are network security appliances that can filter users and keep outsiders out of private networks. Most small networks have a perimeter hardware firewall for controlling access and securing the local network from the outside world; for this reason, most perimeter firewalls also have routing capabilities.
While routers route traffic between two separate networks, firewalls monitor the traffic and block unauthorized traffic coming from the outside into your network. Some firewalls include antivirus mechanisms to protect your network from viruses and unwanted email.
There are also many software firewall programs you can install on your computer, such as McAfee Total Protection and Norton AntiVirus.
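At its simplest, the packet filtering a firewall performs is a first-match walk over an ordered rule list. The sketch below illustrates the idea; the rules and fields are hypothetical, not drawn from any particular product.

```python
import ipaddress

# Ordered rules: (source network, destination port or None for any, action).
RULES = [
    (ipaddress.ip_network("192.168.0.0/16"), 22, "allow"),  # internal SSH
    (ipaddress.ip_network("0.0.0.0/0"), 80, "allow"),       # public web
    (ipaddress.ip_network("0.0.0.0/0"), None, "deny"),      # default deny
]

def check_packet(src_ip, dst_port):
    """Return the action of the first rule that matches the packet."""
    src = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if src in network and (port is None or port == dst_port):
            return action
    return "deny"  # fail closed if nothing matched

print(check_packet("203.0.113.9", 80))  # allow: anyone may reach the web port
print(check_packet("203.0.113.9", 22))  # deny: SSH only from inside
```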
A gateway is a connecting device used to connect remote networks with the host network. Generally, it acts as an entry or exit point, and it usually operates at the application layer.
A gateway connects two networks that may work upon different networking models. It acts as a messenger agent that takes data from one system, interprets it, and transfers it to another system. Gateways are also referred to as protocol converters and can operate at any network layer.
Uses of a computer network
- Resource sharing
Resources such as programs, printers, and data are shared among the users on the network.
- Server-client model
Computer networking is used in the server-client model. A server is a central computer used to store information and is maintained by the system administrator. Clients are the machines used to access the information stored on the remote server.
- Communication medium
A computer network provides a communication medium among its users.
Computer networks are also important to businesses, which can conduct business over the internet.
development by Curtiss
The addition of a retractable landing wheel gear to a float seaplane or flying boat, also accomplished by Curtiss, created the amphibian aircraft capable of operating from land runways or water. A post-World War II development was the pantobase, or all-base, airplane incorporating devices for operating from water or from a variety of unprepared surfaces such as snow, ice, mud, and sod.
Several categories of aircraft are designed for takeoff from or landing in water. These include floatplanes, which are fitted with pontoons for operation on water; flying boats, in which the fuselage also serves as a hull for water travel; and amphibians, which are equipped to land on and take off from both land and water.
What's Up With All the Sheep?
Lesson 4 of 14
Objective: SWBAT compare and contrast familiar features of nursery rhymes. Student Objective: I can compare the nursery rhymes about sheep.
My plan for today is to teach, or for some students reteach, the nursery rhymes Mary Had a Little Lamb, Little Bo Peep, and Baa Baa Black Sheep. Rather than just teaching memorization of nursery rhymes, we are going to analyze the information at a kindergarten level by comparing and contrasting. I want to first build understanding of the vocabulary of the rhymes, since some of these were "written" hundreds of years ago, and the English language has changed a bit.
If you would please join me on the rug, I would like to share three nursery rhyme posters with you: Mary Had a Little Lamb, Little Bo Peep, and Baa Baa Black Sheep. As I read these posters, I would like you to read along with me.
Why do you think these are the posters that I chose to share today? It is because each of these rhymes has sheep in it.
Raise your hand if you have seen real sheep. What do you know about real sheep? Who takes care of sheep? Sometimes they are farmers, sometimes they are shepherds. Take a look at this picture. Do you see the shepherd's crook? This lets us know that Little Bo Peep was a shepherd. A girl shepherd is called a shepherdess.
Because nursery rhymes are fun and involve everyday activities, kindergarten children can relate to them. Making a personal connection through nursery rhymes helps children become better readers, and many authors assume that children know nursery rhymes. These rhymes have survived since the time of Shakespeare, and because the Common Core Standards include critical types of content for all students, including classic stories and the writings of Shakespeare, these memorizations and comparisons are a perfect venue for five-year-olds.
On the board, I have put a comparing and contrasting chart for us to fill in. Who can remember what it means to compare something? When we compare, we are looking at what is the same. So, when we compared these three rhymes earlier, we recognized that there were sheep in each rhyme. What does it mean to contrast? If compare means the same, contrast probably means...different.
Our chart has room for three nursery rhymes. We will be comparing and contrasting the three rhymes we heard today: Mary Had a Little Lamb, Little Bo Peep, and Baa Baa Black Sheep. I will be asking you some questions about the rhymes and then when you answer them, I will write your responses on the chart.
Let's talk about how these rhymes compare. (Two had girls, all of the sheep caused some problem, etc.) Now let's contrast. (The black sheep was sharing his wool, but the others weren't; Baa Baa Black Sheep didn't show his owner, etc.)
The children need to now take what they've learned and share it with the adults in the room. They will take their Nursery Rhyme booklets around to the adults and read the rhymes. The adults have been prompted to ask the children comparative questions, like "Who did a better job caring for her sheep?" "What did Mary do differently than the boy in Baa Baa Black Sheep?" "Which rhyme did you like best, and why?"
Because we have been working with nursery rhymes all week, I have made you a booklet of the nursery rhymes we have read so that you can take it home to read to your families. Won't they be surprised at how much you know?
Since we have some other adults in the room (parapro, Special Education teacher consultant, high school and parent volunteers, Title One teacher--not all at the same time, but throughout our day we have "push-in" programs), we are going to walk from table to table and ask you to read to us. This will give you extra practice, and we can listen to see if you need more help.
So how has Roman concrete outlasted the empire, while modern concrete mixtures erode within decades of being exposed to seawater?
Scientists have uncovered the chemistry behind how Roman sea walls and harbour piers resisted the elements, and what modern engineers could learn from it.
Romans built their sea walls from a mixture of lime (calcium oxide), volcanic rock and volcanic ash, according to a study published in the journal American Mineralogist.
Elements within the volcanic material reacted with sea water to strengthen the concrete structure and prevent cracks from growing over time.
“It’s the most durable building material in human history, and I say that as an engineer not prone to hyperbole,” Roman monument expert Phillip Brune told the Washington Post.
What is modern concrete made of?
Nowadays, we create concrete from a mixture of limestone, sandstone, ash, chalk, iron and clay.
Modern sea walls require steel reinforcements, and the concrete is designed not to change after it sets.
On the other hand, the Roman recipe was designed to reinforce itself over time.
“They spent a tremendous amount of work [on developing] this — they were very, very intelligent people,” study co-author Marie Jackson told The Guardian.
So how does it work?
Scientists previously discovered Roman concrete contained aluminous tobermorite, a rare mineral that is hard to produce.
The tobermorite formed within the Roman concrete early on, as seawater reacted with the mixture to generate heat.
Now a more detailed examination of the chemistry of the concrete showed significant amounts of that rare mineral growing out of another mineral naturally found in volcanic rock called phillipsite.
The long-term exposure of the concrete to seawater caused both the tobermorite and phillipsite to crystallise throughout the concrete.
These prevented cracks from forming, therefore reinforcing the concrete over time.
The researchers said this could lead to more environmentally friendly ways of modern concrete construction, but warned it may take years before the precise Roman mixture was discovered.
“I think [the research] opens up a completely new perspective for how concrete can be made,” Dr Jackson said.
“That what we consider corrosion processes can actually produce extremely beneficial mineral cement and lead to continued resilience, in fact, enhanced perhaps resilience over time.” |
The World Day Against Child Labour this year will focus particularly on the importance of quality education as a key step in tackling child labour. It is very timely to do so, as in 2015 the international community will be reviewing reasons for the failure to reach development targets on education and will be setting new goals and strategies.
On this year’s World Day Against Child Labour we call for:
- free, compulsory and quality education for all children at least to the minimum age for admission to employment and action to reach those presently in child labour;
- new efforts to ensure that national policies on child labour and education are consistent and effective;
- policies that ensure access to quality education and investment in the teaching profession.
Free, compulsory and quality education for all children at least to the minimum age for admission to employment and action to reach those presently in child labour
Many child labourers do not attend school at all. Others combine school and work but often to the detriment of their education. Lacking adequate education and skills, as adults former child labourers are more likely to end up in poorly paid, insecure work or to be unemployed. In turn there is a high probability that their own children will end up in child labour. Breaking this cycle of disadvantage is a global challenge and education has a key role to play.
Free and compulsory education of good quality up to the minimum age for admission to employment is a key tool in ending child labour. Attendance at school removes children, at least in part, from the labour market and lays the basis for the acquisition of employable skills needed for future gainful employment. The global youth employment crisis and the problems experienced by young people in making the school-to-work transition highlight the need for quality, relevant education which develops the skills necessary to succeed both in the labour market and in life generally.
In the Millennium Development Goals the United Nations set the target of ensuring that by 2015 all boys and girls complete a full course of primary education. We know now that this target will not be met. Recent UNESCO data on school enrollment indicates that 58 million children of primary school age and 63 million adolescents of junior secondary school age are still not enrolled in school. Many of those who are enrolled are not attending on a regular basis. As the international community reviews reasons for the failure to reach the targets, it is clear that the persistence of child labour remains a barrier to progress on education and development. If the problem of child labour is ignored, or if laws against it are not adequately enforced, children who should be in school will remain working instead. To make progress, national and local action is required to identify and reach out to those in child labour.
New efforts to ensure that national policies on child labour and education are consistent and effective
The ILO’s Convention No. 138 on the minimum age of employment emphasises the close relationship between education and the minimum age for admission to employment or work. It states that the minimum age “shall not be less than the age of completion of compulsory schooling and, in any case, shall not be less than 15 years.” However recent research suggests that only 60% of States that have fixed both a minimum age for admission to employment and an age for the end of compulsory education have aligned the two ages.
There is a clear need for greater coordination of national policies and strategies on issues of child labour and education. In this effort the ILO and other specialised agencies of the United Nations can play an important role in working with governments to identify the policies and financing requirements to tackle child labour.
Policies that ensure access to quality education and investment in the teaching profession
Education and training can be key drivers of social and economic development and they require investment. In many countries, however, the schools which are available to the poor are under-resourced. Wholly inadequate school facilities, large class sizes, and lack of trained teachers constrain rather than enable learning, and act as a disincentive to school attendance. For far too many children the provision of education stops at primary level simply because of the physical absence of accessible schools, particularly in rural areas. This inevitably leads to children entering the labour force well before the legal minimum age for admission to employment. National policies therefore need to ensure adequate investment in public education and training.
The ILO also supports the key people who deliver education: teachers. Together with UNESCO, the ILO promotes principles of quality teaching at all levels of education through Recommendations concerning teaching personnel. Ensuring a professional and competent teaching force with decent working conditions based on social dialogue is a vital step in delivering quality education.
Making progress – action required
Despite the challenges some progress has been made and more progress is possible. There has been a downward trend in child labour over the past 10 years and the numbers attending school have increased. However much more needs to be done to end child labour. The urgent need now is to learn from where progress has been made, and apply the lessons learned to significantly accelerate action. Among the most important steps required are:
- providing free, compulsory and quality education;
- ensuring that all girls and boys have a safe and quality learning environment;
- providing opportunities for older children who have so far missed out on formal schooling including through targeted vocational training programmes that also offer basic education support;
- ensuring coherence and enforcement of laws on child labour and school attendance;
- promoting social protection policies to encourage school attendance;
- having a properly trained, professional and motivated teaching force, with decent working conditions based on social dialogue;
- protecting young workers when they leave school and move into the workforce, preventing them being trapped in unacceptable forms of work.
Join with us on the World Day Against Child Labour 2015
The World Day is an opportunity to raise your voice against child labour and in the call for all children to have a right to education.
We would like to invite you and your organization to be part of the World Day. Join with us and add your voice to the worldwide movement against child labour.
For more information contact: unescocenterforpeacenys.org
In the broader age group of all children aged 5-17, 168 million children are estimated to be in child labour.
The early modern period (late 15th or 16th-18th centuries) in Catalan literature and historiography, while extremely productive for Castilian writers of the Siglo de Oro, has been termed La Decadència (Catalan pronunciation: [ɫə ðəkəˈðɛnsiə], Western Catalan: [la ðekaˈðɛnsia]; "The Decadence"), an era of decadence in Catalan literature and history, generally thought to be caused by a general falling into disuse of the vernacular language in cultural contexts and lack of patronage among the nobility, even in lands of the Crown of Aragon. This decadence is thought to accompany the general Castilianization of Spain and overall neglect for the Crown of Aragon's institutions after the dynastic union of the crowns of Castile and Aragon that resulted from the marriage of Ferdinand II of Aragon and Isabella I of Castile, a union finalized in 1474.
This is, however, a Romantic view made popular by writers and thinkers of the national awakening period known as Renaixença, in the 19th century. This presumed state of decadence is being contested with the appearance of recent cultural and literary studies showing there were indeed works of note in the period.
Historically, the decadent period refers to the decline of the thriving commercial Mediterranean empire that was the Crown of Aragon’s exclusive provenance, which was absorbed into the Trastámara and later the Habsburg dynasties. What this signified was that the thriving bourgeoisie and commerce of the Crown of Aragon became subject to the increasingly inward-looking and absolutist policies that characterized Castile (Elliott 34). The Catalan-Aragonese empire declined for several reasons: the outbreaks of the black plague in the fourteenth and fifteenth centuries that decimated the population; banking failures led to increased Italian involvement and loss of Mediterranean market share; the textile trade foundered; and, most importantly, the civil war of 1462-72 left the Crown of Aragon “a war-torn country, shorn of two of its richest provinces [Cerdanya and Roussillon], and its problems all unsolved” (Elliott 37-41). In other words, the decadence of the Crown of Aragon led directly to the ascendance of Castile and the Habsburg empire. During the time of the literary production of the Catalan baroque (approx. 1600-1740), it is important to note the growing opposition to the Habsburg monarchy and its absolutist policies, especially under the Conde-Duque de Olivares’ regime. Catalonia was a separate kingdom of the monarchy with its own institutions (the Diputació, Generalitat, Consell de Cent, etc.), liberties, exemptions, laws, and, of course, language. It was governed much like a colony, albeit a privileged one, yet one whose institutions and importance were being ignored if not openly attacked. Since the Habsburg monarchy was more of a federation of separate kingdoms than an absolutely centralized system of power, Olivares ran into serious problems of troop recruitment and financing his frequent military endeavors, as evidenced by his “Unión de armas” project begun in 1624, which never came to fruition. The year 1640—which Olivares described as “el más infeliz que esta Monarquía ha alcanzado” [the worst that this Monarchy has suffered] in a memorial—saw revolts both in Catalonia and Portugal. While the direct cause of the war was the billeting of Castilian troops in Catalonia for the war with France, it is clear that years of neglect for Catalan institutions and privileges also led to the conflict. Pau Claris declared Catalonia a republic under the protection of France in 1641. With further conflict looming on the horizon with the War of Succession that finally led to the abolition of all Catalan rights, privileges, and attempted to abolish the language itself with the Nueva Planta Decrees in 1714, these were dire times for Catalans; yet they were also times in which a new identity was being forged under the aegis of a new literary, linguistic, and national consciousness in which the writers of the baroque participated heavily. Writers such as Francesc Vicenç Garcia and Josep Romaguera wished to revitalize Catalan literary language by importing forms taken from the Castilian Baroque.
The ‘Decadència,’ however, refers to a period that is too conveniently capacious, as evidenced by Antoni Comas’ definition: “We call the period between the 15th-18th centuries the 'Decadence' in the field of Catalan literature or culture...it seems a dead period, but at the core is more lethargic than anything else" (La decadència 15) [translation]. Moreover, the most available English text on the subject is quite discouraging: Arthur Terry’s A Companion to Catalan Literature devotes fifty-two pages to Medieval and early renaissance literature, but only a total of eight to both the “Decadence” and the Enlightenment in Catalonia. Terry’s text is symptomatic of larger currents of traditional literary history, for it highlights the dual evils of Castilian imitation and Baroque excess, the main reasons literature of the “Decadència” has been vilified by Catalan Literary Historians from Martí de Riquer and Joaquim Molas (1964–88) to Terry (2003). Many literary historians are more interested in medieval or modern authors starting from the nineteenth-century movement known as the Renaixença that led to the movements known as noucentisme and modernisme (see below). However, what these authors and recent critics have prized most in Catalan literature is, naturally, its autochthonous qualities or “catalanness”; in other words, either the folkloric or innovative nature of this literary production. By contrast, baroque Catalan literature is imitative, not innovative. Moreover, Catalan baroque literature, so influenced and infiltrated by Castilian, blurs linguistic boundaries and cannot support absolutist nation-building projects based on differentiation, political, literary or linguistic exceptionalism important to nineteenth-century thinkers in a way that modern or medieval literature could.
A new generation of scholars led by Albert Rossich has begun to revise the prevailing views of early modern Catalan literature, even deliberately refusing to employ the term ‘Decadència’ in order to highlight its debilitating and contentious nature. Rossich’s article, “És valid avui el concepte de decadència de la cultura catalana a l’època moderna?” [Is the concept of the decadence of Catalan culture in the modern period valid today?], critically reexamines the so-called “Decadència” and concludes that it results from critics' own presuppositions. As a reconstructed fiction, “per provar que hi va haver una decadència cultural i literària hi ha d’haver ganes de veure-ho així” (128) [to prove that there was a cultural and literary decadence there must be a desire to see it so]. Another problem Rossich associates with traditional literary history and criticism is the basis of the appellation ‘decadence’ on the supposed lack of imaginative literature alone, eliminating from the analysis not only scientific and linguistic texts but also literature by Catalans in other languages. Moreover, for Rossich we err by eliding “conceptist” or “gongoresque” poetry because it imitates Castilian models, when these same poets consciously based themselves on forms previously imported from Italy—and, one might add, on the Valencian poet Ausiàs March, a known influence on Castilian authors writing in Castilian such as Juan Boscán and Garcilaso de la Vega. Perhaps the worst problem with the narrative of the ‘Decadència’ is that it discourages people from studying the period to which it refers. The "decadence" of Catalan literature in the early modern period, therefore, depends on one's presuppositions and point of view. The "decadent" aspect of this period is in part a construction of the writers and critics of the Renaixença that aims at establishing a clear difference between the movements.
Authors and works
Important authors writing in Catalan during the early modern period include Francesc Fontanella, Francesc Vicenç Garcia, and Josep Romaguera. Both Fontanella and Vicenç Garcia wrote theatrical and poetic works, including sonnet sequences, religious verse, and even erotica. Romaguera was renowned for his oratory skills, conserved in sermons published in Castilian, as well as the only book of emblems ever published in Catalan, the Atheneo de Grandesa (1681). The earlier work Tirant lo Blanc by Joanot Martorell (1490) was considered the best chivalric romance by Cervantes in his Don Quixote, which was quite an influence for writers of the period. Other works of the early modern period include popular poetry such as goigs and broadsides.
- Elliott, J.H. The Revolt of the Catalans: A Study in the Decline of Spain. Cambridge: Cambridge UP, 1963.
A galley is a type of ship that is propelled mainly by rowing. The galley is characterized by its long, slender hull, shallow draft and low freeboard (clearance between sea and railing). Virtually all types of galleys had sails that could be used in favorable winds, but human strength was always the primary method of propulsion. This allowed galleys to navigate independently of winds and currents. The galley originated among the seafaring civilizations around the Mediterranean Sea in the late second millennium BC and remained in use in various forms until the early 19th century in warfare, trade and piracy.
Galleys were the warships used by the early Mediterranean naval powers, including the Greeks, Phoenicians and Romans. They remained the dominant types of vessels used for war and piracy in the Mediterranean Sea until the last decades of the 16th century. As warships, galleys carried various types of weapons throughout their long existence, including rams, catapults and cannons, but also relied on their large crews to overpower enemy vessels in boarding actions. They were the first ships to effectively use heavy cannons as anti-ship weapons. As highly efficient gun platforms they forced changes in the design of medieval seaside fortresses as well as refinement of sailing warships.
The zenith of galley usage in warfare came in the late 16th century with battles like that at Lepanto in 1571, one of the largest naval battles ever fought. By the 17th century, however, sailing ships and hybrid ships like the xebec displaced galleys in naval warfare. They were the most common warships in the Atlantic Ocean during the Middle Ages, and later saw limited use in the Caribbean, the Philippines and the Indian Ocean in the early modern period, mostly as patrol craft to combat pirates. From the mid-16th century galleys were in intermittent use in the Baltic Sea, with its short distances and extensive archipelagoes. There was a minor revival of galley warfare in the 18th century in the wars among Russia, Sweden and Denmark.
Definition and terminology
The term "galley" derives from the medieval Greek galea, a smaller version of the dromon, the prime warship of the Byzantine navy. The origin of the Greek word is unclear but could possibly be related to galeos, dogfish shark. The word "galley" has been attested in English from c. 1300 and has been used in most European languages from around 1500 both as a general term for oared warships, and from the Middle Ages and onwards more specifically for the Mediterranean-style vessel. It was only from the 16th century that a unified galley concept came into use. Before that, particularly in antiquity, there was a wide variety of terms used for different types of galleys. In modern historical literature, "galley" is occasionally used as a general term for various types of oared vessels larger than boats, though the "true" galley is defined as the ships belonging to the Mediterranean tradition.
Ancient galleys were named according to the number of oars, the number of banks of oars or lines of rowers. The terms are based on contemporary language use combined with more recent compounds of Greek and Latin words. The earliest Greek single-banked galleys are called triaconters (from triakontoroi, "thirty-oars") and penteconters (pentēkontoroi, "fifty-oars"). For later galleys with more than one row of oars, the terminology is based on Latin numerals with the suffix -reme from rēmus, "oar". A monoreme has one bank of oars, a bireme two and a trireme three. Since the maximum number of banks of oars was three, any expansion above that did not refer to additional banks of oars, but to additional rowers for every oar. Quinquereme (quintus + rēmus) was literally a "five-oar", but actually meant that there were several rowers to certain banks of oars which made up five lines of oar handlers. For simplicity, many modern scholars refer to them as "fives", "sixes", "eights", "elevens", etc. Anything above six or seven rows of rowers was not common, though even a very exceptional "forty" is attested in a contemporary source. Any galley with more than three or four lines of rowers is often referred to as a "polyreme".
Historian Lionel Casson has used the term "galley" to describe all North European shipping in the early and high Middle Ages, including Viking merchants and even their famous longships, though this is rare. Oared military vessels built on the British Isles in the 11th to 13th centuries were based on Scandinavian designs, but were nevertheless referred to as "galleys". Many of them were similar to birlinns, close relatives of longship types like the snekkja. By the 14th century, they were replaced with balingers in southern Britain while longship-type "Irish galleys" remained in use throughout the Middle Ages in northern Britain.
Medieval and early modern galleys used a different terminology than their ancient predecessors. Names were based on the changing designs that evolved after the ancient rowing schemes were forgotten. Among the most important is the Byzantine dromon, the predecessor to the Italian galea sottila. This was the first step toward the final form of the Mediterranean war galley. As galleys became an integral part of an advanced, early modern system of warfare and state administration, they were divided into a number of ranked grades based on the size of the vessel and the number of its crew. The most basic types were the following: large commander "lantern galleys", half-galleys, galiots, fustas, brigantines and fregatas. Naval historian Jan Glete has described this as a sort of predecessor of the later rating system of the Royal Navy and other sailing fleets in Northern Europe.
The French navy and the British Royal Navy built a series of "galley frigates" from c. 1670–1690 that were small two-decked sailing cruisers with a set of oarports on the lower deck. The three British galley frigates also had distinctive names - James Galley, Charles Galley and Mary Galley. In the late 18th century, the term "galley" was in some contexts used to describe minor oared gun-armed vessels which did not fit into the category of the classic Mediterranean type. In North America, during the American Revolutionary War and other wars with France and Britain, the early US Navy and other navies built vessels that were called "galleys" or "row galleys", though they were actually brigantines or Baltic gunboats. This type of description was more a characterization of their military role, and was in part due to technicalities in administration and naval financing.
Among the earliest known watercraft were canoes made from hollowed-out logs, the earliest ancestors of galleys. Their narrow hulls required them to be paddled in a fixed sitting position facing forwards, a less efficient form of propulsion than rowing with proper oars, facing backwards. Seagoing paddled craft have been attested by finds of terracotta sculptures and lead models in the region of the Aegean Sea from the 3rd millennium BC. However, archaeologists believe that the Stone Age colonization of islands in the Mediterranean around 8,000 BC required fairly large, seaworthy vessels that were paddled and possibly even equipped with sails. The first evidence of more complex craft that are considered to be prototypes of later galleys comes from Ancient Egypt during the Old Kingdom (c. 2700–2200 BC). Under the rule of pharaoh Pepi I (2332–2283 BC) these vessels were used to transport troops to raid settlements along the Levantine coast and to ship back slaves and timber. During the reign of Hatshepsut (c. 1479–57 BC), Egyptian galleys traded in luxuries on the Red Sea with the enigmatic Land of Punt, as recorded on wall paintings at the Mortuary Temple of Hatshepsut at Deir el-Bahari.
Shipbuilders, probably Phoenician, a seafaring people who lived on the southern and eastern coasts of the Mediterranean, were the first to create the two-level galley that would be widely known under its Greek name, diērēs, or bireme. Even though the Phoenicians were among the most important naval civilizations in early Antiquity, little detailed evidence has been found concerning the types of ships they used. The best depictions found so far have been small, highly stylized images on seals which depict crescent-shape vessels equipped with one mast and banks of oars. Colorful frescoes on the Minoan settlement on Santorini (c. 1600 BC) show more detailed pictures of vessels with ceremonial tents on deck in a procession. Some of these are rowed, but others are paddled with men laboriously bent over the railings. This has been interpreted as a possible ritual reenactment of more ancient types of vessels, alluding to a time before rowing was invented, but little is otherwise known about the use and design of Minoan ships.
In the earliest days of the galley, there was no clear distinction between ships of trade and war other than their actual usage. River boats plied the waterways of ancient Egypt during the Old Kingdom (2700–2200 BC) and seagoing galley-like vessels were recorded bringing back luxuries from across the Red Sea in the reign of pharaoh Hatshepsut. Fitting rams to the bows of vessels sometime around the 8th century BC resulted in a distinct split in the design of warships, and set trade vessels apart, at least when it came to use in naval warfare. The Phoenicians used galleys for transports that were less elongated, carried fewer oars and relied more on sails. Carthaginian galley wrecks found off Sicily that date to the 3rd or 2nd century BC had a length to breadth ratio of 6:1, proportions that fell between the 4:1 of sailing merchant ships and the 8:1 or 10:1 of war galleys. Merchant galleys in the ancient Mediterranean were intended as carriers of valuable cargo or perishable goods that needed to be moved as safely and quickly as possible.
The first Greek galleys appeared around the second half of the 2nd millennium BC. In the epic poem, the Iliad, set in the 12th century BC, galleys with a single row of oarsmen were used primarily to transport soldiers to and from various land battles. The first recorded naval battle, the battle of the Delta between Egyptian forces under Ramesses III and the enigmatic alliance known as the Sea Peoples, occurred as early as 1175 BC. It is the first known engagement between organized armed forces, using sea vessels as weapons of war, though primarily as fighting platforms. It was distinguished by being fought against an anchored fleet close to shore with land-based archer support.
The first true Mediterranean galleys usually had between 15 and 25 pairs of oars and were called triaconters or penteconters, literally "thirty-" and "fifty-oared", respectively. Not long after they appeared, a third row of oars was added by the addition to a bireme of an outrigger, a projecting construction that gave more room for the projecting oars. These new galleys were called triērēs ("three-fitted") in Greek. The Romans later called this design the triremis, trireme, the name it is today best known under. It has been hypothesized that early types of triremes existed as early as 700 BC, but the earliest conclusive literary reference dates to 542 BC. With the development of triremes, penteconters disappeared altogether. Triaconters were still used, but only for scouting and express dispatches.
The first warships
The earliest use for galleys in warfare was to ferry fighters from one place to another, and until the middle of the 2nd millennium BC had no real distinction from merchant freighters. Around the 14th century BC, the first dedicated fighting ships were developed, sleeker and with cleaner lines than the bulkier merchants. They were used for raiding, capturing merchants and for dispatches. During this early period, raiding became the most important form of organized violence in the Mediterranean region. Maritime classicist historian Lionel Casson used the example of Homer's works to show that seaborne raiding was considered a common and legitimate occupation among ancient maritime peoples. The later Athenian historian Thucydides described it as having been "without stigma" before his time.
The development of the ram sometime before the 8th century BC changed the nature of naval warfare, which had until then been a matter of boarding and hand-to-hand fighting. With a heavy projection at the foot of the bow, sheathed with metal, usually bronze, a ship could incapacitate an enemy ship by punching a hole in its planking. The relative speed and nimbleness of ships became important, since a slower ship could be outmaneuvered and disabled by a faster one. The earliest designs had only one row of rowers that sat in undecked hulls, rowing against tholes, or oarports, that were placed directly along the railings. The practical upper limit for wooden constructions fast and maneuverable enough for warfare was around 25-30 oars per side. By adding another level of oars, a development that occurred no later than c. 750 BC, the galley could be made shorter with as many rowers, while making them strong enough to be effective ramming weapons.
The emergence of more advanced states and intensified competition between them spurred on the development of advanced galleys with multiple banks of rowers. During the middle of the first millennium BC, the Mediterranean powers developed successively larger and more complex vessels, the most advanced being the classical trireme with up to 170 rowers. Triremes fought several important engagements in the naval battles of the Greco-Persian Wars (502–449 BC) and the Peloponnesian War (431–404 BC), including the battle of Aegospotami in 405 BC, which sealed the defeat of Athens by Sparta and its allies. The trireme was an advanced ship that was expensive to build and, due to its large crew, expensive to maintain. By the 4th century BC it represented the latest in warship technology, and only a sizable state with an advanced economy and administration could build and employ such ships. Triremes also required considerable skill to row, and oarsmen were mostly free citizens who had years of experience at the oar.
Hellenistic era and rise of the Republic
As civilizations around the Mediterranean grew in size and complexity, both their navies and the galleys that made up their numbers became successively larger. The basic design of two or three rows of oars remained the same, but more rowers were added to each oar. The exact reasons are not known, but the change is believed to have been driven by the addition of more troops and the use of more advanced ranged weapons on ships, such as catapults. The size of the new naval forces also made it difficult to find enough skilled rowers for the one-man-per-oar system of the earliest triremes. With more than one man per oar, a single skilled rower could set the pace for the others to follow, meaning that more unskilled rowers could be employed.
The successor states of Alexander the Great's empire built galleys that were like triremes or biremes in oar layout, but manned with additional rowers for each oar. The ruler Dionysius I of Syracuse (ca. 432–367 BC) is credited with pioneering the "five" and "six", meaning five or six rows of rowers plying two or three rows of oars. Ptolemy II (283–246 BC) is known to have built a large fleet of very large galleys with several experimental designs rowed by everything from 12 up to 40 rows of rowers, though most of these are considered to have been quite impractical. Fleets with large galleys were put in action in conflicts such as the Punic Wars (264–146 BC) between the Roman republic and Carthage, which included massive naval battles with hundreds of vessels and tens of thousands of soldiers, seamen and rowers.
Most of the surviving documentary evidence comes from Greek and Roman shipping, though it is likely that merchant galleys all over the Mediterranean were highly similar. In Greek they were referred to as histiokopos ("sail-oar-er") to reflect that they relied on both types of propulsion. In Latin they were called actuaria (navis) ("ship that moves"), stressing that they were capable of making progress regardless of weather conditions. As an example of their speed and reliability, in one instance of his famous "Carthago delenda est" speech, Cato the Elder demonstrated the close proximity of the Roman arch enemy Carthage by displaying a fresh fig to his audience that he claimed had been picked in North Africa only three days earlier. Other cargoes carried by galleys were honey, cheese, meat and live animals intended for gladiator combat. The Romans had several types of merchant galleys that specialized in various tasks, out of which the actuaria with up to 50 rowers was the most versatile, including the phaselus (lit. "bean pod") for passenger transport and the lembus, a small-scale express carrier. Many of these designs continued to be used until the Middle Ages.
Roman Imperial era
The Battle of Actium in 31 BC between the forces of Augustus and Mark Antony marked the peak of the Roman fleet arm. After Augustus' victory at Actium, most of the Roman fleet was dismantled and burned. The Roman civil wars were fought mostly by land forces, and from the 160s until the 4th century AD, no major fleet actions were recorded. During this time, most of the galley crews were disbanded or employed for entertainment purposes in mock battles or in handling the sail-like sun-screens in the larger Roman arenas. What fleets remained were treated as auxiliaries of the land forces, and galley crewmen called themselves milites, "soldiers", rather than nautae, "sailors".
The Roman galley fleets were turned into provincial patrol forces that were smaller and relied largely on liburnians, compact biremes with 25 pairs of oars. These were named after an Illyrian tribe known by Romans for their sea roving practices, and these smaller craft were based on, or inspired by, their vessels of choice. The liburnians and other small galleys patrolled the rivers of continental Europe and reached as far as the Baltic, where they were used to fight local uprisings and assist in checking foreign invasions. The Romans maintained numerous bases around the empire: along the rivers of Central Europe, chains of forts along the northern European coasts and the British Isles, Mesopotamia and North Africa, including Trabzon, Vienna, Belgrade, Dover, Seleucia and Alexandria. Few actual galley battles in the provinces are found in records. One action in 70 AD at the unspecified location of the "Island of the Batavians" during the Batavian Rebellion was recorded, and included a trireme as the Roman flagship. The last provincial fleet, the classis Britannica, was reduced by the late 200s, though there was a minor upswing under the rule of Constantine (272–337). His rule also saw the last major naval battle of the unified Roman Empire (before the permanent split into Western and Eastern [later "Byzantine"] Empires), the battle of Hellespont of 324. Some time after Hellespont, the classical trireme fell out of use, and its design was forgotten.
A transition from galley to sailing vessels as the most common types of warships began in the high Middle Ages (c. 11th century). Large high-sided sailing ships had always been formidable obstacles for galleys. To low-freeboard oared vessels, the bulkier sailing ships, the cog and the carrack, were almost like floating fortresses, being difficult to board and even harder to capture. Galleys remained useful as warships throughout the entire Middle Ages because of their maneuverability. Sailing ships of the time had only one mast, usually with just a single, large square sail. This made them cumbersome to steer and it was virtually impossible to sail into the wind direction. Galleys therefore were still the only ship type capable of coastal raiding and amphibious landings, both key elements of medieval warfare.
In the eastern Mediterranean, the Byzantine Empire struggled with the incursion from invading Muslim Arabs from the 7th century, leading to fierce competition, a buildup of fleet, and war galleys of increasing size. Soon after conquering Egypt and the Levant, the Arab rulers built ships highly similar to Byzantine dromons with the help of local Coptic shipwrights from former Byzantine naval bases. By the 9th century, the struggle between the Byzantines and Arabs had turned the Eastern Mediterranean into a no man's land for merchant activity. In the 820s Crete was captured by Andalusian Muslims displaced by a failed revolt against the Emirate of Cordoba, turning the island into a base for (galley) attacks on Christian shipping until the island was recaptured by the Byzantines in 960.
In the western Mediterranean and Atlantic, the division of the Carolingian Empire in the late 9th century brought on a period of instability, meaning increased piracy and raiding in the Mediterranean, particularly by newly arrived Muslim invaders. The situation was worsened by raiding Scandinavian Vikings who used longships, vessels that in many ways were very close to galleys in design and functionality and also employed similar tactics. To counter the threat, local rulers began to build large oared vessels, some with up to 30 pairs of oars, that were larger, faster and with higher sides than Viking ships. Scandinavian expansion, including incursions into the Mediterranean and attacks on both Muslim Iberia and even Constantinople itself, subsided by the mid-11th century. By this time, greater stability in merchant traffic was achieved by the emergence of Christian kingdoms such as those of France, Hungary and Poland. Around the same time, Italian port towns and city states, like Venice, Pisa and Amalfi, rose on the fringes of the Byzantine Empire as it struggled with eastern threats.
After the advent of Islam and the Muslim conquests of the 7th and 8th century, the old Mediterranean economy collapsed and the volume of trade went down drastically. The Eastern Roman (Byzantine) Empire neglected to revive overland trade routes but depended on keeping the sea lanes open to hold the empire together. Bulk trade fell around 600-750 while the luxury trade increased. Galleys remained in service, but were profitable mainly in the luxury trade, which offset their high maintenance cost. In the 10th century, there was a sharp increase in piracy which resulted in larger ships with more numerous crews. These were mostly built by the growing city-states of Italy which were emerging as the dominant sea powers, including Venice, Genoa and Pisa. Inheriting the Byzantine ship designs, the new merchant galleys were similar to dromons, but without any heavy weapons and both faster and wider. They could be manned by crews of up to 1,000 men and were employed in both trade and warfare. A further boost to the development of the large merchant galleys was the upswing in Western European pilgrims traveling to the Holy Land.
In Northern Europe, Viking longships and their derivations, knarrs, dominated trading and shipping, though developed separately from the Mediterranean galley tradition. In the South, galleys continued to be useful for trade even as sailing vessels evolved more efficient hulls and rigging; since they could hug the shoreline and make steady progress when winds failed, they were highly reliable. The zenith in the design of merchant galleys came with the state-owned great galleys of the Venetian Republic, first built in the 1290s. These were used to carry the lucrative trade in luxuries from the east such as spices, silks and gems. They were in all respects larger than contemporary war galleys (up to 46 m) and had a deeper draft, with more room for cargo (140-250 t). With a full complement of rowers ranging from 150 to 180 men, all available to defend the ship from attack, they were also very safe modes of travel. This attracted a business of carrying affluent pilgrims to the Holy Land, a trip that could be accomplished in as little as 29 days on the route Venice-Jaffa, despite landfalls for rest and watering or for respite from rough weather.
Development of the true galley
Late medieval maritime warfare was divided in two distinct regions. In the Mediterranean galleys were used for raiding along coasts, and in the constant fighting for naval bases. In the Atlantic and Baltic there was greater focus on sailing ships that were used mostly for troop transport, with galleys providing fighting support. Galleys were still widely used in the north and were the most numerous warships used by Mediterranean powers with interests in the north, especially the French and Iberian kingdoms.
During the 13th and 14th century, the galley evolved into the design that was to remain essentially the same until it was phased out in the early 19th century. The new type descended from the ships used by Byzantine and Muslim fleets in the early Middle Ages. These were the mainstay of all Christian powers until the 14th century, including the great maritime republics of Genoa and Venice, the Papacy, the Hospitallers, Aragon and Castile, as well as by various pirates and corsairs. The overall term used for these types of vessels was gallee sottili ("slender galleys"). The later Ottoman navy used similar designs, but they were generally faster under sail, and smaller, but slower under oars. Galley designs were intended solely for close action with hand-held weapons and projectile weapons like bows and crossbows. In the 13th century the Iberian Crown of Aragon built several fleets of galleys with high castles, manned with Catalan crossbowmen, and regularly defeated numerically superior Angevin forces.
From the first half of the 14th century the Venetian galere da mercato ("merchantman galleys") were being built in the shipyards of the state-run Arsenal as "a combination of state enterprise and private association, the latter being a kind of consortium of export merchants", as Fernand Braudel described them. The ships sailed in convoy, defended by archers and slingsmen (ballestieri) aboard, and later carrying cannons. In Genoa, the other major maritime power of the time, galleys and ships in general were produced more by smaller private ventures.
In the 14th and 15th centuries merchant galleys traded high-value goods and carried passengers. Major routes in the time of the early Crusades carried the pilgrim traffic to the Holy Land. Later routes linked ports around the Mediterranean, between the Mediterranean and the Black Sea (a grain trade soon squeezed off by the Turkish capture of Constantinople, 1453) and between the Mediterranean and Bruges— where the first Genoese galley arrived at Sluys in 1277, the first Venetian galere in 1314— and Southampton. Although primarily sailing vessels, they used oars to enter and leave many trading ports of call, the most effective way of entering and leaving the Lagoon of Venice. The Venetian galera, beginning at 100 tons and built as large as 300, was not the largest merchantman of its day, when the Genoese carrack of the 15th century might exceed 1000 tons. In 1447, for instance, Florentine galleys planned to call at 14 ports on their way to and from Alexandria. The availability of oars enabled these ships to navigate close to the shore where they could exploit land and sea breezes and coastal currents, to work reliable and comparatively fast passages against the prevailing wind. The large crews also provided protection against piracy. These ships were very seaworthy; a Florentine great galley left Southampton on 23 February 1430 and returned to its port at Pisa in 32 days. They were so safe that merchandise was often not insured. These ships increased in size during this period, and were the template from which the galleass developed.
Transition to sailing ships
During the early 15th century, sailing ships began to dominate naval warfare in northern waters. While the galley still remained the primary warship in southern waters, a similar transition had begun also among the Mediterranean powers. A Castilian naval raid on the island of Jersey in 1405 became the first recorded battle in which a Mediterranean power employed a naval force consisting mostly of cogs or nefs rather than oar-powered galleys. The battle of Gibraltar between Castile and Portugal in 1476 was another important sign of change: it was the first recorded battle in which the primary combatants were full-rigged ships armed with wrought-iron guns on the upper decks and in the waists, foretelling the slow decline of the war galley.
The transition from the Mediterranean war galley to the sailing vessel as the preferred warship of the Mediterranean is tied directly to technological developments, above all changing sail design and the introduction of cannons aboard vessels, and to the inherent handling characteristics of each vessel type.
The sailing vessel was always at the mercy of the wind for propulsion, and those that did carry oars were placed at a disadvantage because they were not optimized for oar use. The galley had its own disadvantages compared to the sailing vessel, however. Its smaller hull could not hold as much cargo, which limited its range, as the crew had to replenish foodstuffs more frequently. The low freeboard of the galley meant that in close action with a sailing vessel, the sailing vessel would usually maintain a height advantage. The sailing vessel could also fight more effectively farther out at sea and in rougher wind conditions because of its higher freeboard.
Under sail, an oared warship was placed at much greater risk because the piercings for the oars had to be near the waterline and would let water flood into the galley if the vessel heeled too far to one side. These advantages and disadvantages led the galley to remain a primarily coastal vessel. The shift to sailing vessels in the Mediterranean was the result of the negation of some of the galley's advantages as well as the adoption of gunpowder weapons on a much larger institutional scale. The sailing vessel was propelled in a different manner than the galley, but the tactics were often the same until the 16th century. The real estate the sailing vessel afforded for larger cannons and other armament mattered little at first, because early gunpowder weapons had limited range and were expensive to produce. The eventual introduction of cast iron cannons allowed vessels and armies to be outfitted much more cheaply, and the cost of gunpowder also fell in this period.
The armament of both vessel types varied between larger weapons such as bombards and the smaller swivel guns. For logistical purposes it became convenient for those with larger shore establishments to standardize upon a given size of cannon. Traditionally, the English in the north and the Venetians in the Mediterranean are seen as some of the earliest to move in this direction. The improving sail rigs of northern vessels also allowed them to navigate in the coastal waters of the Mediterranean to a much larger degree than before. Aside from warships, the decrease in the cost of gunpowder weapons also led to the arming of merchants. The larger vessels of the north continued to mature while the galley retained its defining characteristics. Attempts were made to stave this off, such as the addition of fighting castles in the bow, but such additions to counter the threats posed by larger sailing vessels often offset the advantages of the galley.
Introduction of guns
From around 1450, three major naval powers established a dominance over different parts of the Mediterranean using galleys as their primary weapons at sea: the Ottomans in the east, Venice in the center and Habsburg Spain in the west. The core of their fleets was concentrated in the three major, wholly dependable naval bases in the Mediterranean: Constantinople, Venice and Barcelona. Naval warfare in the 16th-century Mediterranean was fought mostly on a smaller scale, with raiding and minor actions dominating. Only three truly major fleet engagements were actually fought in the 16th century: the battles of Preveza in 1538, Djerba in 1560 and Lepanto in 1571. Lepanto became the last large all-galley battle ever, and was also one of the largest battles, in terms of participants, anywhere in early modern Europe before the Napoleonic Wars.
Occasionally the Mediterranean powers employed galley forces for conflicts outside the Mediterranean. Spain sent galley squadrons to the Netherlands during the later stages of the Eighty Years' War, where they operated successfully against Dutch forces in the enclosed, shallow coastal waters. From the late 1560s, galleys were also used to transport silver to Genoese bankers to finance Spanish troops against the Dutch uprising. Galleasses and galleys were part of an invasion force of over 16,000 men that conquered the Azores in 1583. Around 2,000 galley rowers were on board ships of the famous 1588 Spanish Armada, though few of these actually made it to the battle itself. Outside European and Middle Eastern waters, Spain built galleys to deal with pirates and privateers in both the Caribbean and the Philippines.
Despite the huge loss of men and matériel in the defeat of the Spanish Armada in 1588, Spain maintained four permanent galley squadrons, which together formed the largest galley navy in the Mediterranean in the early 17th century. They formed the backbone of the Spanish war fleet and were used for ferrying troops, supplies, horses and munitions to Spain's Italian and African possessions. The Ottoman Turks attempted to contest the Portuguese rise to power in the Indian Ocean in the 16th century with Mediterranean-style galleys, but were foiled by the formidable Portuguese carracks. Even though the carracks themselves were soon surpassed by other types of sailing vessels, their greater range, great size and high superstructures, armed with numerous wrought iron guns, easily outmatched the short-ranged, low-freeboard Turkish galleys.
Galleys had been synonymous with warships in the Mediterranean for at least 2,000 years, and continued to fulfill that role with the invention of gunpowder and heavy artillery. Though early 20th-century historians often dismissed the galleys as hopelessly outclassed with the first introduction of naval artillery on sailing ships, it was the galley that was favored by the introduction of heavy naval guns. Galleys were a more "mature" technology with long-established tactics and traditions of supporting social institutions and naval organizations. In combination with the intensified conflicts this led to a substantial increase in the size of galley fleets from c. 1520–80, above all in the Mediterranean, but also in other European theatres. Galleys and similar oared vessels remained uncontested as the most effective gun-armed warships in theory until the 1560s, and in practice for a few decades more, and were actually considered a grave risk to sailing warships. They could effectively fight other galleys, attack sailing ships in calm weather or in unfavorable winds (or deny them action if needed) and act as floating siege batteries. They were also unequaled in their amphibious capabilities, even at extended ranges, as exemplified by French interventions as far north as Scotland in the mid-16th century.
Heavy artillery on galleys was mounted in the bow, which aligned easily with the long-standing tactical tradition of attacking head on, bow first. The ordnance on galleys was heavy from its introduction in the 1480s, and capable of quickly demolishing the high, thin medieval stone walls that still prevailed in the 16th century. This temporarily upended the strength of older seaside fortresses, which had to be rebuilt to cope with gunpowder weapons. The addition of guns also improved the amphibious abilities of galleys, as they could make assaults supported with heavy firepower, and were even more effectively defended when beached stern-first. An accumulation and generalization of bronze cannons and small firearms in the Mediterranean during the 16th century increased the cost of warfare, but also made those dependent on them more resilient to manpower losses. Older ranged weapons, like bows or even crossbows, required considerable skill to handle, sometimes a lifetime of practice, while gunpowder weapons required considerably less training to use successfully. According to a highly influential study by military historian John F. Guilmartin, this transition in warfare, along with the introduction of much cheaper cast iron guns in the 1580s, proved the "death knell" for the war galley as a significant military vessel. Gunpowder weapons began to displace men as the fighting power of armed forces, making individual soldiers more deadly and effective. As offensive weapons, firearms could be stored for years with minimal maintenance and did not require the expenses associated with soldiers. Manpower could thus be exchanged for capital investments, something which benefited sailing vessels that were already far more economical in their use of manpower. It also served to increase their strategic range and to out-compete galleys as fighting ships.
Atlantic-style warfare based on heavily armed sailing ships began to change the nature of naval warfare in the Mediterranean in the 17th century. In 1616, a small Spanish squadron of five galleons and a patache cruised the eastern Mediterranean and defeated a fleet of 55 galleys at the battle of Cape Celidonia. By 1650, war galleys were used primarily in the wars between Venice and the Ottoman Empire in their struggle for strategic island and coastal trading bases, and until the 1720s by both France and Spain, largely for amphibious and cruising operations or in combination with heavy sailing ships in major battles, where they played specialized roles. An example of this was when a Spanish fleet used its galleys in a mixed naval and amphibious battle at the second battle of Tarragona in 1641, to break a French naval blockade and land troops and supplies. Even Venice, a purely Mediterranean power, began to construct sail-only warships in the latter part of the century, as did the other Mediterranean powers. Christian and Muslim corsairs had been using galleys in sea roving and in support of the major powers in times of war, but in the early 17th century largely replaced them with xebecs, various sail/oar hybrids, and a few remaining light galleys.
No large all-galley battles were fought after the gigantic clash at Lepanto in 1571, and galleys were mostly used as cruisers or for supporting sailing warships as a rearguard in fleet actions, similar to the duties performed by frigates outside the Mediterranean. They could assist damaged ships out of the line, but generally only in very calm weather, as was the case at the Battle of Málaga in 1704. For small states and principalities as well as groups of private merchants, galleys were more affordable than large and complex sailing warships, and were used as defense against piracy. Galleys required less timber to build, their design was relatively simple and they carried fewer guns. They were tactically flexible and could be used for naval ambushes as well as amphibious operations. They also required few skilled seamen and were difficult for sailing ships to catch, but were vital in hunting down other galleys and oared raiders.
The largest galley fleets in the 17th century were operated by the two major Mediterranean powers, France and Spain. France had by the 1650s become the most powerful state in Europe, and expanded its galley forces under the rule of the absolutist "Sun King" Louis XIV. In the 1690s the French galley corps (Corps des galères) reached its all-time peak with more than 50 vessels manned by over 15,000 men and officers, becoming the largest galley fleet in the world at the time. Though there was intense rivalry between France and Spain, not a single galley battle occurred between the two great powers during this period, and there were virtually no naval battles between other nations either. During the War of the Spanish Succession, French galleys were involved in actions against Antwerp and Harwich, but due to the intricacies of alliance politics there were never any Franco-Spanish galley clashes. In the first half of the 18th century, the other major naval powers in the Mediterranean Sea, the Order of Saint John based in Malta and the Papal States in central Italy, cut down drastically on their galley forces. Despite the lack of action, the French galley corps received vast resources (25-50% of the French naval expenditures) during the 1660s. It was maintained as a functional fighting force right up until its abolition in 1748, though its primary function was more as a symbol of Louis XIV's absolutist ambitions.
The last recorded battle in the Mediterranean where galleys played a significant part was at Matapan in 1717, between the Ottomans and Venice and its allies, though they had little influence on the final outcome. Few large-scale naval battles were fought in the Mediterranean throughout most of the remainder of the 18th century. The Tuscan galley fleet was dismantled around 1718, Naples had only four old vessels by 1734 and the French Galley Corps had ceased to exist as an independent arm in 1748. Venice, the Papal States and the Knights of Malta were the only state fleets that maintained galleys, though in nothing like their previous quantities. By 1790, there were fewer than 50 galleys in service among all the Mediterranean powers, half of which belonged to Venice.
Use in northern Europe
Oared vessels remained in use in northern waters for a long time, though in a subordinate role and in particular circumstances. In the Italian Wars, French galleys brought up from the Mediterranean to the Atlantic posed a serious threat to the early English Tudor navy during coastal operations. The response came in the building of a considerable fleet of oared vessels, including hybrids with a complete three-masted rig, as well as Mediterranean-style galleys (which the English even attempted to man with convicts and slaves). Under King Henry VIII, the English navy used several kinds of vessels that were adapted to local needs. English galliasses (very different from the Mediterranean vessel of the same name) were employed to cover the flanks of larger naval forces, while pinnaces and rowbarges were used for scouting or even as a backup for the longboats and tenders of the larger sailing ships. During the Dutch Revolt (1566–1609) both the Dutch and Spanish found galleys useful for amphibious operations in the many shallow waters around the Low Countries where deep-draft sailing vessels could not enter.
While galleys were too vulnerable to be used in large numbers in the open waters of the Atlantic, they were well-suited for use in much of the Baltic Sea by Denmark, Sweden, Russia and some of the Central European powers with ports on the southern coast. There were two types of naval battlegrounds in the Baltic. One was the open sea, suitable for large sailing fleets; the other was the coastal areas and especially the chain of small islands and archipelagos that ran almost uninterrupted from Stockholm to the Gulf of Finland. In these areas, conditions were often too calm, cramped and shallow for sailing ships, but they were excellent for galleys and other oared vessels. Galleys of the Mediterranean type were first introduced in the Baltic Sea around the mid-16th century as competition between the Scandinavian states of Denmark and Sweden intensified. The Swedish galley fleet was the largest outside the Mediterranean, and served as an auxiliary branch of the army. Very little is known about the design of Baltic Sea galleys, except that they were overall smaller than in the Mediterranean and they were rowed by army soldiers rather than convicts or slaves.
Baltic revival and decline
Galleys were introduced to the Baltic Sea in the 16th century, but the details of their designs are lacking due to the absence of records. They might have been built in a more regional style, but the only known depiction from the time shows a typical Mediterranean vessel. There is conclusive evidence that Denmark became the first Baltic power to build classic Mediterranean-style galleys in the 1660s, though they proved to be generally too large to be useful in the shallow waters of the Baltic archipelagos. Sweden and especially Russia began to launch galleys and various rowed vessels in great numbers during the Great Northern War in the first two decades of the 18th century. Sweden was slow to build an effective oared fighting fleet, while the Russian galley forces under Tsar Peter I developed into a supporting arm for the sailing navy and a well-functioning auxiliary of the army, which infiltrated and conducted numerous raids on the eastern Swedish coast in the 1710s.
Sweden and Russia became the two main competitors for Baltic dominance in the 18th century, and built the largest galley fleets in the world at the time. They were used for amphibious operations in the Russo-Swedish wars of 1741–43 and 1788–90. The last galleys ever constructed were built by Russia in 1796, and remained in service well into the 19th century, but saw little action. The last time galleys were deployed in action was when the Russian navy was attacked at Åbo (Turku) in 1854 as part of the Crimean War. In the second half of the 18th century, the role of Baltic galleys in coastal fleets was replaced first by hybrid "archipelago frigates" (such as the turuma or pojama) and xebecs, and after the 1790s by various types of gunboats.
Both the Russian and Swedish navies were based on a form of conscription, and both used conscripts as galley rowers. This had several advantages over convicts or slaves: the rowers could be armed to fight as marines, they could be used as land soldiers or as an invasion force, and they could be trained to a higher level of skill. Since most naval conscripts came from coastal parishes and towns, most were already experienced seafarers when they entered the service.
Use in Southeast Asia
Various types of indigenous galley-like vessels have been used in Southeast Asia, namely: lancaran, penjajap, kelulus, lanong, garay, kora-kora, and karakoa. Around the turn of the 16th century, Mediterranean influence arrived, mainly through the Ottoman contacts of the sultanates of the Nusantaran archipelago. A royal galley (ghali kenaikan raja) of the Malacca sultanate, built around 1453, was called Mendam Berahi (Malay for "Suppressed Passion"). It was 60 gaz (67 m) long and 6 depa (11 m) wide. This ghali had three masts and could carry 400 men, 200 of them rowers seated in 50 rowing lines. It was armed with five bow-mounted rentaka and a ramming beam.
In the 1568 siege of Portuguese Malacca, the Acehnese used four large galleys, each 40-50 meters long with 190 rowers and three masts. They were armed with 12 large camelos (three on each side of the bow, four at the stern), one bow-mounted basilisk, 12 falcons, and 40 swivel guns. By then cannons, firearms, and other war materiel had been arriving annually from Jeddah, and the Turks also sent military experts, masters of galleys, and technicians. In the 1575 siege, Aceh used 40 two-masted galleys with Turkish captains, carrying 200-300 soldiers of Turkish, Arab, Deccani, and Acehnese origin. The state galleys of Aceh, Daya, and Pedir are said to have carried 10 meriam, 50 lela, and 120 cecorong (not counting the ispinggar); smaller galleys carried 5 meriam, 20 lela, and 50 cecorong. Western and native sources mention that Aceh had 100-120 galleys at any time (not counting the smaller fusta and galiot), spread from Daya (west coast) to Pedir (east coast). One galley captured by the Portuguese in 1629 was very large, and it was reported that there were 47 of them in total. She reached 100 m in length and 17 m in breadth, had three masts with square sails and topsails, was propelled by 35 oars on each side, and was able to carry 700 men. She was armed with 18 large cannon (five 55-pounders at the bow, one 25-pounder at the stern, the rest 17- and 18-pounders), 80 falcons and many swivel guns. The ship was called Espanto do Mundo ("terror of the universe"), probably a free translation of Cakra Dunia. The Portuguese reported that it was bigger than anything ever built in the Christian world, and that its castle could compete with those of galleons.
Two Dutch engravings from 1598 and 1601 depicted galleys from Banten and Madura; they had two masts and one mast, respectively. A major difference from Mediterranean galleys was that Nusantaran galleys had a raised fighting platform called a balai on which the soldiers stood, a feature common in warships of the region. The Sultanate of Goa in the mid-17th century had galle 40 m long and 6 m in breadth, carrying 200-400 men; other galle of the kingdom varied between 23 and 35 m in length.
Design and construction
Galleys have since their first appearance in ancient times been intended as highly maneuverable vessels, independent of winds by being rowed, and usually with a focus on speed under oars. The profile has therefore been that of a markedly elongated hull with a ratio of breadth to length at the waterline of at least 1:5, and in the case of ancient Mediterranean galleys as much as 1:10, with a small draught, the measurement of how much of a ship's structure is submerged under water. To make it possible to row the vessels efficiently, the freeboard, the height of the railing above the surface of the water, was by necessity kept low. This gave oarsmen enough leverage to row efficiently, but at the expense of seaworthiness. These design characteristics made the galley fast and maneuverable, but more vulnerable to rough weather.
The documentary evidence for the construction of ancient galleys is fragmentary, particularly in pre-Roman times. Plans and schematics in the modern sense did not exist until the 17th century, and nothing like them has survived from ancient times. How galleys were constructed has therefore been a matter of looking at circumstantial evidence in literature, art, coinage and monuments that include ships, some of them actually at full size. Since war galleys floated even with a ruptured hull and virtually never had any ballast or heavy cargo that could sink them, not a single wreck of one has so far been found. The only exception has been a partial wreck of a small Punic liburnian from the Roman era, the Marsala Ship.
On the funerary monument of the Egyptian king Sahure (2487–2475 BC) in Abusir, there are relief images of vessels with a marked sheer (the upward curvature at each end of the hull), with seven pairs of oars along their sides, a number that was likely merely symbolic, and steering oars in the stern. They have one mast each, shown lowered, and vertical posts at stem and stern, with the front decorated with an Eye of Horus, the first example of such a decoration. It was later used by other Mediterranean cultures to decorate seagoing craft in the belief that it helped to guide the ship safely to its destination. These early galleys apparently lacked a keel, meaning they lacked stiffness along their length. They therefore had large cables connecting stem and stern, resting on massive crutches on deck, held in tension to avoid hogging, the bending of the ship's structure upwards in the middle, while at sea. In the 15th century BC, Egyptian galleys were still depicted with the distinctive extreme sheer, but had by then developed the distinctive forward-curving stern decorations with ornaments in the shape of lotus flowers. They had possibly developed a primitive type of keel, but still retained the large cables intended to prevent hogging.
The design of the earliest oared vessels is mostly unknown and highly conjectural. They likely used a mortise construction, but were sewn together rather than pinned together with nails and dowels. Being completely open, they were rowed (or even paddled) from the open deck, and likely had "ram entries", projections from the bow that lowered the resistance of moving through water, making them slightly more hydrodynamic. The first true galleys, the triaconters (literally "thirty-oarers") and penteconters ("fifty-oarers"), were developed from these early designs and set the standard for the larger designs that would come later. They were rowed on only one level, which made them fairly slow, likely capable of only 5-5.5 knots. By the 8th century BC the first galleys rowed at two levels had been developed, among the earliest being the two-level penteconters, which were considerably shorter than their one-level equivalents, and therefore more maneuverable. They were an estimated 25 m in length and displaced 15 tonnes with 25 pairs of oars. These could have reached an estimated top speed of up to 7.5 knots, making them the first genuine warships when fitted with bow rams. They were equipped with a single square sail on a mast set roughly halfway along the length of the hull.
By the 5th century BC, the first triremes were in use by various powers in the eastern Mediterranean. The trireme had by now become a fully developed, highly specialized vessel of war that was capable of high speeds and complex maneuvers. At nearly 40 m in length, displacing almost 50 tonnes, it was more than three times as expensive as a two-level penteconter. A trireme also had an additional mast with a smaller square sail placed near the bow. Up to 170 oarsmen sat staggered on three levels, each working one oar that varied slightly in length. Arrangements of the three levels are believed to have varied, but the most well-documented design made use of a projecting structure, or outrigger, where the oarlock in the form of a thole pin was placed. This allowed the outermost row of oarsmen enough leverage for full strokes that made efficient use of their oars.
The first dedicated war galleys fitted with rams were built with a mortise and tenon technique, a so-called shell-first method. In this, the planking of the hull was strong enough to hold the ship together structurally, and was also watertight without the need for caulking. Hulls had sharp bottoms without keelsons in order to support the structure and were reinforced by transverse framing secured with dowels with nails driven through them. To prevent the hull from hogging there was a hypozoma, a thick cable that connected bow with stern. It was kept taut to add strength to the construction along its length, but its exact design and the method of tightening are not known. The ram, the primary weapon of ancient galleys from around the 8th to the 4th century BC, was not attached directly to the hull but to a structure extending from it. This way the ram could twist off if it got stuck after ramming, rather than breaking the integrity of the hull. The ram fitting consisted of a massive, projecting timber, and the ram itself was a thick bronze casting with horizontal blades that could weigh from 400 kg up to 2 tonnes.
Galleys from the 4th century BC up to the time of the early Roman Empire in the 1st century AD became successively larger. Three levels of oars was the practical upper limit, but it was improved on by making ships longer, broader and heavier and placing more than one rower per oar. Naval conflict grew more intense and extensive, and by 100 BC galleys with four, five or six rows of oarsmen were commonplace and carried large complements of soldiers and catapults. With high freeboard (up to 3 m) and additional tower structures from which missiles could be shot down onto enemy decks, they were intended to be like floating fortresses. Designs with everything from eight rows of oarsmen and upwards were built, but most of them are believed to have been impractical showpieces never used in actual warfare. Ptolemy IV, the Greek pharaoh of Egypt 221–205 BC, is recorded as building a gigantic ship with forty rows of oarsmen, though no specification of its design remains. One suggested design was that of a huge trireme catamaran with up to 14 men per oar, and it is assumed that it was intended as a showpiece rather than a practical warship.
With the consolidation of Roman imperial power, the size of both fleets and galleys decreased considerably. The huge polyremes disappeared, and the fleets were equipped primarily with triremes and liburnians, compact biremes with 25 pairs of oars that were well suited for patrol duty and chasing down raiders and pirates. In the northern provinces, oared patrol boats were employed to keep local tribes in check along the shores of rivers like the Rhine and the Danube. As the need for large warships disappeared, the design of the trireme, the pinnacle of ancient warship design, fell into obscurity and was eventually forgotten. The last known reference to triremes in battle is dated to 324, at the battle of the Hellespont. In the late 5th century, the Byzantine historian Zosimus declared the knowledge of how to build them to have been long since forgotten.
The earliest medieval galley specification comes from an order of Charles I of Sicily in 1275 AD. It gives an overall length of 39.30 m, keel length of 28.03 m, depth of 2.08 m and hull width of 3.67 m, with 4.45 m between the outriggers. It lists 108 oars, most of them 6.81 m long, some 7.86 m, and two steering oars 6.03 m long. The foremast and middle mast were respectively 16.08 m and 11.00 m in height and both 0.79 m in circumference, with yards of 26.72 m and 17.29 m; overall deadweight tonnage was approximately 80 metric tons. This type of vessel had two, later three, men on a bench, each working his own oar, and much longer oars than the Athenian trireme, whose oars were 4.41 m and 4.66 m long. This type of warship was called galia sottil.
The dromon and the galea
The primary warship of the Byzantine navy until the 12th century was the dromon, along with other similar ship types. Considered an evolution of the Roman liburnian, the term first appeared in the late 5th century, and was commonly used for a specific kind of war galley by the 6th century. The term dromōn (literally "runner") itself comes from the Greek root drom-(áō), "to run", and 6th-century authors like Procopius are explicit in their references to the speed of these vessels. During the next few centuries, as the naval struggle with the Arabs intensified, heavier versions with two or possibly even three banks of oars evolved.
The accepted view is that the main developments which differentiated the early dromons from the liburnians, and that henceforth characterized Mediterranean galleys, were the adoption of a full deck, the abandonment of rams on the bow in favor of an above-water spur, and the gradual introduction of lateen sails. The exact reasons for the abandonment of the ram are unclear. Depictions of upward-pointing beaks in the 4th-century Vatican Vergil manuscript may well illustrate that the ram had already been replaced by a spur in late Roman galleys. One possibility is that the change occurred because of the gradual evolution of the ancient shell-first construction method, against which rams had been designed, into the skeleton-first method, which produced a stronger and more flexible hull, less susceptible to ram attacks. At least by the early 7th century, the ram's original function had been forgotten.
The dromons that Procopius described were single-banked ships of probably 25 oars per side. Unlike ancient vessels, which used an outrigger, these oars extended directly from the hull. In the later bireme dromons of the 9th and 10th centuries, the two oar banks were divided by the deck, with the first oar bank situated below and the second above deck; the upper rowers were expected to fight alongside the marines in boarding operations. The overall length of these ships was probably about 32 meters. The stern (prymnē) had a tent that covered the captain's berth; the prow featured an elevated forecastle that acted as a fighting platform and could house one or more siphons for the discharge of Greek fire; and on the largest dromons, there were wooden castles on either side between the masts, providing archers with elevated firing platforms. The bow spur was intended to ride over an enemy ship's oars, breaking them and rendering it helpless against missile fire and boarding actions.
From the 12th century, the design of war galleys evolved into the form that would remain largely the same until the building of the last war galleys in the late 18th century. The length-to-breadth ratio was a minimum of 8:1. A rectangular telaro, an outrigger, was added to support the oars, and the rowers' benches were laid out in a diagonal herringbone pattern angled aft on either side of a central gangway, or corsia. The design was based on the form of the galea, the smaller Byzantine galley, and would be known mostly by the Italian term gallia sottila (literally "slender galley"). A second, smaller mast was added sometime in the 13th century, and the standard number of rowers rose from two to three per bench from the late 13th to the early 14th century. The gallee sottili would make up the bulk of the main war fleets of every major naval power in the Mediterranean, assisted by the smaller single-masted galiotte, as well as of the Christian and Muslim corsair fleets. Ottoman galleys were very similar in design, though in general smaller, faster under sail, but slower under oars. The standard size of the galley remained stable from the 14th until the early 16th century, when the introduction of naval artillery began to have effects on design and tactics.
The traditional two side rudders were complemented with a stern rudder sometime after c. 1400 and eventually the side rudders disappeared altogether. It was also during the 15th century that large artillery pieces were first mounted on galleys. Burgundian records from the mid-15th century describe galleys with some form of guns, but do not specify the size. The first conclusive evidence of a large cannon mounted on a galley comes from a woodcut of a Venetian galley in 1486. The first guns were fixed directly on timbers in the bow and aimed directly forwards, a placement that would remain largely unchanged until the galley disappeared from active service in the 19th century.
With the introduction of guns in the bows of galleys, a permanent wooden structure called the rambade (French: rambade; Italian: rambata; Spanish: arrumbada) appeared, and became standard on virtually all galleys in the early 16th century. There were some variations in the navies of different Mediterranean powers, but the overall layout was the same. The forward-aiming battery was covered by a wooden platform which gave the gunners a minimum of protection, and which functioned both as a staging area for boarding attacks and as a firing platform for on-board soldiers. After its introduction, the rambade remained a standard feature of every fighting galley until the very end of the galley era in the early 19th century.
In the mid-17th century, galleys reached what has been described as their "final form". Galleys had looked more or less the same for over four centuries and a fairly standardized classification system for different sizes of galleys had been developed by the Mediterranean bureaucracies, based mostly on the number of benches in a vessel. A Mediterranean galley would have 25-26 pairs of oars with five men per oar (c. 250 rowers), 50-100 sailors and 50-100 soldiers for a total of about 500 men. The exceptions were the significantly larger "flagships" (often called lanternas, "lantern galleys") that had 30 pairs of oars and up to seven rowers per oar. The armament consisted of one heavy 24- or 36-pounder gun in the bows flanked by two to four 4- to 12-pounders. Rows of light swivel guns were often placed along the entire length of the galley on the railings for close-quarter defense. The length-to-width ratio of the ships was about 8:1, with two main masts carrying one large lateen sail each. In the Baltic, galleys were generally shorter with a length-to-width ratio from 5:1 to 7:1, an adaptation to the cramped conditions of the Baltic archipelagos.
A single mainmast was standard on most war galleys until c. 1600. A second, shorter mast could be raised temporarily in the bows, but became permanent by the early 17th century. It was stepped slightly to the side to allow for the recoil of the heavy guns; the other was placed roughly in the center of the ship. A third smaller mast further astern, akin to a mizzen mast, was also introduced on large galleys, possibly in the early 17th century, but was standard at least by the early 18th century. Galleys had little room for provisions and depended on frequent resupplying and were often beached at night to rest the crew and cook meals. Where cooking areas were actually present, they consisted of a clay-lined box with a hearth or similar cooking equipment fitted on the vessel in place of a rowing bench, usually on the port (left) side.
Propulsion
Throughout their long history, galleys relied on rowing as the most important means of propulsion. The arrangement of rowers during the 1st millennium BC developed gradually from a single row up to three rows arranged in a complex, staggered seating arrangement. Anything above three levels, however, proved to be physically impracticable. Initially there was only one rower per oar, but the number steadily increased, with a number of different combinations of rowers per oar and rows of oars. The ancient terms for galleys were based on the number of rows of rowers plying the oars, not the number of rows of oars. Today they are best known by a modernized Latin terminology based on numerals with the ending "-reme", from rēmus, "oar". A trireme was a ship with three rows of oarsmen, a quadrireme four, a hexareme six, and so forth. There were warships that ran up to ten or even eleven rows, but anything above six was rare. A huge forty-rowed ship was built during the reign of Ptolemy IV in Egypt. Little is known about its design, but it is assumed to have been an impractical prestige vessel.
Ancient rowing was done in a fixed seated position, the most effective rowing position, with rowers facing the stern. A sliding stroke, which would have provided power from the legs as well as the arms, was suggested by earlier historians, but no conclusive evidence has supported it. Practical experiments with the full-scale reconstruction Olympias have shown that there was insufficient space, and that moving or rolling seats would have been highly impractical to construct with ancient methods. Rowers in ancient war galleys sat below the upper deck with little view of their surroundings. The rowing was therefore managed by supervisors, and coordinated with pipes or rhythmic chanting. Galleys were highly maneuverable, able to turn on their axis or even to row backwards, though this required a skilled and experienced crew. In galleys with an arrangement of three men per oar, all would be seated, but the rower furthest inboard would perform a stand-and-sit stroke, getting up on his feet to push the oar forwards and then sitting down again to pull it back.
The faster a vessel travels, the more energy it uses. Reaching high speed requires more power than a human-powered vessel can produce. Oar systems generate very little power for propulsion (only about 70 W per rower), and the upper limit for rowing in a fixed position is around 10 knots. Ancient war galleys of the kind used in Classical Greece are considered by modern historians to be the most energy-efficient and fastest galley designs in history. A full-scale replica of a 5th-century BC trireme, the Olympias, was built in 1985–87 and was put through a series of trials to test its performance. The trials proved that a cruising speed of 7-8 knots could be maintained for an entire day. Sprinting speeds of up to 10 knots were possible, but only for a few minutes, and would tire the crew quickly. Ancient galleys were built very light, and the original triremes are assumed never to have been surpassed in speed. Medieval galleys are believed to have been considerably slower, especially since they were not built with ramming tactics in mind. A cruising speed of no more than 2-3 knots has been estimated; a sprint speed of up to 7 knots was possible for 20–30 minutes, but risked exhausting the rowers completely.
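As a rough back-of-the-envelope illustration of how little power this represents (a sketch combining the figure of about 70 W per rower quoted above with the crew of up to 170 oarsmen mentioned earlier for the trireme; the horsepower conversion is added here for comparison and is not from the sources):

$$P_{\text{total}} \approx 170 \times 70\,\mathrm{W} \approx 12\,\mathrm{kW} \approx 16\,\mathrm{hp}$$

Because the resistance of a displacement hull grows steeply with speed, even large increases in rowing power yield only modest gains in speed, which is consistent with the narrow gap between the cruising (7-8 knots) and sprinting (10 knots) figures measured on the Olympias.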
Rowing in headwinds or even moderately rough weather was difficult as well as exhausting. In high seas, ancient galleys would set sail to run before the wind. They were highly susceptible to high waves, and could become unmanageable if the rowing frame (apostis) came awash. Ancient and medieval galleys are assumed to have sailed only with the wind more or less astern with a top speed of 8-9 knots in fair conditions.
Contrary to the popular image of rowers chained to the oars, conveyed by movies such as Ben Hur, there is no evidence that ancient navies ever made use of condemned criminals or slaves as oarsmen, with the possible exception of Ptolemaic Egypt. Literary evidence indicates that Greek and Roman navies relied on paid labor or ordinary soldiers to man their galleys. Slaves were put at the oars only in times of extreme crisis. In some cases, these people were given freedom thereafter, while in others they began their service aboard as free men. Roman merchant vessels (usually sailing vessels) were manned by slaves, sometimes even with slaves as ship's master, but this was seldom the case in merchant galleys.
It was only in the early 16th century that the modern idea of the galley slave became commonplace. Galley fleets, as well as individual vessels, increased in size, which required more rowers. The number of benches could not be increased without lengthening hulls beyond their structural limits, and more than three oars per bench was not practicable. The demand for more rowers also meant that the relatively limited number of skilled oarsmen could not keep up with the needs of large galley fleets. It became increasingly common to man galleys with convicts or slaves, which required a simpler method of rowing. The older method of employing professional rowers using the alla sensile method (one oar per man, with two to three sharing the same bench) was gradually phased out in favor of rowing a scaloccio, which required less skill. A single large oar was used for each bench, with several rowers working it together, and the number of oarsmen per oar rose from three up to five. In some very large command galleys, there could be as many as seven to an oar.
All major Mediterranean powers sentenced criminals to galley service, but initially only in time of war. Christian naval powers such as Spain frequently employed Muslim captives and prisoners of war. The Ottoman navy and its North African corsair allies often put Christian prisoners to the oars, but also mixed in volunteers. Spain relied mostly on servile rowers, in great part because its organizational structure was geared toward employing slaves and convicts. Venice was one of the few major naval powers that used almost only free rowers, a result of its reliance on alla sensile rowing, which required skilled professional rowers. The Knights of Saint John used slaves extensively, as did the Papal States, Florence and Genoa. North African ghazi corsairs relied almost entirely on Christian slaves for rowers.
In ancient galleys under sail, most of the motive power came from a single square sail. It was rigged on a mast somewhat forward of the center of the ship, with a smaller mast carrying a headsail in the bow. Triangular lateen sails are attested as early as the 2nd century AD, and gradually became the sail of choice for galleys. By the 9th century, lateens were firmly established as part of the standard galley rig. The lateen rig was more complicated and required a larger crew to handle than a square sail rig, but this was not a problem in the heavily manned galleys. Belisarius' Byzantine invasion fleet of 533 was at least partly fitted with lateen sails, making it probable that by that time the lateen had become the standard rig for the dromon, with the traditional square sail gradually falling out of use in medieval Mediterranean navigation. Unlike a square sail rig, the spar of a lateen sail did not pivot around the mast. To change tacks, the entire spar had to be lifted over the mast and to the other side. Since the spar was often much longer than the mast itself, and not much shorter than the ship, this was a complex and time-consuming maneuver.
Armament and tactics
In the earliest times of naval warfare, boarding was the only means of deciding a naval engagement, but little to nothing is known about the tactics involved. In the first recorded naval battle in history, the battle of the Delta, the forces of Egyptian Pharaoh Ramesses III won a decisive victory over a force made up of the enigmatic group known as the Sea Peoples. As shown in commemorative reliefs of the battle, Egyptian archers on ships and on the nearby shores of the Nile rain down arrows on the enemy ships, while Egyptian galleys engage in boarding actions and capsize the ships of the Sea Peoples with ropes attached to grappling hooks thrown into the rigging.
Introduction of the ram
Around the 8th century BC, ramming began to be employed as war galleys were equipped with heavy bronze rams. Records of the Persian Wars in the early 5th century BC by the ancient historian Herodotus (c. 484–425 BC) show that by this time ramming tactics had evolved among the Greeks. The formations adapted for ramming warfare could be either in columns in line ahead, one ship following the next, or in line abreast, with the ships side by side, depending on the tactical situation and the surrounding geography. The primary methods of attack were either to break through the enemy formation or to outflank it. Ramming itself was done by smashing into the rear or side of an enemy ship, punching a hole in the planking. This did not actually sink an ancient galley unless it was heavily laden with cargo and stores; with a normal load, it was buoyant enough to float even with a breached hull. Breaking the enemy's oars was another way of immobilizing ships, making them easier targets. If ramming was not possible or successful, the on-board complement of soldiers would attempt to board and capture the enemy vessel by securing it with grappling irons, accompanied by missile fire with arrows or javelins. Attempts to set the enemy ship on fire by hurling incendiary missiles or by pouring the contents of fire pots attached to long handles are thought to have been made, especially since smoke below decks would easily disable rowers. Rhodes was the first naval power to employ this weapon, sometime in the 3rd century BC, and used it to fight off head-on attacks or to frighten enemies into exposing their sides for a ramming attack.
A successful ramming was difficult to achieve; just the right amount of speed and precise maneuvering were required. Fleets that did not have well-drilled, experienced oarsmen and skilled commanders relied more on boarding with superior infantry (such as increasing the complement to 40 soldiers). Ramming attempts were countered by keeping the bow towards the enemy until the enemy crew tired, and then attempting to board as quickly as possible. A double-line formation could be used to achieve a breakthrough by engaging the first line and then rushing the rearguard in to take advantage of weak spots in the enemy's defense. This required superiority in numbers, though, since a shorter front risked being flanked or surrounded.
Despite the attempts to counter increasingly heavy ships, ramming tactics were gradually superseded in the last centuries BC by the Macedonians and Romans, both primarily land-based powers. Hand-to-hand fighting with large complements of heavy infantry supported by ship-borne catapults dominated the fighting style during the Roman era, a move that was accompanied by the conversion to heavier ships with larger rowing complements and more men per oar. Though effectively lowering mobility, it meant that less skill was required from individual oarsmen. Fleets thereby became less dependent on rowers with a lifetime of experience at the oar.
By late antiquity, in the first centuries AD, ramming tactics had completely disappeared along with the knowledge of the design of the ancient trireme. Medieval galleys instead developed a projection, or "spur", in the bow that was designed to break oars and to act as a boarding platform for storming enemy ships. The only remaining examples of ramming tactics were passing references to attempts to collide with ships in order to destabilize or capsize them.
The Byzantine navy, the largest Mediterranean war fleet throughout most of the early Middle Ages, employed crescent formations with the flagship in the center and the heavier ships at the horns of the formation, in order to turn the enemy's flanks. Similar tactics are believed to have been employed by the Arab fleets they frequently fought from the 7th century onwards. The Byzantines were the first to employ Greek fire, a highly effective incendiary liquid, as a naval weapon. It could be fired through a metal tube, or siphon, mounted in the bows, similar to a modern flamethrower. The properties of Greek fire were close to those of napalm, and it was a key to several major Byzantine victories. By 835, the weapon had spread to the Arabs, who equipped harraqas, "fireships", with it. The initial stage of a naval battle was an exchange of missiles, ranging from combustible projectiles to arrows, caltrops and javelins. The aim was not to sink ships, but to deplete the ranks of the enemy crews before the boarding commenced, which decided the outcome. Once the enemy strength was judged to have been reduced sufficiently, the fleets closed in, the ships grappled each other, and the marines and upper-bank oarsmen boarded the enemy vessel and engaged in hand-to-hand combat. Byzantine dromons had pavesades, racks along the railings on which marines could hang their shields, providing protection to the deck crew. Larger ships also had wooden castles on either side between the masts, which allowed archers to shoot from an elevated firing position.
Later medieval navies continued to use similar tactics, with the line abreast formation as standard. Galleys were intended to be fought from the bows and were at their weakest along the sides, especially in the middle. The crescent formation employed by the Byzantines continued to be used throughout the Middle Ages, as it allowed the wings of the fleet to crash their bows straight into the sides of the enemy ships at the edge of the formation.
Roger of Lauria (c. 1245–1305) was a successful medieval naval tactician who fought for the Aragonese navy against French Angevin fleets in the War of the Sicilian Vespers. At the Battle of Malta in July 1283, he lured out Angevin galleys that were beached stern-first by openly challenging them. Attacking them head-on in a strong defensive position would have been very dangerous, since it offered good cohesion, allowed rowers to escape ashore and made it possible to reinforce weak positions by transferring infantry along the shore. He also employed skilled crossbowmen and almogavars, light infantry who were nimbler in ship-to-ship actions than heavily armed and armored French soldiers. At the battle of the Gulf of Naples in 1284, his forces launched clay cooking pots filled with soap before attacking; when the pots broke against the enemy decks, they made them perilously slippery and difficult for heavy infantry to keep their feet on.
The earliest guns were of large calibers, and were initially of wrought iron, which made them weak compared to the cast bronze guns that would become standard in the 16th century. They were at first fixed directly on timbers in the bow, aiming directly forwards; this placement would remain largely unchanged until the galley disappeared from active service in the 19th century. The introduction of heavy guns and small arms did not change tactics considerably. If anything, it accentuated the bow as the offensive weapon, being both a staging area for boarders and the position for small arms and cannons. The galley was capable of outperforming sailing vessels in early battles, and retained a distinct tactical advantage even after the initial introduction of naval artillery because of the ease with which it could be brought to bear upon an opposing vessel.
In large-scale galley-to-galley engagements, tactics remained essentially the same until the end of the 16th century. Cannons and small firearms were introduced around the 14th century, but did not have immediate effects on tactics; the same basic crescent formation in line abreast that was employed at the battle of Lepanto in 1571 had been used by the Byzantine fleet almost a millennium earlier. Artillery on early gun galleys was not used as a long-range standoff weapon against other gun-armed ships. The maximum distance at which contemporary cannons were effective, c. 500 m (1600 ft), could be covered by a galley in about two minutes, much faster than the reload time of any heavy artillery. Gun crews would therefore hold their fire until the last possible moment, somewhat similar to infantry tactics in the pre-industrial era of short-range firearms. The weak points of a galley remained the sides and especially the rear, the command center. Unless one side managed to outmaneuver the other, battle would be met with ships crashing into each other head-on. Once fighting began with ships locking onto one another bow to bow, the battle would be fought across the front-line ships. Unless a galley was completely overrun by an enemy boarding party, fresh troops could be fed into the fight from reserve vessels in the rear.
Galleys were used for purely ceremonial purposes by many rulers and states. In early modern Europe, galleys enjoyed a level of prestige that sailing vessels lacked. Galleys had from an early stage been commanded by the leaders of land forces, and fought with tactics adapted from land warfare. As such, they carried the prestige associated with land battles, the ultimate achievement of a high-standing noble or king. In the Baltic, the Swedish king Gustav I, the founder of the modern Swedish state, showed particular interest in galleys, as befitted a Renaissance prince. Whenever traveling by sea, Gustav, the court, royal bureaucrats and the royal bodyguard would travel by galley. Around the same time, English king Henry VIII had high ambitions to live up to the reputation of the omnipotent Renaissance ruler and also had a few Mediterranean-style galleys built (and even manned them with slaves), though the English navy relied mostly on sailing ships at the time.
Despite the rising importance of sailing warships, galleys were more closely associated with land warfare, and the prestige associated with it. British naval historian Nicholas Rodger has described this as a display of "the supreme symbol of royal power ... derived from its intimate association with armies, and consequently with princes". This was put to perhaps its greatest effect by the French "Sun King", Louis XIV, in the form of a dedicated galley corps. Louis and the French state created a tool and symbol of royal authority that did little fighting, but was a potent extension of absolutist ambitions. Galleys were built to scale for the royal flotilla on the Grand Canal at Versailles for the amusement of the court. The royal galleys patrolled the Mediterranean, forcing ships of other states to salute the King's banner, convoyed ambassadors and cardinals, and participated obediently in naval parades and royal pageantry. Historian Paul Bamford described the galleys as vessels that "must have appealed to military men and to aristocratic officers ... accustomed to being obeyed and served".
Sentencing criminals, political dissenters and religious deviants as galley rowers also turned the galley corps into a large, feared, and cost-effective prison system. French Protestants were particularly ill-treated at the oar and though they were only a small minority, their experiences came to dominate the legacy of the king's galleys. In 1909, French author Albert Savine (1859–1927) wrote that "[a]fter the Bastille, the galleys were the greatest horror of the old regime". Long after convicts stopped serving in the galleys, and even after the reign of Napoleon, the term galérien ("galley rower") remained a symbolic general term for forced labor and convicts serving harsh sentences.
Being a galley rower did not carry such stigma in the Baltic, where galley rowers were conscripts; rather, they considered themselves marine soldiers. The main building of the Finnish Naval Academy at Suomenlinna, Helsinki bears the nickname Kivikaleeri ("Stone Galley") as a legacy of the era.
The naval museum in Istanbul contains the galley Kadırga (Turkish for "galley", ultimately from Byzantine Greek katergon), dating from the reign of Mehmed IV (1648–1687). She was the personal galley of the sultan, and remained in service until 1839. She is presumably the only surviving galley in the world, albeit without her masts. She is 37 m long and 5.7 m wide, has a draught of about 2 m, weighs about 140 tons, and has 48 oars powered by 144 oarsmen.
A 1971 reconstruction of the Real, the flagship of John of Austria in the Battle of Lepanto (1571), is in the Museu Marítim in Barcelona. The ship was 60 m long and 6.2 m wide, had a draught of 2.1 m, weighed 239 tons empty, was propelled by 290 rowers, and carried about 400 crew and fighting soldiers at Lepanto. She was substantially larger than the typical galleys of her time.
In the mid-1990s, a sunken medieval galley was found close to the island of San Marco in Boccalama, in the Venice Lagoon. The hull has been dated, on the basis of its context and C-14 analysis, to between the end of the 13th and the beginning of the 14th century.
The excavation, photogrammetric survey, and 3D laser scanning of this important piece of medieval nautical archaeology began in 2001 and proceeded in two complex phases. The stratigraphic excavation of the wreck was performed entirely underwater, following standard archaeological methodology. The survey of the hull was instead carried out after the entire medieval perimeter of the submerged island had been laid dry, an operation accomplished by driving a continuous barrier of sheet piles and pumping out the water. This long excavation and documentation campaign was directed by underwater archaeologist Marco D'Agostino, with his colleague Stefano Medas as deputy director.
The lower hull is mostly intact, but it was not recovered because of the high costs involved.
- Pryor (2002), pp. 86–87; Anderson (1962), pp. 37–39
- Henry George Liddell & Robert Scott, "galeos", A Greek-English Lexicon
- Oxford English Dictionary (2nd edition, 1989), "galley"
- See for example Svenska Akademiens ordbok, "galeja" or "galär" and Woordenboek der Nederlandsche Taal, "galeye"
- Anderson (1962), pp. 1, 42; Lehmann (1984), p. 12
- Casson (1971), pp. 53–56
- Murray (2012), p. 3
- Casson (1995), p. 123
- Rodger (1997), pp. 66–68
- Glete (1993), p. 81
- Winfield (2009), pp. 116–118
- Karl Heinz Marquardt, "The Fore and Aft Rigged Warship" in Gardiner & Lavery (1992), p. 64
- Mooney (1969), p. 516
- Wachsmann (1995), p. 10
- Wachsmann (1995), pp. 11–12
- Wachsmann (1995), pp. 21–23
- Casson (1995), pp. 57–58
- Wachsmann (1995), pp. 13–18
- Casson (1995), pp. 117–21
- Casson (1971), pp. 68–69
- Morrison, Coates & Rankov (2000), p. 25
- Wachsmann (1995), pp. 28–34
- Morrison, Coates & Rankov (2000), pp. 32–35
- Casson (1991), p. 87
- Casson (1991), pp. 30–31
- Casson (1991), pp. 44–46
- Morrison, Coates & Rankov (2000), pp. 27–32
- Morrison, Coates & Rankov (2000), pp. 38–41
- D.B. Saddington (2011). "The Evolution of the Roman Imperial Fleets", in Paul Erdkamp (ed.), A Companion to the Roman Army, pp. 201–217. Malden, Oxford, Chichester: Wiley-Blackwell. ISBN 978-1-4051-2153-8. Plate 12.2 on p. 204.
- Coarelli, Filippo (1987), I Santuari del Lazio in età repubblicana. NIS, Rome, pp. 35–84.
- Morrison, Coates & Rankov (2000), pp. 48–49
- Morrison (1995), pp. 66–67
- Casson (1995), pp. 119–23
- Rankov (1995), pp. 78–80
- Rankov (1995), pp. 80–81
- Rankov (1995), pp. 82–85
- Rodger, (1997), pp. 64–65
- Unger (1980), pp. 53–55.
- Unger (1980), pp. 96–97
- Unger (1980), p. 80
- Unger (1980), pp. 75–76
- Pirenne, Mohammed and Charlemagne; the thesis appears in chapters 1–2 of Medieval Cities (1925)
- Unger (1980), pp. 40, 47
- Unger (1980), pp. 102–4
- Casson (1995), pp. 123–126
- Glete (2000), p. 2
- Mott (2003), pp. 105–6
- Pryor (1992), pp. 64–69
- Mott (2003), p. 107
- Braudel, The Perspective of the World, vol. III of Civilization and Capitalism (1979) 1984:126
- Higgins, Courtney Rosali (2012) The Venetian Galley of Flanders: From Medieval (2-Dimensional) Treatises to 21st Century (3-Dimensional) Model. Master's thesis, Texas A&M University
- Fernand Braudel, The Mediterranean in the Age of Philip II I, 302.
- Pryor (1992), p. 57
- Mallett (1967)
- Bass, p. 191
- Mott (2003), pp. 109–111
- Hattendorf & Unger (2003), p. 70
- Glete (2000), p. 18
- Glete (2000), p. 23
- Glete (2000), p. 28
- Guilmartin (1974), p. 252
- Glete (1993), p. 114
- Guilmartin (1974), p. 101
- Glete (1993), pp. 114–15
- Glete (2000), pp. 154, 163
- Glete (2000), pp. 156, 158–59
- Bamford (1973), p. 12; Mott (2003), pp. 113–14
- Mott (2003), p. 112
- Goodman (1997), pp. 11–13
- Bamford (1973), p. 12
- Mott (2003), pp. 113–14
- See especially Rodger (1996)
- Glete (2003), p. 27
- The British naval historian Nicholas Rodger describes this as a "crisis in naval warfare" which eventually led to the development of the galleon, which combined ahead-firing capabilities, heavy broadside guns and a considerable increase in maneuverability by introduction of more advanced sailing rigs; Rodger (2003), p. 245. For more detailed arguments concerning the development of broadside armament, see Rodger (1996).
- Glete (2003), p. 144
- Guilmartin (1974), pp. 264–66
- Guilmartin (1974), p. 254
- Guilmartin (1974), p. 57
- Glete (2003), pp. 32–33
- Glete (2000), p. 183
- Jan Glete, "The Oared Warship" in Gardiner & Lavery (1992), p. 99
- Rodger (2003), p. 170
- Bamford (1974), pp. 14–18
- Bamford (1974), p. 52
- Bamford (1974), p. 45
- Lehmann (1984), p. 12
- Bamford (1974), pp. 272–73
- Bamford (1974), pp. 23–25
- Bamford (1974), pp. 277–278
- Bamford, (1974), pp. 272–73; Anderson, (1962), pp. 71–73
- Glete (1992), p. 99
- Rodger (1997), p. 208–12
- John Bennel, "The Oared Vessels" in Knighton & Loades (2000), pp. 35–37.
- Rodger (2003), p. 230; see also R. C. Anderson, Naval Wars in the Baltic, pp. 177–78
- Glete (2003), pp. 224–25
- Anderson (1962), pp. 91–93; Berg, "Skärgårdsflottans fartyg" in Norman (2000), p. 51
- Glete, "Den ryska skärgårdsflottan" in Norman (2000), p. 81
- Anderson (1962), p. 95
- Bondioli, Burlet & Zysberg (1995), p. 205
- Jan Glete, "Den ryska skärgårdsflottan: Myt och verklighet" in Norman (2000), pp. 86–88
- Reid, Anthony (2012). Anthony Reid and the Study of the Southeast Asian Past. Institute of Southeast Asian Studies. ISBN 978-981-4311-96-0.
- Boxer. The Acehnese attack on Malacca in 1629. pp. 119–121.
- Iskandar, Teuku (1958). De Hikajat Atjeh. Gravenhage: KITLV. p. 175.
- Lode (1601). Tweede Boek. Amsterdam. p. 17.
- Coates (1995), p. 127
- This flower-inspired stern detail would later be widely used by both Greek and Roman ships.
- Unger (1980), pp. 41–42
- Coates (1995), p. 136–37
- Coates (1995), pp. 133–34; Morrison, Coates & Rankov (2000), pp. 165–67
- Coates (1995), pp. 137–38
- Casson (1991), pp. 135–36
- Coates (1995), pp. 131–32
- Coates (1995), pp. 138–40
- Morrison, Coates & Rankov (2000), p. 77
- Shaw (1995), pp. 164–65
- Hocker (1995), p. 88
- Rankov (1995), pp. 80–83
- Rankov (1995), p. 85
- See both Bass and Pryor
- Morrison p. 269
- Pryor & Jeffreys (2006), pp. 123–125
- Pryor & Jeffreys (2006), pp. 125–126
- Pryor (1995), p. 102
- Pryor & Jeffreys (2006), p. 127
- Pryor & Jeffreys (2006), pp. 138–140
- Pryor & Jeffreys (2006), pp. 145–147, 152
- Pryor & Jeffreys (2006), pp. 134–135
- Pryor (1995), pp. 103–104
- Pryor & Jeffreys (2006), pp. 232, 255, 276
- Pryor & Jeffreys (2006), pp. 205, 291
- Pryor & Jeffreys (2006), p. 215
- Pryor & Jeffreys (2006), p. 203
- Pryor (1995), p. 104
- Pryor & Jeffreys (2006), pp. 143–144
- Anderson (1962), pp. 52, 54–55
- Pryor (1992), p. 64
- Pryor (1992), pp. 66–69
- Anderson (1962), pp. 55–56
- Pryor refers to claims that stern rudders evolved by the Byzantines and Arabs as early as the 9th century, but refutes it due to lack of evidence. Anderson (1962), pp. 59–60; Pryor (1992), p. 61.
- Lehmann (1984), p. 31
- Guilmartin (1974), p. 216
- Guilmartin (1974), p. 200
- Lehmann (1984), pp. 32–33
- Jan Glete, "The Oared Warship" in Gardiner & Lavery (1992), p. 98
- Jan Glete, "The Oared Warship" in Gardiner & Lavery (1992), pp. 98–100
- Anderson (1962), p. 17
- Lehmann (1984), p. 22
- Morrison, Coates & Rankov, The Athenian Trireme, pp. 246–47; Shaw (1995), pp. 168–169
- Morrison, Coates & Rankov, The Athenian Trireme, pp. 249–52
- Morrison, Coates & Rankov, The Athenian Trireme, pp. 246–47
- Coates (1995), pp. 127–28
- Shaw (1995), p. 169
- Shaw (1995), p. 163
- Guilmartin (1974), pp. 210–211
- Morrison, Coates & Rankov, The Athenian Trireme, p. 248
- Pryor (1992), pp. 71–75
- Casson (1995), pp. 325–26
- Rachel L. Sargent, "The Use of Slaves by the Athenians in Warfare", Classical Philology, Vol. 22, No. 3 (Jul., 1927), pp. 264–279
- Lionel Casson, "Galley Slaves", Transactions and Proceedings of the American Philological Association, Vol. 97 (1966), pp. 35–44
- Unger (1980), p. 36
- From Italian remo di scaloccio from scala, "ladder; staircase"; Anderson (1962), p. 69
- Guilmartin (1974), pp. 226–227
- Guilmartin (1974), pp. 109–112
- Guilmartin (1974), pp. 114–119
- Unger (1980), pp. 47–49.
- Basch (2001), p. 64
- Pryor & Jeffreys (2006), pp. 153–159
- Pryor (1992), p. 42
- Wachsmann (1995), pp. 28–34, 72
- Morrison, Coates & Rankov (2000), pp. 42–43, 92–93
- John Coates (1995), pp. 133–135
- Casson (1991), p. 139
- Casson (1991), pp. 90–91
- Hocker (1995), pp. 95, 98–99.
- Pryor & Jeffreys (2006), p. 282
- Pryor (1983), pp. 193–194
- Pryor (1983), pp. 184–188
- Pryor (1983), p. 194
- Rose (2002), p. 133
- Guilmartin (1974), pp. 157–158
- Guilmartin (1974), pp. 199–200
- Guilmartin (1974), pp. 248–249
- Jan Glete, "Vasatidens galärflottor" in Norman (2000), pp. 39, 42
- Rodger (2003), p. 237
- For more information on the royal flotilla of Louis XIV, see Amélie Halna du Fretay, "La flottille du Grand Canal de Versailles à l'époque de Louis XIV : diversité, technicité et prestige" (in French)
- Bamford (1974), pp. 24–25
- Bamford (1974), pp. 275–278
- Bamford (1973), pp. 11–12
- Bamford (1973), p. 282
- The Trireme Trust
- Cornwall goes to the movies
- Scandurra, Enrico (1972), pp. 209–10
- AA.VV., 2003, La galea di San Marco in Boccalama. Valutazioni scientifiche per un progetto di recupero (ADA - Saggi 1), Venice
- D'Agostino & Medas (2003), "Excavation and Recording of the Medieval Hulls at San Marco in Boccalama (Venice)", The INA Quarterly (Institute of Nautical Archaeology), 30, 1, Spring 2003, pp. 22–28
- Anderson, Roger Charles, Oared fighting ships: From classical times to the coming of steam. London. 1962.
- Bamford, Paul W., Fighting ships and prisons: the Mediterranean Galleys of France in the Age of Louis XIV. Cambridge University Press, London. 1974. ISBN 0-8166-0655-2
- Basch, L. & Frost, H. "Another Punic wreck off Sicily: its ram" in International journal of Nautical Archaeology vol 4.2, 1975. pp. 201–228
- Bass, George F. (editor), A History of Seafaring, Thames & Hudson, 1972
- Scandurra, Enrico, "Chapter 9: The Maritime Republics: Medieval and Renaissance ships in Italy", pp. 205–224
- (in Italian) Bragadin, Marc'Antonio, Storia delle repubbliche marinare (I grandi libri d'oro), Arnoldo Mondadori Editore, 1974. ISBN 9788862880824
- Capulli, Massimo, Le Navi della Serenissima - La Galea Veneziana di Lazise. Marsilio Editore, Venezia, 2003.
- Gardiner, Robert & Lavery, Brian (editors), The Line of Battle: Sailing Warships 1650–1840. Conway Maritime Press, London. 1992. ISBN 0-85177-561-6
- Casson, Lionel, "Galley Slaves" in Transactions and Proceedings of the American Philological Association, Vol. 97 (1966), pp. 35–44
- Casson, Lionel, Ships and Seamanship in the Ancient World, Princeton University Press, 1971
- Casson, Lionel, The Ancient Mariners: Seafarers and Sea Fighters of the Mediterranean in Ancient Times Princeton University Press, Princeton, NJ. 1991. ISBN 0-691-06836-4
- Casson, Lionel, "The Age of the Supergalleys" in Ships and Seafaring in Ancient Times, University of Texas Press, 1994. ISBN 0-292-71162-X , pp. 78–95
- D'Agostino, Marco & Medas, Stefano, "Excavation and Recording of the Medieval Hulls at San Marco in Boccalama (Venice)", The INA Quarterly (Institute of Nautical Archaeology), 30, 1, Spring 2003, pp. 22–28
- Glete, Jan, Navies and nations: Warships, navies and state building in Europe and America, 1500–1860. Almqvist & Wiksell International, Stockholm. 1993. ISBN 91-22-01565-5
- Glete, Jan, Warfare at Sea, 1500–1650: Maritime Conflicts and the Transformation of Europe. Routledge, London. 2000. ISBN 0-415-21455-6
- Guilmartin, John Francis, Gunpowder and Galleys: Changing Technology and Mediterranean Warfare at Sea in the Sixteenth Century. Cambridge University Press, London. 1974. ISBN 0-521-20272-8
- Guilmartin, John Francis,"Galleons and Galleys", Cassell & Co., London, 2002 ISBN 0-304-35263-2
- Hattendorf, John B. & Unger, Richard W. (editors), War at Sea in the Middle Ages and the Renaissance. Woodbridge, Suffolk. 2003. ISBN 0-85115-903-6
- Balard, Michel, "Genoese Naval Forces in the Mediterranean During the Fifteenth and Sixteenth Centuries", pp. 137–49
- Bill, Jan, "Scandinavian Warships and Naval Power in the Thirteenth and Fourteenth Centuries", pp. 35–51
- Doumerc, Bernard, "An Exemplary Maritime Republic: Venice at the End of the Middle Ages", pp. 151–65
- Friel, Ian, "Oars, Sails and Guns: the English and War at Sea c. 1200-c. 1500", pp. 69–79
- Glete, Jan, "Naval Power and Control of the Sea in the Baltic in the Sixteenth Century", pp. 215–32
- Hattendorf, John B., "Theories of Naval Power: A. T. Mahan and the Naval History of Medieval and Renaissance Europe", pp. 1–22
- Mott, Lawrence V., "Iberian Naval Power, 1000–1650", pp. 103–118
- Pryor, John H., "Byzantium and the Sea: Byzantine Fleets and the History of the Empire in the Age of the Macedonian Emperors, c. 900-1025 CE", pp. 83–104
- Rodger, Nicholas A. M., "The New Atlantic: Naval Warfare in the Sixteenth Century", pp. 231–47
- Runyan, Timothy J., "Naval Power and Maritime Technology During the Hundred Years' War", pp. 53–67
- Hutchinson, Gillian, Medieval Ships and Shipping. Leicester University Press, London. 1997. ISBN 0-7185-0117-9
- Knighton, C. S. and Loades, David M., The Anthony Roll of Henry VIII's Navy: Pepys Library 2991 and British Library Additional MS 22047 with related documents. Ashgate Publishing, Aldershot. 2000. ISBN 0-7546-0094-7
- Lehmann, L. Th., Galleys in the Netherlands. Meulenhoff, Amsterdam. 1984. ISBN 90-290-1854-2
- Morrison, John S. & Gardiner, Robert (editors), The Age of the Galley: Mediterranean Oared Vessels Since Pre-Classical Times. Conway Maritime, London, 1995. ISBN 0-85177-554-3
- Alertz, Ulrich, "The Naval Architecture and Oar Systems of Medieval and Later Galleys", pp. 142–62
- Bondioli, Mauro, Burlet, René & Zysberg, André, "Oar Mechanics and Oar Power in Medieval and Later Galleys", pp. 142–63
- Casson, Lionel, "Merchant Galleys", pp. 117–26
- Coates, John, "The Naval Architecture and Oar Systems of Ancient Galleys", pp. 127–41
- Dotson, John E, "Economics and Logistics of Galley Warfare", pp. 217–23
- Hocker, Frederick M., "Late Roman, Byzantine, and Islamic Galleys and Fleets", pp. 86–100
- Morrison, John, "Hellenistic Oared Warships 399-31 BC", pp. 66–77
- Pryor, John H., "From dromon to galea: Mediterranean bireme galleys AD 500–1300", pp. 101–116.
- Rankov, Boris, "Fleets of the Early Roman Empire, 31 BC-AD 324", pp. 78–85
- Shaw, J. T., "Oar Mechanics and Oar Power in Ancient Galleys", pp. 163–71
- Wachsmann, Shelley, "Paddled and Oared Ships Before the Iron Age", pp. 10–25
- Mallett, Michael E. (1967) The Florentine Galleys in the Fifteenth Century with the Diary of Luca di Maso degli Albizzi, Captain of the Galleys 1429–1430. Clarendon Press, Oxford. 1967
- Mooney, James L. (editor), Dictionary of American Naval Fighting Ships: Volume 4. Naval Historical Center, Washington. 1969.
- Morrison, John S., Coates, John F. & Rankov, Boris, The Athenian Trireme: the History and Reconstruction of An Ancient Greek Warship. Cambridge University Press, Cambridge. 2000. ISBN 0-521-56456-5
- Murray, William (2012) The Age of Titans: The Rise and Fall of the Great Hellenistic Navies. Oxford University Press, Oxford. ISBN 978-0-19-538864-0
- (in Swedish) Norman, Hans (editor), Skärgårdsflottan: uppbyggnad, militär användning och förankring i det svenska samhället 1700–1824. Historiska media, Lund. 2000. ISBN 91-88930-50-5
- Pryor, John H., "The naval battles of Roger of Lauria" in Journal of Medieval History 9. Amsterdam. 1983; pp. 179–216
- Pryor, John H., Geography, technology and war: Studies in the maritime history of the Mediterranean 649–1571. Cambridge University Press, Cambridge. 1992. ISBN 0-521-42892-0
- Rodger, Nicholas A. M., "The Development of Broadside Gunnery, 1450–1650." Mariner's Mirror 82 (1996), pp. 301–24.
- Rodger, Nicholas A. M., The Safeguard of the Sea: A Naval History of Britain 660–1649. W.W. Norton & Company, New York. 1997. ISBN 0-393-04579-X
- Rose, Susan, Medieval Naval Warfare, 1000–1500. Routledge, London. 2002.
- Rodgers, William Ledyard, Naval Warfare Under Oars: 4th to 16th Centuries, Naval Institute Press, 1940.
- Tenenti, Alberto Piracy and the Decline of Venice 1580–1615 (English translation). 1967
- Unger, Richard W. The Ship in Medieval Economy 600-1600 Croom Helm, London. 1980. ISBN 0-85664-949-X
- Winfield, Rif (2009) British Warships in the Age of Sail, 1603–1714: Design, Construction, Careers and Fates. Seaforth, Barnsley. ISBN 978-1-84832-040-6
- "Galley". Encyclopædia Britannica (11th ed.). 1911.
- John F. Guilmartin, "The Tactics of the Battle of Lepanto Clarified: The Impact of Social, Economic, and Political Factors on Sixteenth Century Galley Warfare". A very detailed discussion of galley warfare at the Battle of Lepanto
- (in Spanish) Rafael Rebolo Gómez, "The Carthaginian navy", 2005, Treballs del Museu Arqueologic d'Eivissa e Formentera.
- "Some Engineering Concepts applied to Ancient Greek Trireme Warships", John Coates, University of Oxford, The 18th Jenkin Lecture, 1 October 2005. |
Researchers have recently discovered that some female smalltooth sawfish are reproducing in the wild without the need of male sexual partners.
The smalltooth sawfish is one of five species of sawfish, a group of large rays known for their long, tooth-studded rostrum, which they use to subdue small fish. The researchers say that sawfish could be the first family of marine animals to be driven to extinction by overfishing and coastal habitat loss.
Historically, the smalltooth sawfish inhabited the western Atlantic Ocean from New York to central Brazil, including the Gulf of Mexico, and the eastern Atlantic along the central west coast of Africa. The species was once common on both Florida coasts, especially in the late 1800s and early 1900s. In the late 1800s, a fisherman on the east coast reported capturing 300 sawfish in the Indian River Lagoon during one season. When commercial netting became more prevalent, fishers often accidentally caught sawfish because the rostrum (saw) can easily become entangled in nets.
Now, smalltooth sawfish are no longer found throughout their full historical range and are mainly found today in a handful of locations in southern Florida, including the Caloosahatchee and Peace Rivers.
Earlier evidence that vertebrates might sometimes reproduce without a mate via a process called parthenogenesis had primarily come from isolated examples of captive animals--including birds, reptiles, and sharks. In those instances, the animals in question surprised their keepers by giving birth despite the fact that they'd had no opportunity to mate. In addition, researchers recently reported two free-living female snakes, each pregnant with a single parthenogen, but it was not known if these embryos would have lived in the wild. Therefore, no one really knew if this phenomenon took place to any significant extent in wild populations.
Demian Chapman of Stony Brook University in New York and his colleagues from the Pritzker Laboratory at the Field Museum of Chicago and the Florida Fish and Wildlife Conservation Commission made the discovery that vertebrate parthenogens can and do live in the wild after conducting some routine DNA fingerprinting of smalltooth sawfish in a Florida estuary. The researchers' DNA analyses show that about 3% of the sawfish in their studies are products of this unusual form of reproduction.
"We were conducting routine DNA fingerprinting of the sawfish found in this area in order to see if relatives were often reproducing with relatives due to their small population size," says lead author of the study, Andrew Fields, a PhD candidate at the Stony Brook University's School of Marine and Atmospheric Science. "What the DNA fingerprints told us was altogether more surprising: female sawfish are sometimes reproducing without even mating."
Parthenogenesis is common in invertebrates but rare in vertebrate animals, the researchers explain. Vertebrate parthenogenesis is thought to occur when an unfertilized egg absorbs a genetically identical sister cell. The resulting offspring have about half of the genetic diversity of their mothers and often die.
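The detection logic implied here can be made concrete: a parthenogen produced when an egg absorbs a genetically identical sister cell should be homozygous at essentially every genotyped locus, whereas a sexually produced fish almost never is. The Python sketch below is illustrative only, not the study's actual pipeline; the genotypes and locus names are invented.

```python
# Illustrative screen for candidate parthenogens in microsatellite
# genotype data: flag individuals that are homozygous at every locus.
# Data below are invented for illustration.

def is_candidate_parthenogen(genotype: dict[str, tuple[int, int]]) -> bool:
    """genotype maps locus name -> (allele1, allele2) as repeat counts."""
    return all(a1 == a2 for a1, a2 in genotype.values())

fish = {
    "sawfish_001": {"loc1": (8, 8), "loc2": (12, 12), "loc3": (5, 5)},
    "sawfish_002": {"loc1": (8, 9), "loc2": (12, 12), "loc3": (5, 7)},
}

candidates = [tag for tag, g in fish.items() if is_candidate_parthenogen(g)]
print(candidates, f"({len(candidates) / len(fish):.0%} of sample)")
```

With enough variable loci, the chance that an ordinary outbred offspring is homozygous everywhere becomes vanishingly small, which is what makes a screen like this informative.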
"There was a general feeling that vertebrate parthenogenesis was a curiosity that didn't usually lead to viable offspring," says Gregg Poulakis of the Florida Fish and Wildlife Conservation Commission, who led field collections of the sawfish.
And yet the seven parthenogens the researchers found appeared to be in perfect health. All of the animals were tagged and released back into the wild as part of an ongoing study of sawfish movements.
"Occasional parthenogenesis may be much more routine in wild animal populations than we ever thought," says Kevin Feldheim of the Pritzker Laboratory at the Field Museum of Chicago, where the DNA fingerprinting was conducted.
It's possible that this form of reproduction occurs mainly in small or dwindling populations. The researchers are now encouraging others to screen their DNA databases in search of other hidden instances of vertebrate parthenogens living in the wild.
As for smalltooth sawfish, it's possible this ability could keep them going for a little longer. But it won't be enough to save them.
"This should serve as a wake-up call that we need serious global efforts to save these animals," Feldheim says.
Image Credit: Florida Fish and Wildlife Conservation Commission (FWC)
More than a decade ago the Cassini probe entered orbit around Saturn. It was a risky mission. We had never orbited such a distant planet, which meant there was plenty to go wrong. Cassini also carried a companion probe, known as Huygens, whose mission was to land on Titan. Cassini has been a remarkable success, and has given us an unprecedented view of Saturn and its moons. But now the aging spacecraft is running out of power, so it's time for Cassini to have one last mission.
It would be easy to let Cassini simply run out of power, letting it drift around Saturn like a cold rock. But some of Saturn’s moons, such as Enceladus, have conditions that could be suitable for life. If Cassini were to crash on a moon hundreds of years from now, there’s a small chance it could contaminate it with terrestrial life. So a better option is to bring it into a close orbit with Saturn, eventually letting the spacecraft burn up in Saturn’s atmosphere. This is the idea behind the Grand Finale mission.
During the Grand Finale, Cassini will pass between Saturn and its rings several times. It’s a tricky maneuver, since the gap between planet and rings is small on an astronomical scale. It’s also a region where no spacecraft has ever explored, so we aren’t entirely sure what to expect. For example, as Cassini passes between the rings and Saturn, the gravitational tugs of both will shift Cassini’s orbit slightly. The amount of shift will let us calculate the mass of Saturn’s rings. We have a basic idea of the rings’ mass, but this will give us a far more precise answer. It will also give us a view of the inner region of Saturn’s ring system, which we haven’t had yet. If Cassini survives all the way into Saturn’s atmosphere, we could also get a better understanding of Saturn’s composition.
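As a rough illustration of why that orbital shift encodes the ring mass, the toy sketch below compares the pull of the rings, crudely modeled as a point mass, with Saturn's own pull on the spacecraft. Saturn's mass is a standard value; the ring mass and both distances are placeholder numbers chosen purely for illustration.

```python
# Toy order-of-magnitude comparison: the rings' gravitational tug on the
# spacecraft versus Saturn's. The rings are crudely treated as a point
# mass; M_RINGS and both distances are illustrative placeholders.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SATURN = 5.68e26   # Saturn's mass, kg (standard value)
M_RINGS = 1.5e19     # placeholder ring mass, kg

r_saturn = 63_000e3  # distance from Saturn's center, m (illustrative)
r_rings = 25_000e3   # distance from the ring mass, m (illustrative)

a_saturn = G * M_SATURN / r_saturn**2
a_rings = G * M_RINGS / r_rings**2

print(f"Saturn's pull: {a_saturn:.2e} m/s^2")
print(f"Rings' pull:   {a_rings:.2e} m/s^2 ({a_rings / a_saturn:.1e} of Saturn's)")
```

The ring term is tiny, but accumulated over a close pass it perturbs the trajectory enough to show up in radio Doppler tracking, and that residual is what the ring mass is solved from.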
This is a high-risk mission, and it's possible that Cassini will fail to complete it. That's why it hasn't been tried until now. But Cassini has reached a point where it has little to lose, and a Grand Finale now holds a lot of promise.
(Nanowerk News) In the tiny world of amino acids and proteins and in the helical shape of DNA, a biological phenomenon abounds.
These objects are all chiral: they cannot be exactly superimposed on their mirror images by translation or rotation. A common example of this is human hands: a right hand cannot be superimposed onto its mirror image, a left hand. This description of a molecule's symmetry (or lack thereof) is important in determining the molecule's properties in chemistry.
But while scientists and engineers know that the weak force is chiral at the subatomic level, how electrostatic forces can generate a chiral world is still a mystery.
Researchers at Northwestern University in the group of Monica Olvera de la Cruz, professor of materials science and engineering and chemical and biological engineering at the McCormick School of Engineering and Applied Science, have recently shown how electrostatic interactions (commonly known as static electricity) alone can give rise to helical shapes. The group has constructed a mathematical model that can capture all possible regular shapes chiral objects could have, and they computed the preferred arrangements induced by electrostatic interactions.
Their work appears online and will be published as the cover story in the journal Soft Matter.
"In this way we are simply letting nature tell us how it would like to be, and we generalize it to many different systems," Olvera de la Cruz says." She and her colleagues report that chirality can only spontaneously arise as a consequence of electrostatic interactions and does not require the presence of other more complicated interactions, like dipolar or short-range van der Waals interactions.
Their model also describes arrangement of DNA mixed with carbon nanotubes. DNA has been shown to form helices around nanotubes, thereby separating the different types of carbon nanotubes into families.
The research findings concur with previous research using microscopy.
"From our predicted helical shapes of DNA wrapped around carbon nanotubes, we found amazing correspondence to those that were recently measured by atomic force microscopy," Olvera de le Cruz says.
The work shows that electrostatics is a pathway for understanding how nature generates helical symmetries. Researchers hope that future work can show how to use simple interactions to generate other symmetries that drive complex phenomena. |
Managing crops and animals near shorelands
Crops and animals affect water quality
Rainfall and snow melt running off farmland or seeping into the ground can carry pollution into lakes and streams. Pollution carried by runoff is called nonpoint source pollution. In the past, nonpoint pollution from one farm or field was easy to dismiss as insignificant, but it cannot be ignored any longer because the sum of thousands of nonpoint pollution sources is the main cause of today's water quality problems. Raising crops and animals can contribute to nonpoint pollution if runoff is not properly treated.
Nonpoint pollution in NE Minnesota
Northeastern Minnesota is blessed with an abundance of clean water. Our lakes and streams are important to tourism, recreation, and the residents who live or vacation in our area.
Nonpoint source pollution from crops and animals in northern Minnesota results from operations ranging from dairy and beef farms to sled dog kennels and hobby horse farms. These operations have the potential to send nutrients and organic matter into surface water. Pasturing animals along stream banks can also cause erosion that adds sediment to lakes and streams. Sheet and rill erosion strip away topsoil from steep fields that are farmed in continuous row crops. The topsoil that ends up in lakes and streams often carries nutrients and pesticides along with it.
Major agricultural pollutants
The major nonpoint source pollutants are sediment, nutrients, pesticides, bacteria, and oxygen-demanding substances.
- Sediment: Eroded soil particles from fields, ditches, and streambanks make water turbid, damaging fish and plant habitat and reducing water's aesthetic appeal; sediment may carry nutrients and heavy metals with it.
- Nutrients: Fertilizer or animal waste in runoff water delivers nutrients such as phosphorus and nitrogen to lakes and streams, causing excessive algae and weed growth; high nitrate concentrations in drinking water can present a health threat for infants.
- Pesticides: Agricultural chemicals such as insecticides or herbicides can wash off crops and fields into lakes and streams where they can be toxic to fish and other aquatic life; some pesticides pose a threat to human health if they reach drinking water supplies.
- Bacteria: Runoff or seepage from feedlots and failing septic systems can carry coliform bacteria into surface and ground water, presenting health risks for drinking or body contact.
- Oxygen-demanding substances: Manure, sewage, crop residue, and other decaying organic matter use up oxygen needed by fish.
BMPs to prevent nonpoint source pollution
Figure 1 illustrates several BMPs designed to minimize the impact of agriculture on nearby lakes and streams.
Figure 1: Several BMPs work together to control agricultural runoff.
(1) Cropped land erosion control
Careful management of your tillage practice can lead to a more profitable farm operation, reduce erosion, and improve water quality. Adding these management choices to your tillage options can enhance your operation. Some tillage options to consider are:
- mulch tillage
- no-till and ridge-till systems
- contouring and grass field borders
- strip cropping
Many operations still use the moldboard plow in a conventional tillage system. On heavy soils, fall plowing is the best option, but the ground should be left rough and cloddy. Winter conditions can help improve your soil structure by reducing clump size. Leaving a rough surface also helps cut down on surface erosion. Never disc a fall plowing unless it is early enough to establish a fall cover crop; discing or making seedbeds in the fall creates the opportunity for significant soil and nutrient loss. On lighter soils, spring plowing is your best option and can reduce overall soil erosion.
Some basic BMP practices such as soil management, crop rotation, nutrient management, and seeding fragile and drainage areas with grass for sediment control can greatly increase the profitability of your long-term farming operations. At the same time, negative impacts to water quality will be lessened.
(2) Diversions and roof gutters
A diversion is a permanently vegetated ridge constructed at the base of a slope to safely divert the runoff. Gutters simply redirect significant amounts of water away from building foundations or, in this case, an animal barnyard.
(3) & (4) Manure catchment
This structure allows for the buildup of manure and channels liquid manure to a single outlet. Liquid manure can be either stored and used to fertilize fields or "treated" by a grass filter strip. Solid manure within the catchment can be removed during the growing season and applied to the field, adding organic matter and nutrients. There are many designs and methods of storage for managing both solids and liquids.
(5) Grass filter strip
This is permanent grass sod that filters potentially harmful nutrients from the manure catchment area. In the growing months, excess nutrients can be utilized by the grasses. This method is enhanced by the addition of a buffer strip between the grass filter strip and the stream.
(6) Buffer strips
Along lakes and streams, removal of excess nutrients can be enhanced by the use of buffer strips. These consist of natural or planted woody vegetation along the edge of the stream or lake. In this case, red pine and spruce trees were planted. The buffer strip acts to:
- stabilize soil
- trap nutrients by filtering runoff
- shade and cool the water to improve aquatic habitat
The wider the buffer strip, the greater its effectiveness. Planting high value tree species could increase your farm's future value.
(7) Stream crossing
The least expensive method is to make a low-flow gravel crossing allowing livestock access to pasture on the other side of the stream. Fencing, with gates, can be installed on either side of the crossing to prevent livestock from walking along the stream.
Culverts and bridges are more costly but might be necessary in sensitive areas. These also can be built to allow machinery to cross.
Fencing animals out of lakes or streams will prevent water pollution. Watering your animals can be done with electric pumps, solar-powered pumps, mechanical nose-pumps, and stock watering ponds. Permits may be needed for work done along streams or lakeshores.
(8) Pasturing livestock
Intensive rotational grazing provides better forage for your animals while improving sod and soil coverage between grazing cycles and can reduce overall erosion. Fencing animals from sensitive areas is also important.
(9) Unusable land conversion
Highly erodible and marginal fields can be converted to various uses depending on your objectives. Changing marginal cropland or pastures can provide long-term benefits both financially and environmentally. Some conversion possibilities are:
- intercropping trees and pasture
- planting nut trees or high quality timber
- planting Christmas trees
- using native and imported species for wildlife habitat
(10) Fuel, fertilizer & pesticide storage
A small amount of fuel oil, gasoline, diesel, fertilizer, or other chemicals can contaminate a large volume of water. Here are some suggestions:
Fuel Oil, Gasoline, and Diesel
- Locate tanks away from other buildings and water.
- Dike the area around above-ground tanks to contain spills.
- Follow maintenance, safety, and disposal precautions.
Fertilizer and Other Chemicals
- Store only small amounts for short periods.
- Clearly mark containers and check their condition.
- Cover and store on a sealed surface to contain any spills.
- Properly dispose of outdated unused chemicals; contact your county solid waste officer who may accept unused chemicals free of charge.
Silage
Improperly contained silage can contaminate ground and surface water. Using basic BMPs minimizes risk from these operations:
- Store silage away from any water source.
- Provide impermeable surface soil around the storage.
- Install a seepage collection system.
- Divert clean water away from area.
- Adequately cover silage.
Regulations that apply
Owners of feedlots with more than ten animal units are required to have a feedlot permit, available from the MN Pollution Control Agency (PCA). Check with local zoning authorities for assistance.
Program assistance for agricultural BMPs
Programs are available to help individuals cover up to 75% of the cost of applying BMPs. Many animal owners have used this assistance to apply systems such as the ones shown in Figure 1. They find these practices save time and money. Valuable organic fertilizer is stored for use on fields rather than flowing downstream.
The Soil and Water Conservation Districts (SWCD), the MN Board of Water and Soil Resources (BWSR), the University of Minnesota Extension, and the U.S. Department of Agriculture (USDA) agencies of the Natural Resources Conservation Service (NRCS) and the Farm Services Administration (FSA) all offer programs to help people plan and adopt BMPs. Through the SWCD, state and federal cost-share programs are available to help people apply these practices. Planning and design assistance is offered at no cost and up to 75% of the installation cost can be covered by cost-share dollars.
For more information
Regional offices of MN State agencies:
Agriculture and Water Quality--Best Management Practices for Minnesota. MN Pollution Control Agency
Running your Feedlot for Farm Economy and Water Resource Protection. MN Pollution Control Agency
Nitrogen Management for Livestock Producers. Beltrami Soil and Water Conservation District.
Protecting Minnesota's Water Resources--Best Management Practices for Atrazine and Nitrogen. MN Department of Agriculture.
Shoreland Best Management Practices
Lithium (from the Greek word "lithos" meaning stone) is the lightest of all metals. In purified form, it possesses some unique properties. For example, it has the highest specific heat of any solid element and is therefore very useful in heat transfer applications; however, it is corrosive and requires special handling. Lithium is also a leading contender as a battery anode material due to its high electrochemical potential.
It does not occur "free" (purified or uncombined) in nature, but is found combined (ionized) as various salts in nearly all igneous rocks and hydrated in many mineral springs* and in seawater. Traces of lithium are also found in numerous plants, plankton, and invertebrates, ranging from 69 to 5,760 parts per billion. Nearly all vertebrate tissue and body fluids have also been found to contain lithium at slightly lower levels, ranging from 21 to 763 parts per billion. However, it is not known whether lithium has a physiological role in any of these organisms. In natural seawater, lithium is found hydrated at very low levels (Actual Test Results), usually ranging from 140 to 250 parts per billion. Only near hydrothermal vents** under the oceans does natural seawater contain elevated levels of lithium. Here it approaches 7,000 parts per billion.
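For scale, the quoted concentrations convert to molar units as follows (a minimal sketch; the only assumption is the usual dilute-solution equivalence of 1 ppb to about 1 microgram per litre):

```python
# Convert the seawater lithium range quoted above from parts per billion
# to molar concentration. 1 ppb ~= 1 microgram per litre for dilute
# aqueous solutions (an approximation; seawater density is ~1.025 kg/L).

LI_MOLAR_MASS = 6.94  # g/mol

for ppb in (140, 250, 7000):
    ug_per_l = ppb                     # approximate equivalence
    umol_per_l = ug_per_l / LI_MOLAR_MASS
    print(f"{ppb:>5} ppb ~ {umol_per_l:7.1f} micromol/L")
# Even the hydrothermal-vent figure of ~7,000 ppb is only about 1 mmol/L.
```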
The lithium ion is exceptionally small and therefore has an exceptionally high charge-to-radius ratio. As a result, its properties are considerably different from those of similar ions found at much higher concentrations in seawater, such as sodium and potassium. Conversely, its hydrated radius (the size of the ion in water solution) is much larger than that of similar ions found in seawater, and it therefore exhibits very different solution behavior, such as lowered vapor pressure, a depressed freezing point, and other colligative effects. This large solvation shell around the lithium ion also gives it lower mobility in solution.
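To make the charge-to-radius claim concrete, here is a small comparison using standard Shannon six-coordinate ionic radii (textbook values; the ratio is simply the +1 charge divided by the radius):

```python
# Charge-to-radius comparison for the alkali ions discussed in the text.
# Radii are standard Shannon six-coordinate ionic radii in picometres.

ionic_radius_pm = {"Li+": 76, "Na+": 102, "K+": 138}

for ion, r in ionic_radius_pm.items():
    ratio = 1 / r  # all three carry a single +1 charge
    print(f"{ion}: radius {r} pm, charge/radius {ratio:.4f} e/pm")
# Li+ comes out ~35% higher than Na+ and ~80% higher than K+, which is
# why it binds water so strongly and drags a large hydration shell along.
```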
The aforementioned properties of the lithium ion contribute to its unique behavior and effects in a variety of environments. The chloride salt of lithium is one of the most hygroscopic materials known and is therefore used in many air conditioning and drying systems. Of greater relevance is its ability to control crystal shape in synthetic crystals of aragonite. This property is interestingly reminiscent of the morphology observed in the shells of many mollusks. In general, the lithium ion has been well documented to have profound biological effects of varying intensity on the gamut of life forms. Numerous publications have confirmed its influence on enzyme activity, metabolism, respiration, and active transport.
More specifically, the lithium ion has demonstrated significant bacteriostatic activity towards various lactic acid microorganisms. This has prompted the development of a lithium-containing toothpaste to prevent dental caries; however, lack of clinical experience precludes assessment of its therapeutic value. In the late nineteenth century, Herbst discovered that the lithium ion has powerful morphogenetic effects on developing sea urchin eggs. Since then, lithium has been proven to cause morphological deviations in representatives of almost the entire animal kingdom, but its effects seem most dramatic in primitive embryological systems. Deviations ranging from exogastrulation to cyclopia (development of one-eyed monsters) have been observed in tunicates, cyclostomes, teleosts, cephalopods, and many other organisms. These effects are believed to be a result of the lithium ion's ability to interfere with metabolic processes that are responsible for the determination and differentiation of a developing embryo.
One of the most striking features of the lithium ion is its resemblance to the sodium ion with respect to transport in biological systems. Experiments on goldfish gills and amphibian skin have clearly demonstrated an active transport of the lithium ion through tissues. However, as noted earlier, the effects of the lithium ion are distinctly different from those of the sodium ion on various tissues and organisms. These effects have also been extensively studied on nerve tissue from many organisms. The results of these studies are beyond the scope of this work, but it seems prudent to mention that lithium causes a variety of changes in nerve tissue ranging from mild to severe and sometimes are permanent.
A prominent effect of lithium on the nervous system can be found in the field of medicine. Here it is used to treat severe mental disorders such as manic-depression by controlling mania and stabilizing mood swings. Although its therapeutic value has been debated, there is no doubt that a number of manic patients have benefited from lithium treatment. Its effects differ markedly from the results of treatment with tranquilizing agents. The patients do not appear to be "drugged" or drowsy, but are quiet and cooperative, often capable of normal daily activities while on a maintenance treatment. The lithium ion seems to counteract the manic symptoms in a specific way; however, little is understood about the mechanism of its therapeutic action. The literature states that lithium is relatively toxic to animals and man. Interestingly, its toxicity is not only related to the amount of lithium administered but is also dependent upon the sodium intake.
In closing, it is evident that lithium can have some profound effects on a variety of chemical and biological systems. Although it has been intensively studied, little is known about the details of lithium's action, including the required concentrations and the mechanisms by which changes occur.
(1) A. Cotton and G. Wilkinson, Advanced Inorganic Chemistry - A Comprehensive Text. Fourth Edition, Chapter Seven.
- (2) S. Sims, J. Didymus, and S. Mann, Habit Modification in Synthetic Crystals of Aragonite and Vaterite. J. Chem. Soc., Chem. Commun., 1031–1032 (1995)
- (3) E. Gralla and H. McIlhenny, Studies in Pregnant Rats, Rabbits and Monkeys with Lithium Carbonate. Toxicol. Appl. Pharmacol. 21, 428–433 (1972)
- (4) M. Schou, Biology and Pharmacology of the Lithium Ion. Pharmacol. Rev. 9, 17–58 (1957)
* The lithium content of mineral waters is usually about 7,000 parts per billion (Actual Test Results) and was believed to be responsible for its reputed effects on rheumatoid diseases including gout.
** Formed deep within the oceans where the earth's crust opens and heats nearby seawater to almost 400°C. This creates an unusual chemistry rich in minerals and hydrogen sulfide. As a result, unique biological communities colonize these vent environments, which contain many organisms that are not found anywhere else on earth.
Bronchitis is a respiratory disease that is characterised by the inflammation of the airways. Generally, a virus is the cause of this disease, but there are other environmental possibilities. Bronchitis can become chronic, which means that it recurs often. When it does, common medications and treatment will not likely work. A nebuliser is an effective treatment to use when bronchitis is persistent. Its potential to relax the airways is a crucial part of treating this disease.
Bronchitis is a respiratory disease that often accompanies a cold or other respiratory infection. Chronic bronchitis can be caused by smoking, exposure to chemical fumes, as well as pollutants. In some cases, ongoing bronchitis may actually be asthma, so see your doctor to be properly diagnosed.
Because bronchitis is associated with a cold, some of the symptoms are also similar. A low-grade fever, along with fatigue, are often present with bronchitis. However, the most obvious sign is a persistent, productive cough that has thick, coloured mucus. The cough is often accompanied by chest tightness, shortness of breath, and wheezing, which is a whistle-like sound that you hear when you breathe.
What is a Nebuliser?
A nebuliser is a portable machine that works with an air compressor. Tubing connects the compressor to a liquid medication cup, which is connected to either a mouthpiece or a mask. A nebuliser takes a liquid medication, such as albuterol, and changes it into a mist form that is inhaled. Nebulisers are most commonly used to treat asthma, but are just as effective when treating bronchitis.
Who should use a Nebuliser?
Some people who have bronchitis will use a fast-acting or rescue inhaler for relief of inflamed airways. Unfortunately, an inhaler is not suitable for everyone, as it can be difficult to use, and when used incorrectly might not dispense the correct dose of medication. A nebuliser is ideal for the elderly, babies and young children due to its ease of use. When the mask is in place, all that is needed is regular respiration.
Proper care of your nebuliser is essential, so be sure to clean it after each use. An unclean nebuliser can get clogged, as well as carry the possibility for infection. Proper care involves washing the medicine cup in warm, soapy water after each use and letting it air dry. After your last use of the day, wash the cup and mouthpiece or mask in the same way, and let air dry. It is important to wash the nebuliser parts in a vinegar and water solution every three days. Soak the parts for 20 minutes in 1/2 a cup of vinegar and 1 1/2 cups of water, and let air dry. |
US nanotech boffins track evanescent light
By following the Poynting vector, of course
Researchers in the States have found a way of predicting how evanescent light waves might behave. The breakthrough could clear the way for a new generation of nanoscale optical devices, including solar thermal energy technologies.
When things get very small, nanoscale small, the rules all change and almost every assumption based on knowledge of the big world has to be checked and checked again. For example, from our experience with lasers we know that light travels in a nice straight line, right?
Wrong. Make the path the light has to travel down small enough, and the poor photons weave across it like so many drunken fools wheeling home from a pub. Sort of.
Specifically, we're talking about evanescent light, the light that is emitted when photons are radiated between two surfaces separated by a gap that is less than the wavelength of light. The light wave is interrupted and - until now - unpredictable evanescent waves are produced.
"Understanding the behaviour of light at this scale is the key to designing technologies to take advantage of the unique capabilities of this phenomenon," said Zhuomin Zhang, a lead researcher on the project and a professor in the Woodruff School of Mechanical Engineering.
"This discovery gives us the fundamental information to determine things like how far apart plates should be and what size they should be when designing a technology that uses nanoscale radiation heat transfer."
The team set out to track the evanescent waves by tracking the direction of the electromagnetic energy flow (a Poynting vector), rather than the direction of the photons themselves (which is unknowable, in physics terms). Even electrodynamics is different at the nanoscale, and the cornerstone of the research was working out what those differences are.
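For reference, the Poynting vector the team followed is the standard quantity from classical electrodynamics (textbook material, not anything specific to this study):

$$\mathbf{S} = \mathbf{E} \times \mathbf{H}, \qquad \langle \mathbf{S} \rangle = \tfrac{1}{2}\,\operatorname{Re}\left(\mathbf{E} \times \mathbf{H}^{*}\right)$$

Here E and H are the electric and magnetic fields, and the time-averaged form applies to harmonic (single-frequency) fields. The vector points along the local direction of energy flow, which remains well defined even where individual photon paths are not.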
"We’re using classic electrodynamics to explain the behaviour of the waves, not quantum mechanics," Zhang said. "We’re predicting the energy propagation - and not the actual movement - of the photons."
Zhang explains that the team observed the light bending as the photons tunnelled through the vacuum separating the surfaces, just nanometres apart. The evanescent waves separated during this process, allowing the team to predict the energy path of the waves.
This information is vital to the construction of near-field thermophotovoltaic systems, nanoscale imaging based on thermal radiation scanning tunnelling microscopy and scanning photon-tunnelling microscopy, the researchers said. ® |
Alphabet Letter A Ape
Preschool Lesson Plan Printable Activities and Worksheets
Here are printable materials and some suggestions to present letter A (long a sound).
An ape is a mammal in the group of primates, which includes chimpanzees, gibbons, gorillas, and orangutans. Apes do not have tails. They have very flexible hands and feet. Apes live in the forests of Africa and Asia. Apes are endangered animals.
Review the apes theme lesson plan and crafts to present additional information, images, and printable activities about apes.
Alphabet Activity > Letter A is for Ape (long a sound)
Present and display your choice of printable materials from the materials list at the end of this section.
Children Age 3 and under:
* Print a letter A coloring page in standard block or D'Nealian font and an ape coloring page image behind it or on a separate page if using paints to decorate later. Discuss other letter A words found on the worksheet.
* Finger Tracing: Trace letter A's in upper and lower case with your finger as you also sound out the letter. Invite the children to do the same on their coloring page.
* Children can trace and color the letter A's. After coloring the letter, encourage children to color the ape image. Write the word ape on the finished coloring page.
Children Ages 3+:
Alphabet - beginning letter A sound (short sound) > Present the Letter A Worksheet and Mini Book program. Read suggested instructions for using the worksheet and mini-book. These materials can be used to reinforce letter practice and to identify related words.
Finger and Pencil Tracing: Trace letter A's in upper and lower case with your finger as you also sound out the letter. Invite the children to do the same on their worksheet.
Encourage the children to trace the dotted letter, and demonstrate the direction of the arrows and numbers that help them trace the letter correctly. During the demonstration, you may want to count out loud as you trace so children become aware of how the number order aids them in the writing process.
Find the letter A's: Have the children find all the letter A's in upper and lower case on the page and encourage them to circle or trace/shade them first.
Discuss other letter A words and images in the worksheet. You can also display other posters and coloring pages or even make a letter A classroom book. Visit Letter A Alphabet Printable Activities to make your choice.
Advanced Handwriting Practice:
1. Print your choice of printable lined-paper. Have children draw an ape behind the lined paper or select and print an ape coloring page to color and decorate and writing practice.
2. Drawing and writing paper: encourage children to draw and color an ape in the rainforest or jungle and practice writing letter A a.
Materials:
- Letter A coloring page
- Letter A Worksheet & Mini-Book (D'Nealian & standard block fonts)
- Drawing & writing paper
- Apes coloring pages
Textbook illustrations and museum dioramas could soon be even more accurate in their depiction of the rich colors of long-extinct animals like dinosaurs. An international team of scientists used advanced X-ray imaging techniques to map out elements related to pigmentation in modern birds of prey, which they will use to reconstruct the likely color patterns of fossil specimens.
Scientists at the Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, and the United Kingdom’s Diamond Light Source teamed up with researchers from the University of Manchester on the experiments, described in a new paper in Scientific Reports.
The critical factor here is melanin, which determines variation in skin tone in humans and plays a big role in the coloring of mammals and birds. But little is known about its exact chemistry, according to lead author Nick Edwards of the University of Manchester, because the stuff is notoriously difficult to characterize via the usual methods.
That’s where the advanced X-ray imaging capabilities come in. “These techniques are non-destructive, so they don’t destroy the samples you are studying,” SLAC scientist and co-author Dimosthenis Sokaras told Gizmodo. “And with the rapid scan imaging we have developed at SLAC, we can analyze objects ranging from a few centimeters to tens of centimeters in size in a few hours.”
This latest work builds on earlier research from 2011, when SLAC scientists used x-ray imaging to examine the fossilized remains of two birds (Confuciusornis sanctus) that lived 120 million years ago. They found traces of the pigment eumelanin, responsible for the brown eyes and dark hair found in many modern species, including humans. It would have been one factor (among many) that determined the patterns of colors for those birds.
But eumelanin isn’t the only pigment of interest to the scientists. Pheomelanin also plays a role, notably in the production of reddish/yellow hues. At the time, the team just didn’t have sufficient data to conduct a similar examination focusing on pheomelanin.
Past studies have shown that pigmented tissues are richer in certain long-lived trace elements like zinc, calcium, and copper. So it made sense to target those elements in the new experiments. It also made sense to test this hypothesis in modern birds rather than ancient fossils, since the colors and pigmentation patterns are already known.
The team collected feathers from four species of birds of prey, shed naturally in sanctuaries: the Harris hawk, the red-tailed hawk, the kestrel, and the barn owl. Then they used X-ray fluorescence imaging to pinpoint concentrations of those key elements in the feathers.
This in turn enabled researchers to distinguish between the two types of melanin in samples, because there were subtle differences in the concentrations between the two. For instance, the presence of zinc bonded to sulphur compounds indicates the feather has pheomelanin, so it should have a reddish/yellow hue.
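As a loose illustration of that inference step (not the authors' actual analysis), one could imagine classifying a measured feather spot from its elemental signature. The function, thresholds, and parameter names below are entirely hypothetical.

```python
# Hypothetical sketch of the inference described above: classify a feather
# spot as eumelanin- or pheomelanin-rich from its X-ray fluorescence
# signature. The decision rule and numbers are invented for illustration;
# the real study worked from calibrated spectra, not simple thresholds.

def classify_spot(zn_counts: float, zn_s_bonded_fraction: float) -> str:
    if zn_counts < 100:              # hypothetical background cutoff
        return "unpigmented"
    if zn_s_bonded_fraction > 0.5:   # zinc largely sulphur-bound
        return "pheomelanin-rich (reddish/yellow)"
    return "eumelanin-rich (black/brown)"

print(classify_spot(zn_counts=850, zn_s_bonded_fraction=0.7))
```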
Now the Manchester researchers plan to apply what they've learned about those trace metal concentrations to fossilized specimens like those ancient birds, or dinosaurs. Was T. rex predominantly black or dark brown, or did its coloration fall more in the reddish/yellow range?
“A fundamental rule in geology is that the present is the key to the past,” geochemist and senior author Roy Wogelius said in a statement. “This work on modern animals provides another chemical ‘key’ for helping us to accurately reconstruct the appearance of long extinct animals.” |
Explore Primary Sources
Teach about the social traditions of the Pueblo people before and after Spanish arrival through the study of ceramic bowls.
Explore the power of language and the religious conversion of native peoples in New England by studying the Eliot Indian Bible.
Discover the relationship between the Plains Indians and U.S. traders through the materials used in the making of a side-fold dress.
Rationale for Exploring Settler-Native Interactions in North America Through Primary Sources
Looking at interactions between North American natives and European settlers through primary sources offers us fresh and sometimes surprising insights into the fascinating exchanges that took place in early America as peoples encountered others who were different. It allows us to look beyond school textbook accounts of political and military conflicts or alliances to witness the plentiful cross-pollination between cultures. Indians and settlers were often intrigued by one another’s ways, and open to adopting items, ideas and motifs that they found useful or pleasing. We see products of these encounters emerging that are hybrids of cultures – and are no less "authentically" Indian or colonial for being so. Examining remains of these interactions also helps us to dispel the seeming silence of native populations, as their words and ideas have been preserved in many forms. Texts, visual art, artifacts and physical structures all document ways that native peoples interacted with the Spanish, French and British in North America. They offer a richer and more complete story of what the encounters meant to the people involved, and give students a chance to explore those meanings for themselves. |
What's New on the Moon?
by Dr. Bevan M. French

In 1969 over a billion people witnessed the "impossible" coming true as the first men walked on the surface of the Moon. For the next three years, people of many nationalities watched as one of the great explorations of human history was displayed on their television screens.
Between 1969 and 1972, supported by thousands of scientists and engineers back on Earth, 12 astronauts explored the surface of the Moon. Protected against the airlessness and the killing heat of the lunar environment, they stayed on the Moon for days, and some of them travelled for miles across its surface in Lunar Rovers. They made scientific observations and set up instruments to probe the interior of the Moon. They collected hundreds of pounds of lunar rock and soil, thus beginning the first attempt to decipher the origin and geological history of another world from actual samples of its crust.
[Image from NASA Spacelink]

The initial excitement of new success and discovery has passed. The TV sets no longer show astronauts moving across the sunlit lunar landscape. But here on Earth, scientists are only now beginning to understand the immense treasure of new knowledge returned by the Apollo astronauts.
The Apollo Program has left us with a large and priceless legacy of lunar materials and data. We now have Moon rocks collected from eight different places on the Moon. The six Apollo landings returned a collection weighing 382 kilograms (843 pounds) and consisting of more than 2,000 separate samples. Two automated Soviet spacecraft named Luna-16 and Luna-20 returned small but important samples totalling about 130 grams (five ounces).
Instruments placed on the Moon by the Apollo astronauts as long ago as 1969 are still detecting moonquakes and meteorite impacts, measuring the Moon's motions, and recording the heat flowing out from inside the Moon. The Apollo Program also carried out a major effort of photographing and analyzing the surface of the Moon. Cameras on the Apollo spacecraft obtained so many accurate photographs that we now have better maps of parts of the Moon than we do for some areas on Earth. Special detectors near the cameras measured the weak X-rays and radioactivity given off by the lunar surface. From these measurements, we have been able to determine the chemical composition of about one-quarter of the Moon's surface, an area the size of the United States and Mexico combined. By comparing the flight data with analyses of returned Moon rocks, we can draw conclusions about the chemical composition and nature of the entire Moon.
Thus, in less than a decade, science and the Apollo Program have changed our Moon from an unknown and unreachable object into a familiar world.
What Has the Apollo Program Told Us About the Moon?

What have we gained from all this exploration? Before the landing of Apollo 11 on July 20, 1969, the nature and origin of the Moon were still mysteries. Now, as a result of the Apollo Program, we can answer questions that remained unsolved during centuries of speculation and scientific study:
(1) Is There Life On The Moon?

Despite careful searching, neither living organisms nor fossil life have been found in any lunar samples. The lunar rocks were so barren of life that the quarantine period for returned astronauts was dropped after the third Apollo landing.
(2) What Is The Moon Made Of?

The Moon is made of rocks. The Moon rocks are so much like Earth rocks in their appearance that we can use the same terms to describe both. The rocks are all IGNEOUS, which means that they formed by the cooling of molten lava. (No sedimentary rocks, like limestone or shale, which are deposited in water, have ever been found on the Moon.)
The dark regions (called "maria") that form the features of "The Man in the Moon" are low, level areas covered with layers of basalt lava, a rock similar to the lavas that erupt from terrestrial volcanoes in Hawaii, Iceland, and elsewhere. The light-colored parts of the Moon (called "highlands") are higher, more rugged regions that are older than the maria. These areas are made up of several different kinds of rocks that cooled slowly deep within the Moon. Again using terrestrial terms, we call these rocks gabbro, norite, and anorthosite.
Despite these similarities, Moon rocks are basically different from Earth rocks, and it is easy to tell them apart by analyzing their chemistry or by examining them under a microscope. The most obvious difference is that Moon rocks have no water at all, while almost all terrestrial rocks contain at least a percent or two of water. The Moon rocks are therefore very well-preserved, because they never were able to react with water to form clay minerals or rust. A 3 1/2-billion-year-old Moon rock looks fresher than water-bearing lava just erupted from a terrestrial volcano.
(3) What Is The Inside Of The Moon Like?

Sensitive instruments placed on the lunar surface by the Apollo astronauts are still recording the tiny vibrations caused by meteorite impacts on the surface of the Moon and by small moonquakes deep within it. These vibrations provide the data from which scientists determine what the inside of the Moon is like.
About 3,000 moonquakes are detected each year. All of them are very weak by terrestrial standards. The average moonquake releases about as much energy as a firecracker, and the whole Moon releases less than one-ten-billionth of the earthquake energy of the Earth. The moonquakes occur about 600 to 800 kilometers (370-500 miles) deep inside the Moon, much deeper than almost all the quakes on our own planet. Certain kinds of moonquakes occur at about the same time every month, suggesting that they are triggered by repeated tidal strains as the Moon moves in its orbit around the Earth.
A picture of the inside of the Moon has slowly been put together from the records of thousands of moonquakes, meteorite impacts, and the deliberate impacts of discarded Apollo rocket stages onto the Moon. The Moon is not uniform inside, but is divided into a series of layers just as the Earth is, although the layers of the Earth and Moon are different. The outermost part of the Moon is a crust about 60 kilometers (37 miles) thick, probably composed of calcium- and aluminum-rich rocks like those found in the highlands. Beneath the crust is a thick layer of denser rock (the mantle) which extends down to more than 800 kilometers (500 miles).
The deep interior of the Moon is still unknown. The Moon may contain a small iron core at its center, and there is some evidence that the Moon may be hot and even partly molten inside.
(4) What Is The Moon's Surface Like?

Long before the Apollo Program, scientists could see that the Moon's surface was complex. Earth-based telescopes could distinguish the level maria and the rugged highlands. We could recognize countless circular craters, rugged mountain ranges, and deep winding canyons or rilles.
Because of the Apollo explorations, we have now learned that all these lunar landscapes are covered by a layer of fine broken-up powder and rubble about 1 to 20 meters (3 to 60 feet) deep. This layer is usually called the "lunar soil," although it contains no water or organic material, and it is totally different from soils formed on Earth by the action of wind, water, and life.
The lunar soil is something entirely new to scientists, for it could only have been formed on the surface of an airless body like the Moon. The soil has been built up over billions of years by the continuous bombardment of the unprotected Moon by large and small meteorites, most of which would have burned up if they had entered the Earth's atmosphere.
These meteorites form craters when they hit the Moon. Tiny particles of cosmic dust produce microscopic craters perhaps 1/1000 of a millimeter (1/25,000 inch) across, while the rare impact of a large body may blast out a crater many kilometers, or miles, in diameter. Each of these impacts shatters the solid rock, scatters material around the crater, and stirs and mixes the soil. As a result, the lunar soil is a well-mixed sample of a large area of the Moon, and single samples of lunar soil have yielded rock fragments whose source was hundreds of kilometers from the collection site.
However, the lunar soil is more than ground-up and reworked lunar rock. It is the boundary layer between the Moon and outer space, and it absorbs the matter and energy that strike the Moon from the Sun and the rest of the universe. Tiny bits of cosmic dust and high-energy atomic particles that would be stopped high in the Earth's protective atmosphere rain continually onto the surface of the Moon.
(5) How Old Is The Moon?

Scientists now think that the solar system first came into being as a huge, whirling, disk-shaped cloud of gas and dust. Gradually the cloud collapsed inward. The central part became massive and hot, forming the Sun. Around the Sun, the dust formed small objects that rapidly collected together to form the large planets and satellites that we see today.
By carefully measuring the radioactive elements found in rocks, scientists can determine how old the rocks are. Measurements on meteorites indicate that the formation of the solar system occurred 4.6 billion years ago. There is chemical evidence in both lunar and terrestrial rocks that the Earth and Moon also formed at that time. However, the oldest known rocks on Earth are only 3.8 billion years old, and scientists think that the older rocks have been destroyed by the Earth's continuing volcanism, mountain-building, and erosion.
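The arithmetic behind such age measurements is straightforward exponential decay: a radioactive "parent" isotope turns into a "daughter" isotope at a known rate, so the measured daughter-to-parent ratio determines the elapsed time. Here is a minimal sketch in Python, under the simplifying assumption that the rock crystallized with none of the daughter isotope present and has neither gained nor lost atoms since (real analyses correct for both complications):

```python
import math

def radiometric_age_gyr(parent, daughter, half_life_gyr):
    """Age in billions of years from measured parent/daughter isotope amounts.

    Simplified model: assumes no daughter isotope at crystallization and a
    closed system ever since.
    """
    decay_constant = math.log(2) / half_life_gyr   # decays per billion years
    return math.log(1 + daughter / parent) / decay_constant

# Rubidium-87 decays to strontium-87 with a half-life of about 48.8 billion
# years. A daughter/parent ratio near 0.065 works out to roughly 4.4 billion
# years -- comparable to the oldest lunar samples.
print(radiometric_age_gyr(parent=1.0, daughter=0.065, half_life_gyr=48.8))
```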
(6) What Is The History Of The Moon?

The first few hundred million years of the Moon's lifetime were so violent that few traces of this time remain. Almost immediately after the Moon formed, its outer part was completely melted to a depth of several hundred kilometers. While this molten layer gradually cooled and solidified into different kinds of rocks, the Moon was bombarded by huge asteroids and smaller bodies. Some of these asteroids were the size of small states, like Rhode Island or Delaware, and their collisions with the Moon created huge basins hundreds of kilometers across.
The catastrophic bombardment died away about 4 billion years ago, leaving the lunar highlands covered with huge overlapping craters and a deep layer of shattered and broken rock. As the bombardment subsided, heat produced by the decay of radioactive elements began to melt the inside of the Moon at depths of about 200 kilometers (125 miles) below its surface. Then, for the next half billion years, from about 3.8 to 3.1 billion years ago, great floods of lava rose from inside the Moon and poured out over its surface, filling in the large impact basins to form the dark parts of the Moon that we see today.
As far as we know, the Moon has been quiet since the last lavas erupted more than 3 billion years ago. Since then, the Moon's surface has been altered only by rare large meteorite impacts and by atomic particles from the Sun and the stars. The Moon has preserved features formed almost 4 billion years ago, and if men had landed on the Moon a billion years ago, it would have looked very much as it does now. The surface of the Moon now changes so slowly that the footprints left by the Apollo astronauts will remain clear and sharp for millions of years.
This preserved ancient history of the Moon is in sharp contrast to the changing Earth. The Earth still behaves like a young planet. Its internal heat is active, and volcanic eruptions and mountain-building have gone on continuously as far back as we can decipher the rocks. According to new geological theories, even the present ocean basins are less than about 200 million years old, having formed by the slow separation of huge moving plates that make up the Earth's crust.
(7) Where Did The Moon Come From?

Before we explored the Moon, there were three main suggestions to explain its existence: that it had formed near the Earth as a separate body; that it had separated from the Earth; and that it had formed somewhere else and been captured by the Earth.
Scientists still cannot decide among these three theories.
What Has the Moon Told Us About the Earth?

It might seem that the active, inhabited Earth has nothing in common with the quiet, lifeless Moon. Nevertheless, the scientific discoveries of the Apollo Program have provided a new and unexpected look into the early history of our own planet. Scientists think that all the planets formed in the same way, by the rapid accumulation of small bodies into large ones about 4.6 billion years ago. The Moon's rocks contain the traces of this process of planetary creation. The same catastrophic impacts and widespread melting that we recognize on the Moon must also have dominated the Earth during its early years, and about 4 billion years ago the Earth may have looked much the same as the Moon does now.
The two worlds then took different paths. The Moon became quiet while the Earth continued to generate mountains, volcanoes, oceans, an atmosphere, and life. The Moon preserved its ancient rocks, while the Earth's older rocks were continually destroyed and recreated as younger ones.
The Earth's oldest preserved rocks, 3.3 to 3.8 billion years old, occur as small remnants in Greenland, Minnesota, and Africa. These rocks are not like the lunar lava flows of the same age. The Earth's most ancient rocks are granites and sediments, and they tell us that the Earth already had mountain-building, running water, oceans, and life at a time when the last lava flows were pouring out across the Moon.
In the same way, all traces of any intense early bombardment of the Earth have been destroyed. The record of later impacts remains, however, in nearly 100 ancient impact structures that have been recognized on the Earth in recent years. Some of these structures are the deeply eroded remnants of craters as large as those of the Moon, and they give us a way to study on Earth the process that once dominated both the Earth and the Moon.
Lunar science is also making other contributions to the study of the Earth. The new techniques developed to analyze lunar samples are now being applied to terrestrial rocks. Chemical analyses can now be made on samples weighing only 0.001 gram (3/100,000 ounce) and the ages of terrestrial rocks can now be measured far more accurately than before Apollo. These new techniques are already helping us to better understand the origin of terrestrial volcanic rocks, to identify new occurrences of the Earth's oldest rocks, and to probe further into the origin of terrestrial life more than 3 billion years ago.
What Else Can the Moon Tell Us?

Although the Apollo Program officially ended in 1972, the active study of the Moon goes on. More than 125 teams of scientists are studying the returned lunar samples and analyzing the information that continues to come from the instruments on the Moon. Less than 10 percent of the lunar sample material has yet been studied in detail, and more results will emerge as new rocks and soil samples are examined.
The scientific results of the Apollo Program have spread far beyond the Moon itself. By studying the Moon, we have learned how to go about the business of exploring other planets. The Apollo Program proved that we could apply to another world the methods that we have used to learn about the Earth. Now the knowledge gained from the Moon is being used with the photographs returned by Mariner 9 and 10 to understand the histories of Mercury and Mars and to interpret the data returned by the Viking mission to Mars.
Note for Scientists and Educators
LUNAR SCIENCE INSTITUTE
Data Center, Code L
3303 NASA Road #1
Houston, TX 77058
Phone (713) 488-5200
taken from WHAT'S NEW ON THE MOON?, B.M. French, NASA 1.19:131, c1978 |
Learning Objectives: Students will demonstrate knowledge of the criteria used to evaluate historical accounts. Students will demonstrate knowledge of structuring and supporting an effective argument.
Materials/Resources: "Circumstances Surrounding Raoul Wallenberg's Assignment in Budapest," "The Soviet Arrest on Raoul Wallenberg," and "Possible Reasons for the Arrest," by Sven Grundberg, available from www.raoulwallenberg.net (note: students should NOT read Grundberg's "Conclusion" before writing their essays); articles from the National Enquirer or another tabloid that publishes stories of questionable veracity; articles from a more reputable newspaper; a copy of the "Common Sense" pamphlet by Thomas Paine, available at www.ushistory.org/paine/commonsense/singlehtml.htm; and an account of the Revolutionary War from the British perspective, available at http://www.bbc.co.uk/history/british/empire_seapower/rebels_redcoats_01.shtml
Prerequisites: Students will have read all reading materials listed in the resources section above for homework.
Teaching Procedure: Teacher will ask students what questions arose as they were reading the tabloid article.
Teacher will ask students to contrast the tabloid article with the article from the reputable newspaper.
Teacher will ask students to generate a list of qualities that make an article believable based on their comparison of the two articles.
Teacher will record the list on the board.
Teacher will ask students to contrast the purpose and perspective of the authors of ”Common Sense” and the BBC account of the Revolutionary War and to discuss how readers know the purpose and perspective of the authors.
Teacher will note that to evaluate the strength of historical accounts, readers should consider the credibility of the author, the plausibility of the account in light of background knowledge, the author’s purpose in writing and the author’s perspective.
Teacher will ask students to discuss the strengths and weaknesses of each of Grundberg's hypotheses accounting for Wallenberg's arrest and to describe his purpose and perspective.
Teacher will ask each student to write a persuasive essay supporting one of Grundberg’s hypotheses accounting for Wallenberg’s arrest.
Assessment: Successful essays will include well-reasoned arguments in favor of the chosen hypotheses and will cite relevant evidence. Successful essays will also clearly explain why the student chose the particular hypothesis and will briefly address the strengths and weaknesses of the other hypotheses.
Lesson plan created by Sharlee DiMenichi, an award-winning freelance journalist whose work has appeared in national and regional newspapers and magazines. She is currently working on a book of curricular materials on Holocaust rescuers. DiMenichi holds an M.S. in journalism from Columbia University, a B.A. in English and a B.A. in Education from Juniata College.
a. Nonwood alternatives
b. Paper conservation at home, school and during the holidays
c. Woodwise mail, books and magazines, and clothing
d. Woodwise homes
Individual Reading and Preparation:
*You are going to present an argument in a small group. You will argue that the other students take certain actions as consumers to promote forest conservation. Consumer actions can include what you buy, how you use it and what you do with it when you are done using it. This will be based on your selected article. To prepare:
1. Read selected article
2. Use one piece of notebook paper to create a visual aid for your presentation:
Include the following:
a. Seven to ten consumer actions (buying, using and/or disposing of products)
b. Use your own words based on tips in the reading; organize actions into different sections (e.g., in reading B: home, school and holidays)
c. Two to three diagrams, pictures or graphics to illustrate key points
d. One to three statistics on the state of our forests or wood consumption to support your argument to change consumer actions.
**Be PREPARED to explain not just what the actions are, but also WHY you recommend them.
Jigsaw Groups will listen to one another and carefully consider all the consumer options.
As presenter, you will:
a. introduce your areas/approach to wood/forest conservation
b. justify why these actions are important for forest conservation (try to show why these actions may be more important than other actions)
c. carefully explain recommended consumer actions referring to all parts of visual aid
As listener, you will:
a. ask questions to better understand presenters' ideas
b. after each presentation, write down 2 consumer actions you feel would be effective to conserve forests and would consider doing
c. after all presentations, choose 3 actions you feel would be most effective and write them down with an explanation as to why you feel these are the best actions to pursue as a consumer to promote forest conservation (1/2 page)
Can you think of other approaches consumers can take to promote forest conservation?
What are the obstacles to changing consumer habits? using less wood? |