Laura was confused. She came to the United States a year ago. She didn't understand the difference between Memorial Day and Veterans Day. She knew that both holidays had to do with remembering and honoring people who had fought in wars for the United States. She also knew that both days were national holidays. On both days banks, schools, and post offices were closed; on both holidays, stores offered special sales; on both holidays, people hung the American flag, and sometimes there were even fireworks. Laura had seen on television that on both days the President of the United States made a speech and visited a cemetery, laying a wreath on the grave of a soldier. Laura decided to ask the mailman, Brian, what the difference was. She figured he would know, since he had both days off from work.
The next day, when Brian was dropping off Laura's mail, she asked him. Brian explained that Memorial Day is usually observed on the last Monday of May. This means that it can fall on a different date every year. Memorial Day is a day for remembering and honoring those in the military who died, especially those who died in battle. Brian also told Laura that Memorial Day is considered the unofficial start of summer; many public beaches open on Memorial Day. Veterans Day is a day set aside to thank and honor all people who served in the United States military. Brian explained that Veterans Day also honors those who are still alive. Brian told Laura that the date for Veterans Day never changes. It is always held on November 11, the date when World War I ended in 1918.
Australians may have had a far more catastrophic impact on their landscape than previously suspected, according to fresh scientific evidence now coming to light.
A team from CSIRO Land & Water and the Cooperative Research Centre for Catchment Hydrology has found signs that European settlement unleashed an episode of erosion, sediment deposition and change in river systems orders of magnitude greater than we have assumed to date.
New ways to identify and date flood deposits in river catchments in Eastern Australia are building a picture of a landscape in dramatic transition over years or decades, rather than centuries, say Dr Jon Olley and Dr Peter Wallbrink.
Metres of mud and sand deposited on river floodplains, which the scientists at first guessed to be the result of hundreds or even thousands of years of erosion, are proving to have happened in as few as 30 or 40 years.
“There’s little doubt modern Australians have underestimated the extent of change we have inflicted on our landscape,” says Dr Wallbrink. “In some cases the rates are staggering.”
His research in the catchment of the Murrah river in southern NSW, dominated by dairying and forestry, is throwing the issue under the spotlight.
“Deposits of silt and sediment on the lower floodplain of the Murrah appeared to us to be at least a couple of hundred years old - until we began to test their composition and age.”
It was the atomic bomb that did the trick. Regular atmospheric testing of nuclear weapons, which began in the late 1950s, spread a telltale layer of radioactive Caesium 137 across the globe. That layer now provides a reliable benchmark for soil scientists wanting to date recent layers of sediment.
What looked like the accumulation of centuries in the Murrah floodplain turns out to have taken place since about 1960, Dr Wallbrink says. More dramatic still, nearly a third of the deposit appears to have been dumped by a single massive flood event, back in 1971.
Subsequent tests will reveal whether it was clearing for agriculture at the top of the catchment or forestry operations in the lower catchment which was mainly responsible for the sediment - and the relative contribution of the two.
This understanding will be vital in devising the best strategies for farmers, foresters and land managers to combat future large scale erosion and deposition events and improve water quality and sustainability, says Dr Wallbrink.
“We’re talking about changing the very face of Australia in comparatively few years, so dramatic is the scale of these events,” he says. “The evidence is building that our landscape underwent radical change.”
Dr Jon Olley is pioneering a technique called optically stimulated luminescence to date single grains of quartz sand in a sediment deposit. This technique is unfolding a new chapter in understanding of how we have reshaped the continent.
“Before European settlement, the picture is of a relatively stable landscape, well-vegetated, with lots of swampy meadows in the low lying areas to trap the sediment and nutrients and filter the waters slowly,” he explains.
“The river systems at that time would have been largely clear-flowing, generally slow and dominated by organic material.”
Enter European settlers and the landscape chemistry changes violently. Overclearing and heavy grazing combined with Australia’s regular cycle of drought and flood to unleash a new pattern in the rivers: spates of silt sandblasting the system caused profound changes in the rivers themselves and the life they supported.
“We went, in effect, from slow rivers dominated by organic material to rivers dominated by rushes of abrasive inorganic sediment. This had huge consequences for native fish, animals, water plants and insects.
“Regrettably,” says Dr Olley, “I don’t think the original system is restorable. We can’t put back the clock and have it the way it once was.”
However both scientists consider it likely that a new landscape balance has formed, and that the rate of change is no longer as acute as it was shortly after clearing.
Nevertheless the combination of a cleared landscape with periodic episodes of natural droughts and floods has created a river regime that is now far more energetic and prone to violent flooding than previously existed.
“It’s all about energy,” says Dr Wallbrink. “In the original rivers the rainfall was held back by vegetation and swampy areas. Today it rushes downstream in defined channels far more quickly and in larger volumes.
“It is this new energy which underlies the dramatic rates of change we are starting to see and understand for the first time.”
The above post is reprinted from materials provided by CSIRO Australia. Note: Materials may be edited for content and length.
A firm utilizes two inputs, unskilled labor (L) and capital (K), to produce its product. The wage rate is $5 per unit of labor, while capital costs $20 per unit. The firm's daily production function is Q(L, K) = 4LK, with MPL = 4K and MPK = 4L. The firm wants to keep production constant at Q0 = 400 units of output every day.
Q1. Assume that the federal government institutes a minimum wage for unskilled labor of $10 per unit. In the short run, with capital fixed at the level K*, how much would it cost the firm to hold output constant at Q0? Draw the new isocost line corresponding to this cost in the same graph as in part (b) and clearly label the intercepts.
Q2. Determine analytically the optimal levels of inputs L** and K** that the firm will employ in the long run to produce Q0, given the minimum wage. What is the cost associated with this choice? Show the new isocost line and L** and K** in the same graph as above.
Q3. Compare the original cost in (a) with the costs in (c) and (d). Give the economic intuition behind these outcomes.
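A quick numerical sketch of the cost comparison (parts (a) and (b) aren't shown here, so this assumes K* is the firm's original long-run optimal capital stock):

```python
from math import sqrt

# Q = 4*L*K with MPL = 4K and MPK = 4L, so cost minimization
# (MPL/MPK = w/r) requires K/L = w/r.

def long_run(w, r, q0):
    L = sqrt(q0 * r / (4 * w))   # from q0 = 4*L*K with K = (w/r)*L
    K = (w / r) * L
    return L, K, w * L + r * K

L0, K0, C0 = long_run(5, 20, 400)      # original optimum: L = 20, K = 5, C = $200

w_min, r = 10, 20
L_sr = 400 / (4 * K0)                  # short run: K fixed at K* = 5, so L = 20
C_sr = w_min * L_sr + r * K0           # $300

L2, K2, C2 = long_run(w_min, r, 400)   # long run: L** ≈ 14.1, K** ≈ 7.1
print(C0, C_sr, round(C2, 1))          # 200, 300, 282.8
```

The resulting ranking (200 < 282.8 < 300) captures the intuition: the minimum wage raises costs, but in the long run the firm substitutes capital for the now more expensive labor, so the long-run cost increase is smaller than the short-run one.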
Parabolic trough power plants are the only technology for utilizing solar energy in large power plants that has been commercially proved over a number of years. Parabolic trough power plants have been in successful commercial operation in California since 1985. They have already generated over twelve billion kilowatt hours of solar electricity, which equates to providing some 12 million people with electricity for one year. As with conventionally fuelled power plants, including nuclear power plants, the electricity in parabolic trough power plants is generated using a steam turbine connected to a generator. However, the steam required is not produced by burning fossil fuels but through the use of solar energy. The solar irradiation is captured and concentrated by long rows of parabolic mirrors. The heat generated in this way is enough to produce the steam required.
Solar Millennium has developed Europe's first parabolic trough power plants. Solar Millennium's pioneer project Andasol 1 has been connected to the grid since December 2008. Andasol 2 was completed in early summer 2009 and has since been connected to the grid. Andasol 3 was inaugurated in autumn 2011 and is also generating climate-friendly solar power.
Rutherford's assistants did all the work. Rutherford's idea was, "well let's see if these guys (Geiger, Marsden etc.) are good at detecting alpha particles scattered from gold foil. Make them look for them at large angles. That sounds like a really difficult task." But then, Geiger and Marsden actually found particles scattered at extreme angles.
Experiment — alpha particles bombarding gold foil (polonium as the α source)
A small fraction of the α-particles falling upon a metal plate have their directions changed to such an extent that they emerge again at the side of incidence.
Compared, however, with the thickness of gold which an α-particle can penetrate, the effect is confined to a relatively thin layer. In our experiment, about half of the reflected particles were reflected from a layer equivalent to about 2 mm of air. If the high velocity and mass of the α-particle be taken into account, it seems surprising that some of the α-particles, as the experiment shows, can be turned within a layer of 6 × 10⁻⁵ cm. of gold through an angle of 90°, and even more. To produce a similar effect by a magnetic field, the enormous field of 10⁹ absolute units would be required.
Three different determinations showed that of the incident α-particles about 1 in 8000 was reflected, under the described conditions.
Geiger and Marsden, 1909
It was left to Rutherford to make conclusions from their observations
| observation | conclusion |
|---|---|
| most α-particles did not deviate | atoms are mostly empty space |
| a tiny fraction were turned through large angles | there's something positive inside the atom: a small positively charged region (nucleus) that contains most of the atom's mass |
| (the resulting model) | electrons orbit the nucleus like a planet orbiting the sun |
Rutherford's own words.
§1 The observations, however, of Geiger and Marsden on the scattering of α rays indicate that some of the α particles, about 1 in 20,000, were turned through an average angle of 90 degrees in passing through a layer of gold-foil about 0.00004 cm. thick, which was equivalent in stopping-power of the α particle to 1.6 millimetres of air…. It seems reasonable to suppose that the deflexion through a large angle is due to a single atomic encounter, for the chance of a second encounter of a kind to produce a large deflexion must in most cases be exceedingly small. A simple calculation shows that the atom must be a seat of an intense electric field in order to produce such a large deflexion at a single encounter….
§2 Consider an atom which contains a charge ±Ne at its centre surrounded by a sphere of electrification containing a charge ∓Ne supposed uniformly distributed throughout a sphere of radius R. e is the fundamental unit of charge, which in this paper is taken as 4.65 × 10⁻¹⁰ E.S. unit. We shall suppose that for distances less than 10⁻¹² cm. the central charge and also the charge on the alpha particle may be supposed to be concentrated at a point. It will be shown that the main deductions from the theory are independent of whether the central charge is supposed to be positive or negative. For convenience, the sign will be assumed to be positive. The question of the stability of the atom proposed need not be considered at this stage, for this will obviously depend upon the minute structure of the atom, and on the motion of the constituent charged parts….
§7 In comparing the theory outlined in this paper with the experimental results, it has been supposed that the atom consists of a central charge supposed concentrated at a point, and that the large single deflexions of the α and β particles are mainly due to their passage through the strong central field.
Ernest Rutherford, 1911
- electrons in orbit experience centripetal acceleration
- accelerating charge produces electromagnetic waves, electromagnetic waves transfer energy
- loss of energy would make atoms unstable (electron should spiral into nucleus)
- discrete spectra from energetic electrons
- bohr's major new idea
- quantization of angular momentum in terms of ℏ = h/2π
- electrons occupy stationary states around the nucleus
- restricted momentums lead to restricted radii and energy levels
- transition between energy levels accompanied by emission, absorption of photon. The atom only gains or loses energy when its electrons are transferred from one stationary state to another.
- sommerfeld's major new idea
- electrons form standing waves around the nucleus
- discrete nature of harmonics leads to quantization of angular momentum
- new dilemma
- one dimensional
- does not work for other elements — anything with more than one electron
- there are additional spectroscopic phenomena where discrete lines will split that it cannot explain (to be discussed elsewhere)
- zeeman effect
- hyperfine splitting
- discrete emission/absorption lines correspond to allowed energy level transitions
- kirchhoff rules (kirchhoff-bunsen rules?)
- hot objects produce light with a continuous spectrum
- an energized gas produces light with a discrete spectrum
- a hot object behind a cooler gas produces light with a nearly continuous spectrum with gaps at discrete wavelengths
- atomic spectra are atomic fingerprints or barcodes or another identification analog
- quantum optical effects
- dissociation (ozone, for example)
- photochromism (orthonitrotoluenes, for example)
- cis-trans isomerism (visual purple, for example; thioindigos, anils, and azo compounds)
- shifting of position of double bonds (ergosterol converts to vitamin D, for example)
- Light Amplification through the Stimulated Emission of Radiation
- Looking At Source Erases Retina
Niels Bohr (1885–1962) Denmark
Let us at first assume that there is no energy radiation. In this case the electron will describe stationary elliptical orbits….
The circumstance that the frequency can be written as a difference between two functions of entire numbers [whole numbers] suggests an origin of the lines in the spectra in question similar to the one we have assumed for hydrogen; i.e. that the lines correspond to a radiation emitted during the passing of the system between two different stationary states.
Niels Bohr, 1913
For this it will be necessary to assume that the orbit of the electron can not take on all values, and in any event, the line spectrum clearly indicates that the oscillations of the electron cannot vary continuously between wide limits….
Let us now try to overcome these difficulties by applying Planck's theory to the problem….
The subject of direct observation is the distribution of radiant energy over oscillations of the various wave lengths. Even though we may assume that this energy comes from systems of oscillating particles, we know little or nothing about these systems. No one has ever seen a Planck's resonator, nor indeed even measured its frequency of oscillation; we can observe only the period of oscillation of the radiation which is emitted. It is therefore very convenient that it is possible to show that to obtain the laws of temperature radiation it is not necessary to make any assumptions about the systems which emit the radiation except that the amount of energy emitted each time shall be equal to hν, where h is Planck's constant and ν is the frequency of the radiation….
During the emission of the radiation the system may be regarded as passing from one state to another; in order to introduce a name for these states we shall call them "stationary" states, simply indicating thereby that they form some kind of waiting places between which occurs the emission of the energy corresponding to the various spectral lines….
Under ordinary circumstances a hydrogen atom will probably exist only in the state corresponding to n = 1. For this state W will have its greatest value and, consequently, the atom will have emitted the largest amount of energy possible; this will therefore represent the most stable state of the atom from which the system cannot be transferred except by adding energy to it from without.
Niels Bohr, 1913
In a letter to…
There appears to me one grave difficulty in your hypothesis, which I have no doubt you fully realize, namely, how does an electron decide what frequency it is going to vibrate at when it passes from one stationary state to the other? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.
Ernest Rutherford, 1913
Only an integral number of wavelengths fit in an allowed electron orbit.
Quantization of angular momentum (Bohr):

L = mvr = nh/2π

L² = m²v²r² = n²h²/4π²

Combined with the classical circular orbit condition, this gives the allowed radii:

r = n²(ε₀h²/πmₑe²) = n²a₀

Equivalently, from the de Broglie standing wave condition, the circumference holds a whole number of wavelengths:

C = 2πr = nλ = nh/mv

which leads to the same allowed radii, r = n²a₀.

Bohr radius, a₀…

a₀ = ε₀h²/πmₑe²

a₀ = (8.854 × 10⁻¹² C²/N·m²)(6.626 × 10⁻³⁴ J·s)² / π(9.109 × 10⁻³¹ kg)(1.602 × 10⁻¹⁹ C)²

a₀ = 5.293 × 10⁻¹¹ m
Thus the diameter of a hydrogen atom in its ground state is approximately 10−10 m, a unit also known as an ångstrom and represented with the symbol Å.
energy levels of hydrogen: total energy is the sum of the kinetic and electric potential energy of the electron
E = K + U = ½mₑv² − (1/4πε₀)(e²/r)
Replace speed with the equation derived earlier for the speed of an electron in a classical circular orbit. Then simplify.
Replace radius with the equation derived earlier for the radius of an electron in an allowed orbit. Then simplify.
Eₙ = −(1/8πε₀)(e²/rₙ)

Eₙ = −(e⁴mₑ/8ε₀²h²)(1/n²)

Eₙ = E₁/n²
ground state energy, ionization energy of hydrogen
or in electron volts
E₁ = −2.179 × 10⁻¹⁸ J = −13.6 eV (dividing by 1.602 × 10⁻¹⁹ J/eV)
energy level changes are followed by the emission of a photon
ΔE = hf
Spectroscopists like wavelengths, which leads to the following funky formula — the Rydberg equation for hydrogen.
It can be derived from the Bohr model.

1/λ = R∞(1/nf² − 1/ni²)
Rydberg constant, R∞
R∞ = e⁴mₑ/8ε₀²h³c

R∞ = (1.602 × 10⁻¹⁹ C)⁴(9.109 × 10⁻³¹ kg) / 8(8.854 × 10⁻¹² C²/N·m²)²(6.626 × 10⁻³⁴ J·s)³(2.998 × 10⁸ m/s)

R∞ = 1.097 × 10⁷ m⁻¹
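A quick numerical check of these results (a minimal sketch using rounded SI constants):

```python
from math import pi

e   = 1.602e-19   # elementary charge (C)
me  = 9.109e-31   # electron mass (kg)
h   = 6.626e-34   # Planck constant (J s)
eps = 8.854e-12   # vacuum permittivity (C^2 / N m^2)
c   = 2.998e8     # speed of light (m/s)

a0 = eps * h**2 / (pi * me * e**2)        # Bohr radius
E1 = -me * e**4 / (8 * eps**2 * h**2)     # ground state energy (J)
R  = me * e**4 / (8 * eps**2 * h**3 * c)  # Rydberg constant (1/m)

print(a0)                 # ~5.293e-11 m
print(E1 / 1.602e-19)     # ~-13.6 eV
print(R)                  # ~1.097e7 1/m

# wavelength of the n = 3 -> 2 transition (H-alpha line of the Balmer series)
print(1 / (R * (1/2**2 - 1/3**2)) * 1e9)  # ~656 nm
```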
spectral lines are classified according to the energy level the electron lands on
- Grotthuss-Draper law: Light must be absorbed by a chemical substance in order for a photochemical reaction to take place. Molecules that do not absorb light of a particular frequency will not undergo a photochemical reaction when irradiated at that frequency
- Stark-Einstein law (photoequivalence law): Each photon of light can cause a photochemical reaction of only one light absorbing molecule.
- The amount of photoreaction that takes place is directly proportional to the product of the light intensity and the time of illumination. In other words, more light produces more photoproduct.
Electrons have three-dimensional extent, but the Bohr model assumes the electron to be a one-dimensional standing wave wrapped around the nucleus.
major new idea
electron forms a three-dimensional standing wave around the nucleus, electron clouds, restricted wavelengths (spherical harmonics)
Erwin Schrödinger (1887–1961) Austria, Abhandlungen zur Wellenmechanik. Wave equation for matter reminiscent of Maxwell's equations for electromagnetic waves. The story I heard is that Schrödinger went to Switzerland with two goals: to keep his mistress happy and to derive a wave equation for matter. How successful he was with the former is open to speculation.
full, time-dependent form
iℏ ∂Ψ(r,t)/∂t = −(ℏ²/2m)∇²Ψ(r,t) + V(r)Ψ(r,t)
can be separated into two halves
Ψ(r, t) = ψ(r)φ(t)
spatial, time-independent half
Eψ(r) = −(ℏ²/2m)∇²ψ(r) + V(r)ψ(r)
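The temporal half is the standard oscillating phase factor, which completes the separation:

φ(t) = e^(−iEt/ℏ), so that Ψ(r, t) = ψ(r)e^(−iEt/ℏ)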
The electrons around an atom are standing probability waves. It's their interference, when atoms bond and form molecules, that determines molecular structures.
- n — energy quantization, linear momentum quantization? (K, L, M, N, O,… originally A, B,… )
principal quantum number
- ℓ — angular momentum quantization (s, p, d, f, g,… sharp, principal, diffuse, fine/fundamental/faint,… )
(reduced? orbital?) azimuthal quantum number
- m (mℓ) — space quantization
magnetic quantum number
- s (ms) — spin quantization (up, down)
spin quantum number
Wolfgang Pauli (1900–1958) Austria — exclusion principle
The ground states of all elements follow the pattern of the excited states in the hydrogen atom. The structure of the periodic table, which was determined empirically, can be derived theoretically from first principles. Chemistry is the bastard child of physics.
| n | ℓ | mℓ | letter | shape |
|---|---|---|---|---|
| 1, 2, 3, 4, 5, 6, 7, 8… | 0 | 0 | s | spherical |
| 2, 3, 4, 5, 6, 7… | 1 | −1 to +1 | p | dumbbells (x, y, z) |
| 3, 4, 5, 6… | 2 | −2 to +2 | d | double dumbbells (xy, x² − y², …) |
| 4, 5… | 3 | −3 to +3 | f | triple and quadruple dumbbells (y(3x² − y²), z(x² − y²), x(3y² − x²), …) |
| 5… | 4 | −4 to +4 | g | many-lobed |
1928: Paul Dirac states his relativistic electron quantum wave equation, combining quantum mechanics and special relativity to describe the electron. Charles G. Darwin and Walter Gordon solve the Dirac equation for a Coulomb potential.
Is this right?
(iℏγν∂ν − mc)Ψ(xν) = 0
- agrees with special relativity (is Lorentz invariant)
- has four solutions
- contains within it the notion of spin up and spin down (the first two solutions)
- predicts the existence of antimatter (the second two solutions)
Dirac showed that there are no stable electron orbits for more than 137 electrons; therefore the last chemical element on the periodic table would be untriseptium (137Uts), also known informally as feynmanium (137Fy). Its full electron configuration would be something like…

or is it…
While many, many people use the Internet daily, for work, leisure, and communication, very few of them actually know how their devices are able to access the Internet in the first place. How does information from miles away make its way to your device? The answer is something called a web server. Let’s take a moment and examine how they do it.
Using the Internet seems simple enough: after typing in the URL (or uniform resource locator), your web browser displays the associated page. But how does this really work?
Well, let’s examine how you got to this article. Let’s assume, for a moment, that you are doing so on a workstation. When you saw the link to this blog and clicked it, your browser took the associated URL and analyzed its three parts.
The HyperText Transfer Protocol, or the part of the URL that says “http,” is how your machine reaches out to the web server that holds this website’s data. The middle part of the address, starting with “www” and ending with “.com,” is the server name that represents that particular IP (Internet Protocol) address. The rest of the URL is made up of a particular page’s filename, to inform the website what content needs to be viewed specifically.
Once this website's host server received the HTTP request, it returned the HTML text for the requested page. Your browser then took that HTML text and converted it back into a viewable webpage, allowing you to read and understand these words.
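Here is a minimal sketch of that request/response cycle using Python's standard library (the host and path are placeholders, not this blog's actual address):

```python
import http.client

# Connect to the web server behind the server name (resolved to an IP address)
conn = http.client.HTTPConnection("example.com")

# Ask for a particular page's filename, just as the last part of the URL does
conn.request("GET", "/index.html")

# The server answers with a status code and the page's HTML text
response = conn.getresponse()
print(response.status, response.reason)   # e.g. "200 OK"

html = response.read().decode()           # the HTML your browser would render
print(html[:80])
conn.close()
```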
Web servers are also responsible for managing the stored credentials that are allowed to access password-protected pages. Any time you’ve had to log into a website, you’ve essentially had to prove yourself to the web server before you were allowed access.
Of course, this is all assuming that the website is static, which is a technical way of saying that the site is only changed if the creator goes in and manually changes it. Dynamic pages, or ones that change based on input (for an example, think about Google’s results pages) operate on a different level, usually using things like CGI scripts… but that’s for another time.
If you have any other questions about the technology behind your business, or perhaps need some help with your solutions, reach out to us at Kite Technology Group.
If we ever do find extra-terrestrial life in the solar system, it's probably much more likely to look like a few cells than a walking-and-talking green man. Nonetheless, finding any kind of life beyond Earth would be extraordinary. Here are our best hopes:
The sixth-largest moon of Saturn has been called the most promising bet for life thanks to its welcoming temperature and the likely presence of water and simple organic molecules. The surface of the icy moon is thought to be about 99 per cent water ice, with a good chance of liquid water beneath. Observations from the Cassini probe's 2005 flyby of Enceladus suggest the presence of carbon, hydrogen, nitrogen and oxygen - organic molecules thought to be necessary to develop life. And the moon seems to have a boiling core of molten rock that could heat the world to the toasty temperatures needed to give rise to life.
Jupiter's moon Europa also seems a possible stomping ground for ET due to its potential water and volcanic activity. Although the surface seems to be frozen, many suspect that buried underneath is an ocean of liquid water. Volcanic activity on the moon could provide life-supporting heat, as well as important chemicals needed by living organisms. Microbial life could potentially survive near hydrothermal vents on Europa, as it does on Earth.
As far as planets go, by far the front-runner for life is our neighbour, Mars. The red planet is the most Earth-like of the solar system's planets, with a size and temperature range comparatively similar to our own planet's. Large bodies of water ice lie at Mars' poles, and there's a reasonable chance of liquid water beneath the surface. The planet's puny atmosphere is too thin to shield it against lethal solar radiation, although microbes could potentially exist beneath the surface. Evidence also suggests that Mars may have been even more habitable in the past. Geological features imply that liquid water once flowed across the surface, and volcanic activity, now dead, once flourished, recycling chemicals and minerals between the surface and the interior.
Saturn's largest moon looks suspiciously like it might have hosted life, because its thick atmosphere is rich in compounds that often mark the presence of living organisms. For instance, Titan's air is filled with methane, which is usually destroyed by sunlight. On Earth, life constantly replenishes methane, so it might similarly be responsible for the methane on Titan. Titan is rather cold, however, and if liquid water exists, it must be deep beneath the frozen surface.
Jupiter's moon Io is one of the few solar system moons to support an atmosphere, and it contains complex chemicals promising for life. Volcanism on the moon also makes it warmer than many others - another good sign. Io is still a long shot, though, because its location inside Jupiter's magnetic field means it is constantly being pelted with lethal radiation. Its violent surface also seems inhospitable, with temperatures often too cold to support life, as well as molten hot spots that are equally deadly.
Using Virtual Reality in Lessons
by Susan Gaer
Whether you teach English language learners or adult secondary education learners, you need to think about ways to integrate technology into the lesson. There are many technology integration matrices, such as SAMR, TPACK, and The Technology Integration Matrix from the Florida Center for Instructional Technology. However, all of them have in common the fact that integration is more than substitution of one technology for another. I have developed a lesson as a model of this type of integration. It creates an online model using Google Docs, Google Forms, and Thinglink along with audio and video. I hope that this model is something that other teachers can build on to make their own highly integrated lessons.
This lesson uses the concept of Google Hyperdocs. Hyperdocs are worksheets created in Google Docs, with 21st Century improvements. I have not included standards or rubrics because I think that this lesson is applicable for a variety of levels which would use different rubrics and standards.
I tried this activity in a high beginning ESL conversation class and it was very successful. I modeled it for students, then had them do it in pairs. Students were very excited to be able to take the task sheet home to practice it on their own outside the class.
Materials needed for this lesson
- A Google account
- Google Cardboard (not required, but it really makes the lesson much more amazing)
- To monitor student performance, duplicate all the documents from the
- You can use my Padlet account, which is linked to the assignment sheet, or make your own.
- Students should have a Quizlet account to do the bonus assignment.
Get the Task Sheet on Google Docs
When you are ready to use this lesson you will want to make a
Explore Interactive Lesson Preview
Editor’s note: We took a moment to create a quick ThingLink to provide
Since this is probably the first time students have used virtual reality, you will need to do extensive modeling and scaffolding. Project the task sheet so that you can show the students how to move around.
- Show the students how to click around by taking them through each task and showing them how to use it. Note: The 360 photo is best viewed on a Google Cardboard. These can be purchased for $10.00 each. I have five for a class with five groups.
- Divide the students into groups. Each group should have one smartphone to view the VR section. All other sections can be done on a phone, tablet, or computer.
- Have them find the task sheet either on their phone, computer or tablet.
- Once they get to the 360 section, it is possible to navigate it on a flat computer screen; however, Google Cardboard takes it into a whole new dimension. Viewing with the Cardboard puts the students into the image. As they move around holding the Cardboard to their eyes, the image moves with them. As they move to the hotspots, it automatically opens a video or audio clip. Make sure to have them note that after they finish the 360 image they need to remove the phone from the Cardboard and continue with the task sheet. This will teach students basic navigation.
Although this activity might seem daunting at first view, it’s worth trying as you and your students will both enjoy it!
About the Author
“Getting certified as a Thinglink instructor changed my life. My ESL students love visiting places and creating 360 Photos. Image tagging is the next step for my students” —Susan Gaer
Preschool Lesson Plans
If you are trying to teach your preschool child how to grow educationally and emotionally, you need to pay attention to solid preschool lesson plans. A preschool lesson plan allows you to ensure that the subject you want them to learn is actually being taught. Lesson plans are very effective in providing you with a framework and objective for your lessons.
Preschool lesson plans should have a singular objective. What kind of objective? Well, for the objective, you need to decide what it is the preschool students need to learn. There are some very obvious things that they need to learn. On the academic side, they should try to learn their ABC's. They also should know how to count their basic numbers and know the names of the colors. It's also important for preschool students to learn the shapes and how to recognize differences. On the social side, preschool children should learn how to interact with other children in socially acceptable ways. This means learning to share and having good manners.
Of course, just like you can't put together a lesson plan to teach several math topics at once, you can't teach several of these objectives at once. That's why you focus on the one topic you want your preschool student to learn. With that objective in mind, you need to come up with activities or games that support the objective you are teaching.
The typical preschool lesson plan has three major components. First you introduce the topic and objective. Next you want to have the students practice the objective. Finally, you want them to produce it on their own. If you are teaching them the ABC's, then your lesson might consist of showing flash cards for each letter and identifying them. Next you might have students take turns trying to identify the letters. Finally you might end with the students singing the ABC song. A preschool lesson should be a focused plan of action to teach the objective in a clear, concise, but rewarding and varied way.
Saturday, 29 December 2012
BLACK SOCIAL HISTORY: PLANTATIONS - SLAVE RESISTANCE AND REVOLT:
Day-to-day resistance was the most common form of opposition to slavery: breaking tools, pretending to be ill, staging slowdowns, and committing acts of arson and sabotage. Running away was another form of resistance; most slaves ran away relatively short distances and weren't trying to permanently escape from slavery. They were temporarily withholding their labor as a form of economic bargaining and negotiation. Slaves would hold debates with their masters about the pace of their work, the amount of free time they would enjoy, monetary rewards, access to garden plots, and the freedom to practice burials, marriages, and religious ceremonies free from whites.
FDR Lesson Ideas
The FDR Portfolio Project
This project uses the editorial cartoons of the New Deal Era to satisfy one of the seven parts of the eleventh grade AP United States History portfolio requirements. The portfolio is a local requirement, not a College Board requirement. The work is challenging but can be adapted to fit
- Procedures For Teachers
- A guideline for teachers who want to create FDR cartoon portfolios.
- Directions for Students
- The student portfolio assignment.
- Drawing FDR
- Eitan Shapiro's comparison of cartoonist Jerry Doyle and Fred
- The First 100 Days
- Valerie Dorn's depiction of the first 100 days of FDR's tenure
- The Casablanca Conference
- Jesse Hoagg explores the significance of the 1943 Casablanca Conference and the start of the global era.
- Hugo Black and the KKK
- Avra van der Zee's commentary on the Supreme Court nomination of Hugo Black and his ties to the KKK.
- An Uncertain Relationship
- Ryan Fagan examines the uncertain relationship between FDR and John L. Lewis.
Project: Bring a Cartoon to Life
incorporated the FDR cartoons into a new text called Adventures in Time and Place. The lesson presented is oriented to grade 5 and is used to bring the Great Depression into context for our younger students. The project is called Bring a Cartoon to Life and is extremely well planned and organized.
Reversible reactions can go forwards or backwards.
The direction depends on conditions such as temperature, pressure, and concentration.
When a reaction reaches equilibrium, all the reactants and products are present in the equilibrium mixture. The amounts of the reactants and products don't change from then on.
When a reaction reaches equilibrium, it is shown with this symbol: ⇌
Different equilibrium reactions have different points of equilibrium.
For strong acids, equilibrium lies on the right.
For weak acids, equilibrium lies on the left.
In dynamic equilibrium the forward and backward reactions happen at the same time.
They both happen at the same rate. This explains why the amounts of chemicals in the reaction do not change.
Strong and Weak acids
In strong acids, most of the acid molecules ionize, so equilibrium lies on the right.
In weak acids, very few of the acid molecules ionize, so equilibrium lies on the left.
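For example (a standard illustration, using hydrochloric acid as the strong acid and ethanoic acid as the weak one):

HCl → H⁺ + Cl⁻ (ionization is essentially complete, so equilibrium lies far to the right)

CH₃COOH ⇌ CH₃COO⁻ + H⁺ (only a small fraction ionizes, so equilibrium lies far to the left)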
How to Make Salt Crystals
Salt crystals are created through a simple process that involves boiling water, measuring out salt until it no longer dissolves in water, and suspending a paper clip or ring a couple of centimeters above the bottom of the container. Form small crystals in a day with help from a science teacher and field biologist in this free video on chemistry.
Hi I'm Brian with EricksonTutoring.blogspot.com. Today we're going to run through the steps of how to make salt crystals. So salt crystals are cool little experiments you can do at home and the end result, the crystal itself, is pretty fun. So it's a simple process that we'll run through right now.

You start out with a jar of some sort with some water in it. Most people suggest that you have boiled water that's right at the point of having just boiled, so hot water. In my opinion it doesn't quite matter whether or not it's hot or cold water because no matter what water you have it's going to dissolve an equal amount of salt. So you have your water in a jar. The next step is you measure out salt, I suggest teaspoon by teaspoon, so measure out a teaspoon of salt, pour it into your water, stir thoroughly. You want to repeat this process until the salt is no longer dissolving in your water. So as I said before, there's a little debate, some people say that hot water's going to let you dissolve more salt.

Anyway, so once you've dissolved as much salt as you can into your water, take a paper clip or a ring, something like that, attach it to some string and suspend it above your jar with a pencil. So I didn't have a paper clip today so I used a keyring, it's going to work the same. So you then put this into your solution of salt and water and make sure that the paper clip or ring is roughly, oh, about a couple of centimeters off of the bottom of your jar. Cover it, let it sit and now it's time to wait.

In about a day you should have small crystals forming. If you keep track of it and let it continue to grow, it's going to get bigger. Just make sure to switch it into a new jar with new solution if you start to see crystals forming on the sides of the jar because those are going to compete with your crystal that you're growing. So have fun and enjoy the steps to making salt crystals.
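For reference, a rough estimate of how much salt the water will take (assuming table salt's solubility of about 36 g per 100 mL of water at room temperature and roughly 6 g per teaspoon; solubility rises only slightly with temperature, which is why hot versus cold matters little here):

```python
# Estimate the salt needed to saturate a jar of water (hypothetical jar size)
jar_ml = 500
solubility_g_per_100ml = 36        # NaCl at ~20 C; only ~39 even near boiling
grams_per_teaspoon = 6

salt_g = solubility_g_per_100ml * jar_ml / 100
print(salt_g, salt_g / grams_per_teaspoon)   # ~180 g, i.e. about 30 teaspoons
```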
Lycurgus, (flourished 7th century bc?) traditionally, the lawgiver who founded most of the institutions of ancient Sparta.
Scholars have been unable to determine conclusively whether Lycurgus was a historical person and, if he did exist, which institutions should be attributed to him. In surviving ancient sources, he is first mentioned by the Greek writer Herodotus (5th century bc), who claimed that the lawgiver belonged to Sparta’s Agiad house, one of the two houses (the other being the Eurypontid) that held Sparta’s dual kingship. According to Herodotus, the Spartans of his day claimed that Lycurgus’ reforms were inspired by the institutions of Crete. The historian Xenophon, writing in the first half of the 4th century bc, apparently believed that Lycurgus had founded Sparta’s institutions soon after the Dorians invaded Laconia (c. 1000 bc) and reduced the native Achaean population to the status of serfs, or helots.
By the middle of the 4th century bc, it was generally accepted that Lycurgus had belonged to the Eurypontid house and had been regent for the Eurypontid king Charillus. On this basis Hellenistic scholars dated him to the 9th century bc. In his Life of Lycurgus, the Greek biographer Plutarch pieced together popular accounts of Lycurgus’ career. Plutarch described Lycurgus’ journey to Egypt and claimed that the reformer had introduced the poems of Homer to Sparta.
In the light of the conflicting opinions about Lycurgus held by writers before 400 bc, some modern scholars have concluded that Lycurgus was not a real person. They point out that the Greeks tended to discuss the origins of political and social institutions in terms of the personal intentions of a single founder. Nevertheless, many historians believe that a man named Lycurgus should be associated with the drastic reforms that were instituted in Sparta after the revolt of the helots in the second half of the 7th century bc. Those scholars claim that, in order to prevent another helot revolt, Lycurgus devised the highly militarized communal system that made Sparta unique among the city-states of Greece. If that view is correct, it is probable that Lycurgus also delineated the powers of the two traditional organs of the Spartan government, the gerousia (council of elders, including the two kings) and the apella (assembly).
...agreement between experiment and theory had to await the development of quantum mechanics. Wavelengths for X rays range from about 0.1 to 200 angstroms, with the range 20 to 200 angstroms known as soft X rays.
...the radiation) is more than 10 orders of magnitude higher than the most powerful rotating anode X-ray machines. The synchrotron sources can also be optimized for the vacuum-ultraviolet portion, the soft (low-energy) X-ray portion (between 20 and 200 angstroms), or the hard (high-energy) X-ray portion (1–20 angstroms) of the electromagnetic spectrum.
Wind chill can best be described as a sensation that we feel as a result of the effects of wind and temperature. Wind chill is not something that can be measured using a device, so scientists have come up with a mathematical formula that relates wind speed and air temperature to the cooling sensation we feel on human skin.
Prior to 2001, wind chill was calculated based on the time it took for a cylinder of water to freeze in the wind, based on experiments conducted in Antarctica in 1939. In 2001 a team of scientists from Canada and the US developed a new wind chill index based instead on the loss of heat from people's faces, the part of the body most likely to be affected by wind chill. Wind chill does not impact inanimate objects like automobiles or tents, because these objects cannot cool below the actual air temperature.
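The 2001 index is published as a formula. In North American units (temperature T in °F, wind speed V in mph, valid for temperatures at or below 50 °F and winds above 3 mph), the wind chill is 35.74 + 0.6215T − 35.75V^0.16 + 0.4275TV^0.16, which is easy to compute directly:

```python
def wind_chill_f(temp_f, wind_mph):
    """2001 NWS wind chill index (valid for temp <= 50 F and wind >= 3 mph)."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# A 20 F day with a 25 mph wind feels like about 3 F on exposed skin
print(round(wind_chill_f(20, 25)))
```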
Understanding wind chill is particularly important to the prevention of frostbite and hypothermia. As wind speed increases, the body is cooled at a faster rate, causing one's skin temperature to drop. If your body is wet, wind can speed up the evaporation process and draw more heat away from your body. Studies show that when your body is wet, it loses heat much more rapidly than when it is dry.
The best way to protect yourself against wind chill is to find shelter and get out of the wind. If you are wet, change clothing, or remove clothes if you are sweating during strenuous activities such as hiking. When the wind chill is high, try to cover as much exposed skin as possible, particularly on your head, where up to 40% of body heat is lost. Wear a wind-resistant outer layer like a shell, and cover your hands and feet with mittens that cover your wrists and boots.
Written: 2008. Revised 2013.
The Laws of Thermodynamics
Life can exist only where molecules and cells remain organized. All cells need energy to maintain organization. Physicists define energy as the ability to do work; in this case, the work is the continuation of life itself.
Energy has been expressed in terms of reliable observations known as the laws of thermodynamics. There are two such laws. The first law of thermodynamics states that energy can neither be created nor destroyed. This law implies that the total amount of energy in a closed system (for example, the universe) remains constant. Energy neither enters nor leaves a closed system.
Within a closed system, energy can change, however. For instance, the chemical energy in gasoline is released when the fuel combines with oxygen and a spark ignites the mixture within a car’s engine. The gasoline’s chemical energy is changed into heat energy, sound energy, and the energy of motion.
The second law of thermodynamics states that the amount of available energy in a closed system is decreasing constantly. Energy becomes unavailable for use by living things because of entropy, which is the degree of disorder or randomness of a system. The entropy of any closed system is constantly increasing. In essence, any closed system tends toward disorganization.
Unfortunately, the transfers of energy in living systems are never completely efficient. Every body movement, every thought, and every chemical reaction in the cells involves a shift of energy and a measurable decrease of energy available to do work in the process. For this reason, considerably more energy must be taken into the system than is necessary to carry out the actions of life.
Apr 02, 2019 By Team YoungWonks
In our previous blog (https://www.youngwonks.com/blog/Self-driving-cars:-what-autonomous-driving-technology-can-do-today), you read what autonomous/automated driving is all about, its advantages and disadvantages, and what this technology means for several industries. In this blog, we shall take a look at how this autonomous/automated driving technology works.
At one point of time, a self-driving (autonomous) car, also known as a driverless car, would have been dismissed as a work of science fiction. But as it happens, with companies such as Tesla, Google/ Alphabet’s Waymo, General Motors (GM), Audi, BMW and even Mercedes Benz investing in autonomous driving, this is fast becoming a reality. In fact, according to The Telegraph, the driverless technology industry is expected to be globally worth £900 billion by 2025 and is said to be growing annually by a whopping 16 per cent.
All of which brings us to the question: how do driverless cars work?
To begin with, there are various systems that work with each other so as to control a driverless car. So if we want to understand how an autonomous vehicle works, we first need to look at the functional architecture of the said vehicle. Now the functional architecture is to the vehicle what the anatomy diagram is to us humans. In other words, it refers to how the major parts of the vehicle work together so as to achieve the mission of self-driving without flouting any legal or ethical codes.
Autonomous car vs traditional car
Now before we look at how an autonomous/ self-driving car functions, let’s look at how a traditional car - that needs a human driver at the wheel - works.
In a traditional car, the control of the car rests with the human driver who sees objects in the surroundings with his/ her eyes, processes this information with his / her brain and then the brain passes this information on to the driver’s limbs (arms and legs) which then carry out the appropriate tasks / commands such as braking or accelerating depending upon the situation.
As opposed to this, in an autonomous car, the human driver is taken out of the equation; here the car is - to put it simply - controlled by a computer. Here it’s the sensors which perform the function of the eyes - i.e. they detect and observe the objects in the surroundings. This information is then passed on to the brain equivalent of the autonomous car - the computer - which then transmits the appropriate instruction to the car’s actuators, which in turn are the arms and legs equivalent here. This means it’s the actuators which carry out the task of braking or accelerating.
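In code, that sense-decide-act cycle might look something like the toy sketch below (the classes and the threshold are purely illustrative stand-ins, not any real vehicle's API):

```python
class Sensors:
    """The 'eyes': cameras, lidar, radar, ultrasonic sensors."""
    def detect(self):
        return {"obstacle_ahead_m": 12.0}   # pretend something was seen 12 m ahead

class Computer:
    """The 'brain': turns observations into driving commands."""
    def plan(self, observation):
        return "brake" if observation["obstacle_ahead_m"] < 15 else "accelerate"

class Actuators:
    """The 'limbs': brakes, throttle, steering."""
    def apply(self, command):
        print(f"actuators: {command}")

sensors, computer, actuators = Sensors(), Computer(), Actuators()
for _ in range(3):   # a few iterations of the continuous control loop
    actuators.apply(computer.plan(sensors.detect()))
```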
This brings us to a significant component of the autonomous car - the sensors. Sensors indeed play a significant role when it comes to the functioning of an autonomous vehicle. They help detect and observe the objects in the surroundings which is a key element when it comes to the smooth operation of such a self-driving vehicle. There are different kinds of sensors that are being used by autonomous cars today; each of them has its own unique features. Here we take a look at them.
a. Cameras

[Image: cameras detecting cars on the road]
Cameras in autonomous driving work in the same way our human vision works and they use technology similar to the one found in most digital cameras today. Most autonomous cars today have an average of 8 cameras fitted in as this multitude of cameras can scan the environment better and from different angles (read: from the sides and the front and back). Having so many cameras also provides the car with good depth perception that is essential to a good driving experience. Tesla cars, for instance, rely heavily on cameras as they help the cars build a 3D map of their surroundings.
Advantages: Cameras are easier to incorporate in cars as video cameras are already easily available in the market. This means an autonomous carmaker will not need to work from scratch. They also aren’t blind to weather conditions such as fog, rain and snow; all that a human driver can navigate, a camera-fitted autonomous car can too - be it reading street signs or interpreting colors. Moreover, they can be easily hidden within the car’s structures, thus not compromising on the car’s appeal. They are also way less expensive when compared to sensors such as lidars. So using cameras helps bring down costs of self-driving cars.
Disadvantages: Cameras, just like humans, do not produce good results when lighting conditions change and leave objects obfuscated. So strong shadows or bright lights either from the sun or an oncoming car can confuse the cameras. Cameras also typically send what is essentially raw image data back to the system, unlike say a lidar where exact distance and location of an object is provided. That means camera systems will need to depend on powerful machine learning (neural networks or deep learning) computers that can process those images to determine exactly what is where.
b. LIDAR

LIDAR stands for Light Detection and Ranging and is a remote sensing method that uses light in the form of laser beams so as to measure ranges (variable distances) to the earth. So basically a lidar sends out millions of laser light signals per second and measures how long it takes for them to bounce back.
How does it do it? The answer is rather simple. We know that the speed of light is 299,792,458 meters per second. So when a lidar tracks the time taken for a beam of light to travel and hit an object and bounce back, we can multiply the time taken and the speed of light to calculate the distance travelled by the beam of light. But bear in mind that since the beam has travelled both to the object and back to the lidar, we need to divide this distance by 2 in order to arrive at the exact distance between the lidar of the autonomous vehicle and the object hit by the beam.
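As a quick sketch, the same round-trip arithmetic works for any echo-based sensor; only the wave speed changes (radio waves travel at light speed, sound at about 343 m/s):

```python
SPEED_OF_LIGHT = 299_792_458   # m/s (lidar and radar)
SPEED_OF_SOUND = 343.0         # m/s (ultrasonic sensors)

def distance_m(round_trip_s, wave_speed=SPEED_OF_LIGHT):
    # the pulse travels out to the object AND back, hence the division by 2
    return wave_speed * round_trip_s / 2

print(distance_m(2e-7))                  # a 200 ns laser echo -> ~30 m away
print(distance_m(0.06, SPEED_OF_SOUND))  # a 60 ms ultrasonic echo -> ~10 m away
```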
This means that a lidar makes it possible to create a very high-resolution picture of a car’s surroundings, and in all directions, if it’s placed in the right spot (like the top / roof of a car where it keeps rotating to scan the surroundings). It continues to be this precise even in the dark since the sensors are their own light source.
Advantages: Lidars are accurate; they offer a 360 degree view and - in 3D imagery at that - of the car’s surroundings as they detect things across a long distance. They are also very good at capturing information in different types of ambient light (night or day) as they are not dependent on external light sources. This is a key advantage because cameras are worse in the dark, and radar and ultrasonic sensors aren’t as precise either. Lidars also save computing power as they can immediately tell the distance to an object and direction of that object. A camera-based system, on the other hand, needs to take images and then analyze them to arrive at the distance and speed of objects, thus taking up far more computational power.
Disadvantages: On the downside, lidars are rather expensive - way more than other sensors, in fact. At one point of time, they were as expensive as 75,000 USD. And while they are cheaper than that (now down to around 7500 USD), they are still not cheap. Plus they are quite bulky, as they involve several moving mechanical parts (for now, at least). In many systems, lidars cannot yet see well through fog, snow and rain; nor do they give information that cameras can easily do, like the words on a sign, or the color of a light.
c. RADAR

RADAR, short for Radio Detection and Ranging, is also a remote sensing method, except that instead of using light, it uses radio waves/frequencies so as to measure ranges (variable distances) to the earth.
In other words, radar uses radio waves to detect objects and determine their range, angle, and/or velocity.
And how does it do it? Again, we know that radio waves being electromagnetic waves, their speed is the same as the speed of light, i.e. it is 299,792,458 meters per second. So when a radar tracks the time taken for a radio wave to travel and hit an object and bounce back, we can multiply the time taken and the speed of the radio wave to calculate the distance travelled by the wave. But bear in mind that since the wave has travelled both to the object and back to the radar, we again need to divide this distance by 2 in order to arrive at the exact distance between the radar of the autonomous vehicle and the object hit by the radio wave.
Advantages: Radars are not exactly a new technology, which means they have only become better with time and they are fairly inexpensive, thus allowing autonomous car companies to cut costs. They are quite reliable in the long-term as they do well in weather conditions such as fog, rain, snow and dust. They can typically see a longer distance than lidar which is important for vehicles such as trucks.
Disadvantages: Radars are less angularly accurate than lidar, as they can lose sight of the target vehicle on curves. Their object resolution is poor, and they can get confused if multiple objects are placed very close to each other. For instance, a radar can consider two small cars in the vicinity as one large vehicle.
d. Ultrasonic sensors
The ultrasonic sensor works on a principle similar to radars and lidars, except that it does so using ultrasonic (sound) waves. The sensor head emits an ultrasonic wave and receives the wave reflected back from the target (the object being detected). Then the sensor measures the distance to the object by measuring the time between the emission and reception. Distance to the object is calculated by first multiplying the speed of sound (343 meters per second) into the time taken for the sound wave to hit the object and come back; and then dividing the value by 2 as the time here includes the time for go-and-return.
Advantages: Ultrasonic sensors reflect sound off of objects, so the color or transparency have no impact on the sensor’s reading. Similarly, dark environments have no adverse bearing on the sensor’s object detection abilities. They are also fairly easy to use and easy on the pocket; nor are they highly affected by dust, dirt, or high-moisture environments. Moreover, they can easily interface with microcontrollers.
Disadvantages: Their readings get affected by objects covered in soft fabrics that absorb sound, and by more than a 5-10 degree variation in the weather conditions. But the biggest disadvantage is their limited range; they can only detect objects within a range of three to 10 meters, as opposed to lidars and radars, which have a higher range (lidars have a range of up to 300 meters). This means ultrasonic sensors are ideal for detecting only nearby objects and not faraway ones.
e. Inertial Measurement Unit (IMUs)
An inertial measurement unit (IMU) is an electronic device that calculates and reports an object's specific force, angular rate, and sometimes the magnetic field surrounding the body, by using a combination of accelerometers and gyroscopes, sometimes also magnetometers. So basically an IMU can detect whether the car is already moving or not, in what direction (backward or forward), whether it is turning, and so on. While the accelerometer detects the direction of motion (backward or forward), the gyroscope detects rotational motion. The latter are also used in phones, which is why images and videos change from portrait mode to landscape mode when we turn the phone; it's the gyroscope that helps in that detection. Similarly, the accelerometer in phones helps the phone wake up when it is moved; it also helps change the song when we shake the phone in a certain manner. Together these accelerometers and gyroscopes form the inertial measurement unit/system.
Advantages: IMUs are widely in use. They are typically used not just in self-driving vehicles but also to maneuver aircraft and spacecraft, including satellites and landers. An IMU even lets a GPS receiver work when GPS-signals are unavailable, say in tunnels, inside buildings, or when there is electronic interference.
Disadvantages: A big disadvantage of using IMUs for navigation is that they usually suffer from accumulated error. Since the guidance system is continually integrating acceleration with respect to time to calculate velocity and position, any measurement errors, however tiny, add up over time. This leads to an ever-growing difference between where the system perceives it is located and the actual location.
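A small illustration of that drift (the bias value here is hypothetical; real systems correct it by fusing GPS and other sensors): even a tiny constant accelerometer bias, integrated twice, grows quadratically in position.

```python
# Dead-reckoning drift from a constant 0.01 m/s^2 accelerometer bias
bias, dt = 0.01, 0.01           # m/s^2 error, 100 Hz update rate
v_err = x_err = t = 0.0
for _ in range(60 * 100):       # simulate one minute
    v_err += bias * dt          # first integration: velocity error
    x_err += v_err * dt         # second integration: position error
    t += dt
print(f"after {t:.0f} s: {v_err:.2f} m/s and {x_err:.1f} m of error")
# ~0.60 m/s and ~18 m of error after just one minute
```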
Autonomous Driving Today - Google vs Tesla
While there are many players in the autonomous driving industry today, it’s Waymo (Google) and Tesla that have pioneered the field so far. Others - such as Audi, Mercedes and General Motors through its arm Cruise Automation - are yet to make as many strides.
Both Tesla and Waymo are aiming to collect and process enough data to create a car that can drive itself. But they are trying to do this in different ways and on different scales.
Tesla is looking to take advantage of the hundreds of thousands of cars it already has on the road by collecting real-world data about how those vehicles perform (and how they might perform) with Autopilot, its current semi-autonomous system. In fact, its system is said to be based on monocular forward-looking camera technology from Mobileye. This means that Tesla’s system is not really able to localize itself on a map to the degree needed for lane keeping (its GPS isn’t as good as what Google uses). That said, the forward-looking camera can detect the location and curvature of highway lane markers, which is typically enough to ensure that the car stays in its lane and carries out basic lane-change maneuvers. And because cameras alone are not ideal and lidars are rather expensive, Tesla also incorporates a radar at the front of its cars for additional input.
All in all, Tesla’s cars use eight cameras, 12 ultrasonic sensors, and one forward-facing radar. For Tesla, the emphasis has been on achieving a low-cost solution, and it is said to be on track to automate up to 90 percent of driving in the coming years. This also means that the remaining 10 percent of driving situations will still need a driver, and getting around those situations - making them driverless - will continue to be tricky.
Waymo, Google’s self-driving car project, uses powerful computer simulations and feeds what it learns from them into a smaller real-world fleet. It has already simulated 5 billion miles of autonomous driving and has logged 5 million self-driven miles on public roads. It has also put fully self-driving - yes, driverless - cars on public roads. On December 5, 2018, Waymo launched its first commercial self-driving car service, Waymo One, through which users in the Phoenix metropolitan area can use an app to request a ride.
And as opposed to Tesla, which relies more on computing power, Waymo’s self-driving minivans aim to reduce the load on their software by using several lidars - at the front, on the sides and on the roof. All in all, they use three different types of lidar sensors, five radar sensors, and eight cameras. The autonomous driving system relies on a 64-beam lidar to localize itself to within 10 cm on a detailed pre-existing map. It uses the lidar data - which is said to be very precise - to build a 360-degree world model that tracks and predicts movements for all nearby vehicles, pedestrians, and other obstacles. This in turn allows it to navigate intelligent paths through complex highway or urban environments.
Autonomous Driving - The Road Ahead
As seen above, a key difference between Tesla and Waymo is the use of lidar; the latter uses it and the former doesn’t. One of the primary advantages of lidar is accuracy and precision. In December 2017, automotive news website The Drive reported that Waymo’s lidar is so advanced that it can tell what direction pedestrians are facing and predict their movements. Chrysler Pacifica minivans fitted with Waymo’s lidar are said to see the hand signals that bicyclists use and predict which direction the cyclists will turn. Of course, as mentioned earlier, lidars are typically expensive, and if Tesla actually makes smoothly driving autonomous cars possible without lidar, it would be a big win for the company.
But the fact so far remains that lidars - while still more expensive than other sensors - have become cheaper and are set to become cheaper still. At one point, lidars were priced at 75,000 USD, driving up the cost of the car itself. But with Google’s Waymo making its own lidars, not just for its vehicles but also for other automobile companies, lidar costs have dropped drastically to around 7,500 USD. What’s more, much like other electronics, lidars are said to become cheaper in the coming months, even to around 5,000 USD.
V2V and V2I technology
Another development expected in the autonomous driving industry is the rise of V2X (vehicle-to-everything) technology, which includes V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) communication. V2V and V2I components will make it possible for the autonomous vehicle to interact with and get information from other machine agents in the environment, such as a traffic light transmitting that it has turned green, or warnings from an oncoming car.
Electrified roads for charging vehicles
In 2018, the world got its first electrified road for charging vehicles, near Stockholm in Sweden. Essentially a stretch of about 2 km (1.2 miles) of public road, it has an electric rail embedded in it which recharges the batteries of cars and trucks driving over it. Moreover, the government’s roads agency is said to have already drawn up a national map for future expansion. This move is expected to help keep electric vehicles charged and to keep the manufacture of their batteries affordable.
Indeed, while such specialised roads may initially exist only in certain parts of the world, they will be a huge blessing for electric vehicles, which won’t have to stop for recharging; this would in turn help reduce dependence on fuel. In addition, the roads - and the electric rails or inductive wires embedded in them - could become a source of revenue for those maintaining them.
*Contributors: Written by Vidya Prabhu; Lead image by: Leonel Cruz
Protecting the environment has several benefits for people. Making changes to reduce the amount of energy a person uses at home or at work not only preserves resources for future generations; it can also reduce home heating and other energy bills. Using less also means less waste, which likewise helps a person’s budget. There are large ways people can reduce their energy use to help protect the environment, such as driving less or flying less often, but small changes also have a big impact on the health of the planet.
A person can make a number of barely noticeable changes to their daily habits to help save the environment. For example, if a person is in the market for a new refrigerator, television or other appliance, they should look for energy-efficient models; in the US, these models carry the Energy Star label. Energy-efficient appliances need not cost more than less efficient ones, and over time they will save a person money, since they require less electricity, oil or gas to operate, as the rough calculation below suggests.
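A back-of-the-envelope sketch of that saving, with all figures assumed for illustration:

```python
# Yearly savings from an energy-efficient refrigerator (all values assumed).
old_kwh_per_year = 600     # typical older model
new_kwh_per_year = 350     # Energy Star-style efficient model
price_per_kwh = 0.13       # USD

savings = (old_kwh_per_year - new_kwh_per_year) * price_per_kwh
print(f"saves about ${savings:.2f} per year")   # -> about $32.50
```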
Another everyday change people can make to protect the environment is to recycle. The rules for recycling vary from location to location. A person should find out what rules their city has in place. Some cities will pick up every type of plastic, as well as glass, paper and metal at the curb. Others ask that people bring their recyclables to a drop off location. Some places will only recycle certain types of plastic.
Composting is another change people can make in their daily lives to help the Earth. If a person has the space, they can compost at home, either using a three bin system outside or by using a worm bin. If there is no room for a bin at home, they can arrange for compost pick up with a local company. Although not every city offers compost pick up with the trash and recycling, a few do.
Before a person can make even the smallest changes to their lifestyle, it helps to understand the environmental impact of certain choices. Online calculators are particularly helpful for figuring out how many resources a person uses and how big of an environmental footprint they have. Different calculators measure different areas. Some will figure out how much pollution a person's car contributes, while others measure the amount of paper a person uses. Other calculators look at the big picture and measure all areas.
Australian history dates back to prehistoric times about forty-one thousand years ago, when the earliest humans inhabited the Australian continent, and runs to the first recorded European sighting of Australia in 1606. However, since no written record of human activity in Australia has been found for that earlier period, the era is treated as prehistory rather than history.
Written Australian history began in 1606, when the Dutch navigator Willem Janszoon, in his ship Duyfken, plotted a route to the Gulf of Carpentaria and first sighted and made landfall on the western coast of Cape York Peninsula. In 1616, Dirk Hartog, a Dutch sailor, took a new southern route from his usual course across the Indian Ocean and ended up on an offshore island of Western Australia, becoming the first recognized European to set foot on the soil of Western Australia.
Captain James Cook left England in 1768 for a three-year voyage to the Pacific that also led him to Australia; he landed on the eastern coast at Botany Bay on April 29, 1770. Mapping the region, Cook named the area New South Wales, and it was he and his crew, together with the botanist Sir Joseph Banks, who soon after recommended settlement in Australia, adding another chapter to Australian history.
Two more expeditions by Cook in the 1770s added information about the Australian continent and paved the way for Britain's claim to it. It was during this period that the Aboriginal people refused to accept the influence of the Europeans, resulting in frequent cultural clashes.
Although the Europeans found Australia an unappealing and secluded place to settle, it held social and strategic importance for a homeland with rising crime rates and profit-making interests in the Pacific and East Asia. Prisoners, too, were sent to penal settlements in Australia because of the overcrowding in British penitentiaries.
After the end of the American Revolution in 1783, Britain moved swiftly to create its first settlements in Australia, as it could no longer transport British convicts to America. More than 150,000 prisoners were sent to two colonies in Australia by the middle of the 1800s, which shaped the early territories of Western Australia and New South Wales.
From 1793 onward, more free settlers arrived and a strong economy began to build up. However, Australian history also includes the many clashes and much bloodshed involving the Aboriginal people, who rejected the new settlements. From 1820 to 1880, Australia went through significant developments that laid the foundation for its current society.
The Australian constitution took effect in 1901, derived from British parliamentary traditions and including elements of the United States system. At the heart of Australian history in the twentieth century has been the development of both a national government and a national culture.
What is an enclosed space?
An enclosed space is a space that contains a limited number of openings, entries or exits.
In these spaces, ventilation is unfavorable.
Examples of enclosed spaces include double bottoms, fuel tanks, and empty spaces between bulkheads.
The officer of the watch only gives the authorization of entry after making sure that it is safe.
Before anyone enters an enclosed space, the officer of the watch needs to make a preliminary inspection to determine the likelihood of an oxygen deficiency or of a flammable or toxic atmosphere.
Only trained personnel should enter these spaces. They must carry portable VHF radios so they can communicate with the crew outside.
There must be means to rescue the people who go inside these spaces and a stand-by rescue team.
The atmosphere must be tested before anyone goes inside the enclosed space, and at regular intervals during the operations.
Oxygen levels and the reactions they provoke in humans:
- 23.5%- Disorientation, breathing and vision problems;
- 19.5%- Lowest acceptable oxygen levels;
- 15% to 19%- Lower coordination, lower capability of working energetically;
- 12% to 14%- Heavier breathing;
- 10% to 12%- Increased breathing rate and blue lips;
- 8% to 10%- Disorientation, fainting, nausea, loss of conscience;
- 6% to 8%- Exposure for more than 8 minutes is fatal; 6 to 8 minutes: 50% chance of death; 4 to 5 minutes: possible recovery.
- 4% to 6%- Coma in 40 seconds; death in 3 minutes.
Precautions after entering the enclosed space:
- Regular atmosphere tests;
- Continuous ventilation;
- When there is any doubt whether it is safe to enter, entry should be made only if someone’s life is at risk or the safety of the ship and crew is on the line;
- Use proper clothes and breathing apparatus.
There must always be a rescue team ready to enter the enclosed space, equipped with the following:
- 2 autonomous breathing apparatus;
- 2 bottles of compressed air;
- 2 flashlights;
- Safety harness;
- Safety Lifeline;
- First aid kit;
- Elevation tripod. |
by Yoon Joung Lee
During the American Civil War, there was a woman called “Moses” by hundreds of slaves. Harriet Tubman, a runaway slave from Maryland, was born around 1820. She was an abolitionist, Civil War spy, nurse, humanitarian, and Underground Railroad conductor. She was the fifth of nine children. As her parents were slaves, she was born into slavery. She had a harsh childhood and was whipped even as a very young child. When she was 12 years old, she suffered a serious injury when she was hit on the head by a two-pound iron weight thrown by one of the slave overseers. The incident occurred because Harriet blocked a doorway to protect another slave who was attempting to escape. The injury left her with narcolepsy, a sleep disorder that causes excessive sleepiness and frequent daytime sleep attacks, a condition that followed her for the rest of her life.
She married a free African American man named John Tubman in 1844. In 1849 she decided to run away because she was afraid of being sold or sent further south after the owner of the farm where she and her family lived died. With the help of a white neighbor, Harriet was able to reach the Underground Railroad, a secret network of “safe houses” that helped slaves move safely from one place to another, and eventually made her way toward the Canadian border. Despite her successful escape to the North, she went back to the South a year later to rescue her sister and her sister’s children. On her second return she saved her brother and two other slaves. When she made the dangerous third trip back to the South to rescue her husband, she found out that he had taken another wife, and she instead came back with other slaves who longed for freedom.
After she escaped from slavery, Harriet made a total of 19 trips to the South to bring slaves to the North, where they could gain freedom. As one of the conductors who traveled with escaping slaves on the Underground Railroad, Harriet saved approximately 300 slaves from the South, and about 100,000 slaves in all were able to escape during a 40-year period with the help of the Underground Railroad.
What made her extremely courageous and confident was her belief that God was aiding and equipping her throughout her journeys as a “conductor.” There were times when she had to be forceful to lead the slaves out of slavery. She would take out a gun and challenge those who faltered, saying, “Do you want to die here or gain your freedom?” She required obedience from the slaves, and they knew they could successfully escape only if they followed her guidance. For them, Harriet was known as “Moses,” like the Moses who led the people of Israel out of Egyptian slavery.
She was friends with many influential people of the time. In the mid-1850s, she met William H. Seward, a US senator and former New York governor. He and his wife later provided her with various kinds of help, and offered her and her family houses to stay in when she was no longer on the road after the Civil War. She also befriended the abolitionists John Brown, Frederick Douglass, Jermain Loguen and Gerrit Smith, and worked closely with them to fight slavery.
While she was guiding a group of African American soldiers in South Carolina after the outbreak of the Civil War, she met her future husband, Nelson Davis, who was 10 years younger than Harriet. They got married when she moved to Auburn, New York, after the end of the Civil War. They built a house and spent the rest of their lives in Auburn. During her time in Auburn, she continued to work passionately for human rights, including women’s rights, and kept in touch with her beloved friends, William Seward and his wife Frances. Harriet died in 1913 in Auburn, New York, at the age of 93. Her heroic life still inspires many people in the United States.
Among the influential composers of baroque music, there have been few who contributed so much in talent, creativity, and style as Johann Sebastian Bach. Bach was a German organist and composer of the baroque era. He was born on March 21, 1685 in Eisenach, Thuringia, and died July 28, 1750. Bach revealed his feelings and his insights in his pieces. His mastery of all the major forms of baroque music (except opera) resulted not only from his genius and talent, but also from his lifelong quest for knowledge. In some parts of Germany, the name “Bach” became a synonym for the word “musician.” Extremely talented in the art of baroque composition, Bach placed his heart, soul, and ingenuity in his music, as is clearly illustrated in his childhood, throughout his career, and of course through his musical works. Bach’s connection to music is already evident in his childhood. Bach was born into a musical family in Eisenach. His father, before dying, taught him the basic skills of string playing, and an organist at a church taught him how to play the organ. When both of his parents died, he continued to devote his early life to music. His brother Johann Christoph continued to teach him how to play the organ.
Furthermore, he won a scholarship and became part of a school choir of poor boys in Lüneburg. Already apparent were the sheer genius and talent he possessed for music. Clearly, his childhood played a big part in building a solid foundation for his music. Bach’s heart in music does not end with his childhood but runs all through his career. A master of several instruments, he became a violinist in a court orchestra when he was only 18. Later, he became the organist of several churches in Arnstadt. Through these posts he developed a reputation for brilliant musical talent.
Also, because of his perfectionist tendencies and high expectations of other musicians, he quarreled many times, which is yet another example of his passion for music. Furthermore, he once walked over 200 miles because the great organist Buxtehude influenced him. As an organist and a choirmaster, Bach continued to devote his life to composing music for churches. He would work under dim light creating these masterpieces. After conducting and composing for the court orchestra at Köthen for seven years, Bach accepted the job of music director at the St. Thomas Church in Leipzig. His passion for music went on even further after he became blind. He was still creatively active until the very end. Even just before his death, he dictated his last musical composition to his son-in-law. His career was surely a massive indication of his talent and the heart and soul he put into his music. The effort of his devotion to music seen in his life and career will hopefully never be forgotten, and one should also take notice of the sheer genius this composer displayed in his musical works.
Bach’s expressive genius in working counterpoint was a clear indication of his understanding and use of every resource of musical language in the baroque era. He would weave several melodic lines together with great skill. Moreover, through several of his pieces it is clear that his religion influenced him greatly. He even chose to put different cultures in his pieces: he would combine patterns of French dances, Italian melodies, and German counterpoint all in one when he wished. As well, because of the influence of the great organist Buxtehude, he incorporated vocal parts in his pieces at one point in his life.
However, later in his works he wrote for various instruments, and he used each instrument’s unique properties of construction and tone quality to perfect his compositions. This was a great characteristic of the baroque. He also wrote music with themes such as representing the sea or Christians following the teaching of Jesus. Bach was even able to convey and exploit the media, styles, and genres of his day, which remarkably allowed him to change the instrumentation of a piece to make it simpler. For instance, he could take a violin concerto and rework it as a solo piece for the harpsichord.
At the same time, he was the supreme master of the fugue and of the solo violin repertoire. Bach’s complex thinking let him create beautiful and perfect solos and compositions for orchestra or choral ensemble. Surely, through Bach’s hundreds of works, not only is his sheer genius shown but so are his true love, heart, soul, and passion for music. Johann Sebastian Bach was a genius who accomplished a great deal throughout his life. Bach was a creator and innovator of the styles of the baroque. He came from a family that produced seven generations of musicians. He may be the greatest master of all classical music.
Throughout all of his life, there may not be another composer who had a deeper passion for music than Bach. Bach’s heart, soul and ingenuity were all displayed through his hundreds of musical works, from his childhood and throughout his career. He gave a new meaning to true passion. He found beauty in, and perfected, all his pieces. Neither others nor miles of walking stopped him from keeping the flame in his heart for music. Therefore, any student who is serious about music should attain a true Bach-like passion and dedication in order to create the kind of sheer genius that Bach did in a short period of time.
When neurobiologist Zachary Hall was finishing his PhD thesis on the neurobiology of nest-building in birds, he came across a peculiar issue – hardly any research had been done on why birds make so many radically different nest structures. Surely evolutionary biology would have something to say about it.
Hall and a couple of colleagues decided to start somewhere with this research. They picked and statistically analysed previously published descriptions of Old World Babbler (Timaliidae) nests, which come in two flavours – either cup-shaped and open, or domed with a roof. What they wanted to figure out was the evolutionary history that led to some species of these birds building domed nests, while others are perfectly happy with open ones.
As early as 1997, biologist Nicholas Collias proposed a hypothesis: that in terms of their evolutionary history, babblers started out building nests high up off the ground, and thus had no need for a dome. However, once space up there ran out and they had to compete for prime nest-estate down on the ground, a domed nest would be more useful for protecting their tiny chicks from predators.
However, at the time there was no direct evidence to support Collias’ hypothesis. Now, with the help of computationally complex mapping of nest heights and types onto evolutionary branches of various babbler species, Hall and colleagues determined that moving a nest closer to the ground would indeed predict a roof addition to the cup-shaped nest.
Their formal analysis was a novel approach, and the authors hope that the way they analysed the evolution of nest-building hand in hand with the ecological differences that bird species face could be used to figure out other features of bird nest evolution.
Define the term 'Mobility' of charge carriers in a conductor. Write its S.I. unit
Drift velocity per unit applied electric field is known as the mobility of charge carriers in a conductor.
S.I. unit of mobility: m² V⁻¹ s⁻¹ (equivalently, m²/(V·s)), since drift velocity (m s⁻¹) is divided by electric field (V m⁻¹).
Given a uniform electric field E = 5 × 10³ N/C, find the flux of this field through a square of 10 cm on a side whose plane is parallel to the y-z plane. What would be the flux through the same square if the plane makes a 30° angle with the x-axis?
Given: E = 5 × 10³ N/C along the x-axis; side of the square = 10 cm, so A = (0.10 m)² = 10⁻² m². Since the plane of the square is parallel to the y-z plane, the area vector points along the x-axis, i.e. θ = 0°.
Electric flux, ϕ = E·A cos θ = (5 × 10³)(10⁻²) cos 0°
= 50 N m² C⁻¹
When the plane makes a 30° angle with the x-axis, the area vector makes 60° with the x-axis
ϕ = E·A cos θ
⇒ ϕ = (5 × 10³)(10⁻²) cos 60°
⇒ ϕ = 25 N m² C⁻¹
The carrier wave is given by C(t) = 2 sin(8πt) volt.
The modulating signal is a square wave as shown. Find modulation index.
The generalized equation of a carrier wave is given by c(t) = Ac sin(ωc t).
The generalized equation of a sinusoidal modulating signal is given by m(t) = Am sin(ωm t); here the modulating signal is a square wave whose amplitude is read off the plot.
On comparing the given equation of the carrier wave with the generalized equation, we get the amplitude of the carrier wave, Ac = 2 V.
From the square-wave plot, the amplitude of the modulating signal is Am = 1 V.
The modulation index is the ratio of the amplitude of the modulating signal to the amplitude of the carrier wave:
μ = Am / Ac = 1/2 = 0.5
State Kirchhoff's rules. Explain briefly how these rules are justified.
Kirchhoff’s First Law or Junction Rule states that “The sum of the currents flowing towards a junction is equal to the sum of currents leaving the junction.”
This is in accordance with the conservation of charge which is the basis of Kirchhoff’s current rule.
Here, I1, I2, I3, and I4 are the currents flowing through the respective wires.
Convention: The current flowing towards the junction is taken as positive and the current flowing away from the junction is taken as negative.
I3 + (− I1) + (− I2) + (− I4) = 0
Kirchhoff’s Second Law or Loop Rule states that In a closed loop, the algebraic sum of the emfs is equal to the algebraic sum of the products of the resistances and the currents flowing through them.
Equivalently: “The algebraic sum of all the potential drops and emfs along any closed path in a network is zero.”
For the closed loop BACB:
E1 − E2 = I1R1 + I2R2 − I3R3
For the closed loop CADC:
E2 = I3R3 + I4R4 + I5R5
This law is based on the law of conservation of energy.
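As an illustration of how the two rules pin down the currents in a network, here is a hedged sketch in Python. The circuit (two batteries each driving a branch into a shared resistor) and all component values are our own assumptions, not the figure referred to above.

```python
import numpy as np

# Assumed two-loop circuit: E1 and E2 each drive a branch (R1, R2) into a
# shared resistor R3.  Unknowns are the branch currents I1, I2, I3.
E1, E2 = 12.0, 9.0            # emfs, volts
R1, R2, R3 = 2.0, 3.0, 4.0    # resistances, ohms

# Junction rule:  I1 + I2 - I3 = 0
# Loop rule:      I1*R1 + I3*R3 = E1
# Loop rule:      I2*R2 + I3*R3 = E2
A = np.array([[1.0, 1.0, -1.0],
              [R1,  0.0,  R3],
              [0.0, R2,   R3]])
b = np.array([0.0, E1, E2])

I1, I2, I3 = np.linalg.solve(A, b)
print(f"I1 = {I1:.3f} A, I2 = {I2:.3f} A, I3 = {I3:.3f} A")
# -> I1 = 1.846 A, I2 = 0.231 A, I3 = 2.077 A
```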
Write the expression, in vector form, for the Lorentz magnetic force on a charge moving with velocity v in a magnetic field B. What is the direction of the magnetic force?
Lorentz magnetic force is given by F = q(v × B),
where q is the magnitude of the moving charge, v is its velocity, and B is the magnetic field.
The direction of the magnetic force is perpendicular to the plane containing the velocity vector and the magnetic field vector.
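A quick numerical check of the cross product (illustrative values only - a positive charge moving along +x through a field along +z):

```python
import numpy as np

q = 1.6e-19                    # charge, coulombs (a proton, say)
v = np.array([1e5, 0.0, 0.0])  # velocity along +x, m/s
B = np.array([0.0, 0.0, 0.5])  # magnetic field along +z, tesla

F = q * np.cross(v, B)         # Lorentz magnetic force, F = q (v x B)
print(F)                       # -> [ 0.e+00 -8.e-15  0.e+00 ] newtons
# The force points along -y, perpendicular to both v and B, as stated above.
```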
For any charge configuration, equipotential surface through a point is normal to the electric field. Justify.
Work done (W) in moving a test charge along an equipotential surface is zero.
Work done is given by W = F s cos θ,
where F is the electric force on the charge, s is the magnitude of the displacement, and θ is the angle between them.
For a non-zero force and displacement, W = 0 is possible only when cos θ = 0, i.e. θ = 90°.
Thus, the force acting on the point charge is perpendicular to the equipotential surface.
Electric field lines give us the direction of electric force on a charge.
Thus, for any charge configuration, equipotential surface through a point is normal to the electric field.
Two spherical bobs, one metallic and the other of glass, of the same size are allowed to fall freely from the same height above the ground. Which of the two would reach earlier and why?
The glass bob would reach the ground earlier. The glass bob, being non-conducting, experiences only Earth’s gravitational pull, unlike the metallic bob, which is conducting.
Since the metallic bob is conducting, eddy currents are induced in it as it falls through the magnetic field of the Earth. As per Lenz’s law, these currents oppose the motion of the metallic bob. Hence, there is a delay.
A capacitor 'C', a variable resistor 'R' and a bulb 'B' are connected in series to the ac mains in circuit as shown. The bulb glows with some brightness. How will the glow of the bulb change if (i) a dielectric slab is introduced between the plates of the capacitor, keeping resistance R to be the same; (ii) the resistance R is increased keeping the same capacitance?
(i) When a dielectric slab is introduced between the plates of the capacitor, its capacitance will increase. Hence, the potential drop across the capacitor will decrease (V = Q/C), thereby increasing the potential drop across the bulb, because the bulb and the capacitor are connected in series. So the brightness of the bulb will increase.
(ii) When the resistance R is increased keeping the capacitance the same, the potential drop across the resistor will increase. Therefore, the potential drop across the bulb will decrease, because the elements are connected in series. So the brightness of the bulb will decrease.
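A hedged numerical sketch of both cases; the mains voltage, bulb resistance and component values are all assumed for illustration:

```python
import numpy as np

V_RMS, FREQ = 230.0, 50.0     # assumed AC mains
R_BULB = 50.0                 # assumed bulb filament resistance, ohms
OMEGA = 2 * np.pi * FREQ

def bulb_power(R: float, C: float) -> float:
    """Power dissipated in the bulb (a proxy for brightness) in a series
    capacitor-resistor-bulb circuit on AC mains."""
    Xc = 1.0 / (OMEGA * C)                  # capacitive reactance
    I = V_RMS / np.hypot(R + R_BULB, Xc)    # rms current in the loop
    return I**2 * R_BULB

print(f"baseline:       {bulb_power(100.0, 20e-6):.0f} W")
print(f"(i)  C doubled: {bulb_power(100.0, 40e-6):.0f} W  (brighter)")
print(f"(ii) R doubled: {bulb_power(200.0, 20e-6):.0f} W  (dimmer)")
```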
Show variation of resistivity of copper as a function of temperature in a graph.
The resistivity of copper increases with temperature. The graph of resistivity versus temperature (not reproduced here) is a rising curve, nonlinear - roughly parabolic - in shape.
Out of the two magnetic materials, 'A' has relative permeability slightly greater than unity while 'B' has less than unity. Identify the nature of the materials 'A' and 'B'. Will their susceptibilities be positive or negative?
A is a paramagnetic material, because its relative permeability is slightly greater than unity and its susceptibility is positive. For a paramagnetic material, the relative permeability lies between 1 < μr < 1 + ε and the susceptibility lies between 0 < χ < ε, where ε is a small positive number.
For a diamagnetic material, the relative permeability lies between 0 ≤ μr < 1 and its susceptibility lies between −1< χ< 0.
Therefore, 'B' is a diamagnetic material and its susceptibility is negative.
This is because its relative permeability is less than unity.
A documentary exploring the impact of racism on a global scale, part of the season of programmes marking the 200th anniversary of the abolition of slavery. Beginning by assessing the implications of the relationship between Europe, Africa and the Americas in the 15th century, it considers how racist ideas and practices developed in key religious and secular institutions, and how they showed up in the writings of philosophers from Aristotle to Immanuel Kant.
The second episode looks at scientific racism, an ideology invented during the 19th century that drew on now-discredited practices such as phrenology and provided an ideological justification for racism and slavery. These theories ultimately led to eugenics and the Nazi racial policies of the master race. Contains some upsetting scenes.
The third and final episode of Racism: A History examines the impact of racism in the 20th century. By 1900, European colonial expansion had reached deep into the heart of Africa. Under the rule of King Leopold II, the Congo Free State (later the Belgian Congo) was turned into a vast rubber plantation. Men, women and children who failed to gather their latex quotas would have their limbs cut off. The country became the scene of one of the century’s greatest racial genocides, as an estimated 10 million Africans perished under colonial rule. Contains scenes which some viewers may find disturbing.
Episodes included: 1. The Color of Money, 2. Fatal Impact, and 3. A Savage Legacy.
On June 12, 1987, in one of his most famous Cold War speeches, President Ronald Reagan challenges Soviet Leader Mikhail Gorbachev to “tear down” the Berlin Wall, a symbol of the repressive Communist era in a divided Germany.
In 1945, following Germany’s defeat in World War II, the nation’s capital, Berlin, was divided into four sections, with the Americans, British and French controlling the western region and the Soviets gaining power in the eastern region. In May 1949, the three western sections came together as the Federal Republic of Germany (West Germany), with the German Democratic Republic (East Germany) being established in October of that same year. In 1952, the border between the two countries was closed and by the following year East Germans were prosecuted if they left their country without permission. In August 1961, the Berlin Wall was erected by the East German government to prevent its citizens from escaping to the West. Between 1949 and the wall’s inception, it’s estimated that over 2.5 million East Germans fled to the West in search of a less repressive life.
With the wall as a backdrop, President Reagan declared to a West Berlin crowd in 1987, “There is one sign the Soviets can make that would be unmistakable, that would advance dramatically the cause of freedom and peace.” He then called upon his Soviet counterpart: “Secretary General Gorbachev, if you seek peace–if you seek prosperity for the Soviet Union and Eastern Europe–if you seek liberalization: come here, to this gate. Mr. Gorbachev, open this gate. Mr. Gorbachev, tear down this wall.” Reagan then went on to ask Gorbachev to undertake serious arms reduction talks with the United States.
Most listeners at the time viewed Reagan’s speech as a dramatic appeal to Gorbachev to renew negotiations on nuclear arms reductions. It was also a reminder that despite the Soviet leader’s public statements about a new relationship with the West, the U.S. wanted to see action taken to lessen Cold War tensions. Happily for Berliners, though, the speech also foreshadowed events to come: Two years later, on November 9, 1989, joyful East and West Germans did break down the infamous barrier between East and West Berlin. Germany was officially reunited on October 3, 1990.
Gorbachev, who had been in office since 1985, stepped down from his post as Soviet leader in 1991. Reagan, who served two terms as president, from 1981 to 1989, died on June 5, 2004, at age 93.
Herd immunity to Covid-19 could be achieved with fewer people being infected than previously estimated, according to new research.
Mathematicians from the University of Nottingham and University of Stockholm devised a simple model categorising people into groups reflecting age and social activity level. When differences in age and social activity are incorporated in the model, the herd immunity level reduces from 60% to 43%. The figure of 43% should be interpreted as an illustration rather than an exact value or even a best estimate. The research has been published today in Science.
Herd immunity happens when so many people in a community become immune to an infectious disease that it stops the disease from spreading. This happens through people contracting the disease and building up natural immunity, and through people receiving a vaccine. When a large percentage of the population becomes immune to a disease, the spread of that disease slows down or stops and the chain of transmission is broken.
This research takes a new mathematical approach to estimating the herd immunity figure for a population to an infectious disease, such as the current COVID-19 pandemic. The herd immunity level is defined as the fraction of the population that must become immune for disease spreading to decline and stop when all preventive measures, such as social distancing, are lifted. For COVID-19 it is often stated that this is around 60%, a figure derived from the fraction of the population that must be vaccinated (in advance of an epidemic) to prevent a large outbreak.
The figure of 60% assumes that each individual in the population is equally likely to be vaccinated, and hence immune. However, that is not the case if immunity arises as a result of disease spreading in a population consisting of people with many different behaviours.
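As a hedged illustration of that point, the sketch below runs a toy two-group SIR model under proportionate mixing (our own parameter choices, not the paper's full age-and-activity model): half the population is three times as socially active as the other half, and the run reports the infected fraction at which the effective reproduction number first falls below 1.

```python
import numpy as np

p = np.array([0.5, 0.5])   # population fractions of the two groups
a = np.array([0.5, 1.5])   # relative activity levels (assumed: 3x difference)
R0 = 2.5                   # reproduction number often quoted for COVID-19

# Under proportionate mixing, R0 = c * E[a^2] / E[a]; calibrate contact scale c.
mean_a, mean_a2 = (p * a).sum(), (p * a**2).sum()
c = R0 * mean_a / mean_a2

gamma, dt = 1.0, 0.01          # recovery rate (time unit = infectious period)
S, I = p.copy(), 1e-5 * p      # seed a tiny epidemic in each group
while True:
    lam = c * a * (a * I).sum() / mean_a   # per-capita force of infection
    new = lam * S * dt
    S -= new
    I += new - gamma * I * dt
    if c * (a**2 * S).sum() / mean_a <= 1.0:   # effective R in current state
        break

print(f"disease-induced threshold: {1 - S.sum():.0%} infected")  # ~46% here
print(f"classical threshold 1 - 1/R0: {1 - 1/R0:.0%}")           # 60%
```

With these toy numbers, the disease-induced threshold comes out noticeably below the classical 60%, because the more active group is depleted of susceptibles first.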
Professor Frank Ball from the University of Nottingham participated in the research and explains: “By taking this new mathematical approach to estimating the level for herd immunity to be achieved we found it could potentially be reduced to 43% and that this reduction is mainly due to activity level rather than age structure. The more socially active individuals are then the more likely they are to get infected than less socially active ones, and they are also more likely to infect people if they become infected. Consequently, the herd immunity level is lower when immunity is caused by disease spreading than when immunity comes from vaccination.
Our findings have potential consequences for the current COVID-19 pandemic and the release of lockdown and suggests that individual variation (e.g. in activity level) is an important feature to include in models that guide policy.”
Design and Technology
Our Design and Technology Curriculum aims to enable our pupils to design and make products that solve real and relevant problems within a variety of contexts, using creativity and imagination. They will acquire a broad range of subject knowledge and draw on disciplines such as mathematics, science, engineering, computing and art. Pupils will learn how to take risks, becoming resourceful, innovative, enterprising and capable citizens. Through the evaluation of past and present design and technology, they will develop a critical understanding of its impact on daily life and the wider world.
We aspire to develop the creative, technical and practical expertise that our pupils will need to perform everyday tasks confidently and to participate successfully in an increasingly technological world. Pupils will build and apply a repertoire of knowledge, understanding and skills in order to design and make high-quality prototypes and products for a wide range of users. They will be able to critique, evaluate and test their ideas and products and the work of others. They will be taught to understand and apply the principles of nutrition and learn how to cook.
Skills will continue to be taught creatively through links to other curriculum areas and dedicated design technology subject weeks.
During Key Stage One pupils will be taught:
• to design purposeful, functional, appealing products for themselves and other users based on design criteria
• to generate, develop, model and communicate their ideas through talking, drawing, templates, mock-ups and, where appropriate, information and communication technology.
• to select from and use a range of tools and equipment to perform practical tasks.
• to select from and use a wide range of materials and components, including construction materials, textiles and ingredients, according to their characteristics.
• to explore and evaluate a range of existing products.
• to evaluate their ideas and products against design criteria.
• to build structures, exploring how they can be made stronger, stiffer and more stable.
• to explore and use mechanisms in their products.
During Key Stage Two pupils will be taught:
• to use research and develop design criteria to inform the design of innovative, functional, appealing products that are fit for purpose, aimed at particular individuals or groups.
• to generate, develop, model and communicate their ideas through discussion, annotated sketches, cross-sectional and exploded diagrams, prototypes, pattern pieces and computer-aided design.
• to select from and use a wider range of tools and equipment to perform practical tasks accurately.
• to select from and use a wider range of materials and components, including construction materials, textiles and ingredients, according to their functional properties and aesthetic qualities.
• to investigate and analyse a range of existing products.
• to evaluate their ideas and products against their own design criteria and consider the views of others to improve their work.
• to understand how key events and individuals in design and technology have helped shape the world.
• to apply their understanding of how to strengthen, stiffen and reinforce more complex structures.
• to understand and use mechanical systems in their products.
• to understand and use electrical systems in their products.
• to apply their understanding of computing to program, monitor and control their products.
Cooking and nutrition
As part of their work with food, pupils should be taught how to cook and apply the principles of nutrition and healthy eating. Instilling a love of cooking in pupils will also open a door to one of the great expressions of human creativity. Learning how to cook is a crucial life skill that enables pupils to feed themselves and others affordably and well, now and in later life.
During Key Stage One pupils will be taught:
• to use the basic principles of a healthy and varied diet to prepare dishes.
• to understand where food comes from.
During Key Stage Two pupils will be taught:
• to understand and apply the principles of a healthy and varied diet.
• to prepare and cook a variety of predominantly savoury dishes using a range of cooking techniques.
• to understand seasonality, and know where and how a variety of ingredients are grown, reared, caught and processed.
In mathematics, a prime number is a natural number greater than one that is divisible by only two natural numbers: 1 and itself. Prime numbers are a subset of the natural numbers. Any other natural number greater than one is called composite. There are infinitely many primes.
Prime numbers are important for various reasons:
1. They are important because they play a role in many mathematical proofs and help differentiate between other types of numbers.
2. We can use prime numbers to factorize any composite natural number into its prime factors (see the sketch after this list).
3. Prime numbers are also used in cryptography and other security methods.
4. Prime numbers can be used as a method of generating random numbers.
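As a hedged sketch of points 1 and 2 above (the function names are ours), here is a simple trial-division primality test and a prime factorization in Python:

```python
def is_prime(n: int) -> bool:
    """True if n is divisible by exactly two natural numbers: 1 and itself."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:        # a composite n must have a divisor <= sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

def prime_factors(n: int) -> list:
    """Factorize a natural number greater than 1 into its prime factors."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever remains is itself prime
        factors.append(n)
    return factors

print(is_prime(97))          # -> True
print(prime_factors(360))    # -> [2, 2, 2, 3, 3, 5]
```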
Data is stored on magnetic media, such as hard drives, floppy disks, and magnetic tape, by making very small areas called magnetic domains change their magnetic alignment to lie along the direction of an applied magnetic field. This occurs in much the same way a compass needle points in the direction of the Earth’s magnetic field. This process is different from the secure data erasure process.
Degaussing, commonly called erasure, leaves the domains in random patterns with no preference to orientation, thereby rendering previous data unrecoverable. There are some domains whose magnetic alignment is not randomized after degaussing. The information these domains represent is commonly called magnetic remanence or remanent magnetization. Proper degaussing will ensure there is insufficient magnetic remanence to reconstruct the data.
Media degaussing can be done only with specialized hardware tools called degaussers.
Teachers and education establishments are fast realizing the benefits digital technology can bring to the classroom.
In a typically traditional industry that utilizes time-honored methods and practices to help students learn, educational technology is now helping drive change and digital transformation that can have a profound effect on children’s learning rates, success levels and exam scores.
Technology in the education space has shown significant progress over the last few years. In the US, for example, public schools now provide at least one computer for every five students, while schools spend over $3 billion per year on digital content. In addition, 2015 was the first year that more state standardized tests were administered via technology than with traditional paper and pencil.
On a global scale, technology has empowered people to be able to learn in ways that would not have been possible without digital tools. For example, using massive open online courses (MOOCs) to learn and gain qualifications would simply not have been an option ten years ago.
Technologies that are driving change
Digital technologies and approaches are having major positive impacts across the full spectrum of disciplines in the education arena.
- Augmented Reality/Virtual Reality/Mixed Reality: Digital technology is helping make learning collaborative and interactive, and by using tech that students encounter in their daily personal lives, it is also creating immersive lessons that are fun and engaging. AR and VR let teachers create more immersive learning experiences and can foster increased participation in class from a technology-powered generation.
- Classroom sets of devices: This is a philosophical and organizational change that is seeing schools move away from the Bring Your Own Device (BYOD) approach and central school technology labs to having classroom sets of computers or iPads and laptops for each individual student. The benefits of this include helping young students engage with academic subjects, making them more eager to learn, and also teaching students digital literacy and the 21st century skills they will need one day when they join the workforce.
- Redesigned learning spaces: In the grown-up workspace, companies have focused on creating more collaborative, digital-based environments and reaped the benefits. Today, school classrooms are also realizing the potential. No more rows of desks and chairs all pointing towards teacher - today’s classrooms are collaborative spaces designed to facilitate student learning and utilize digital tech to enhance the educational experience.
- Artificial Intelligence: AI can have a significant impact on taking learning to the next level. Australia’s Deakin University is using IBM Watson to create a virtual student advisory service which is available 24 hours a day. AI in education helps enable the personalizing of learning, and can be used to facilitate one-on-one tutoring with the use of Intelligent Tutoring Systems. Progressive educational establishments are realizing that digital technology is not there to replace teachers, but to complement them.
- Personalized learning: As with so many areas where digital has impacted, technology is now enabling greater personalization and customization than ever. Blended learning, adaptive learning and other techniques are changing the educational landscape. Adaptive learning is an area where technology could drive substantial change. Adaptive learning immerses students in modular learning environments where every decision they make is captured and analyzed in the context of learning theory and then used to guide and enhance the student’s learning experiences.
- Gamification: Learning through play has long been a popular educational concept, but that was before digital technology was around. Millennials are habitual gamers and gamification in the classroom can be a useful instructional tool. Gaming technology can help make learning complex subject matter a more exciting and interactive experience for the student.
Start them young
Encouraging children to use digital technology in the classroom is essential in the connected world in which we live. It prepares them for a future where they will use technology in their everyday lives, as well as at work.
Orange, for example, recently demonstrated its commitment to helping young people better understand technology with the launch of its #SuperCoders initiative for 9-13 year-olds. The initiative intends to make children more aware of digital culture by offering them an easy and fun introduction to computer coding.
Today’s students are tomorrow’s digital leaders. As more schools make the move to digital tools and techniques, we will see the power technology can bring to the next generation classroom and beyond.
I’ve been writing about technology for around 15 years and today focus mainly on all things telecoms - next generation networks, mobile, cloud computing and plenty more. For Futurity Media I am based in the Asia-Pacific region and keep a close eye on all things tech happening in that exciting part of the world.
The USS Macon ZRS-5 and its sister ship (USS Akron ZRS-4) were the largest helium-filled rigid airships ever built, and the largest airships built in the United States. The Hindenburg LZ-129 was 19 feet 10 inches longer and is the largest airship ever built, and the largest hydrogen-filled airship ever built. Because of the size of the Macon and the Akron, German engineers came from Germany to assist with the design and construction of the sister ships.
The Macon was built as a flying aircraft carrier, carrying 5 F9C Sparrowhawk biplanes. The Macon was commissioned on 23 June 1933, and docked its first airplane on 6 July 1933. The planes were stored inside the hull of the airship.
On 24 June 1933, the Macon left Lakehurst, New Jersey, the site of its construction, for its new base near San Francisco, California. The Macon developed the procedures for using airplanes from an airship for scouting purposes. While the airplanes were onboard the Macon, the landing gear was removed and replaced with fuel tanks, increasing the airplane’s range by 30%.
The Macon had a stellar reputation, with an outstanding performance record. Months before its fatal crash, the airship was forced to dump 9,000 pounds of ballast and 7,000 pounds of fuel to clear mountains in Arizona. The Macon had to fly up to 6,000 feet, more than twice its maximum flight ceiling of 2,800 feet. Even after dumping fuel and ballast, the airship was still 15,000 pounds heavy and had to be flown at full speed to maintain altitude. As the airship approached a mountain pass near Van Horn, Texas, it encountered severe turbulence and a rapid drop in altitude, damaging the rear of the airship. Fast action by Chief Boatswain’s Mate Robert Davis saved the Macon. Permanent repairs were scheduled for the Macon’s next overhaul, an overhaul that would never happen.
Then, on 12 February 1935, while returning to its base, the Macon ran into a storm off Point Sur, California. The Macon was caught in a wind shear and the repaired tail section failed again, puncturing a gas cell. The crew performed a massive ballast dump to keep the Macon out of the sea. The Macon rose to 4,850 feet and then slowly descended as it continued to lose helium, eventually sinking into the Pacific Ocean. Two of the 76 crewmen perished. Radioman First Class Ernest Dailey jumped into the sea while the airship was still too high. Mess Attendant First Class Florentino Edquiba drowned while swimming back into the Macon to retrieve personal belongings. Commander Wiley, the airship’s commanding officer, was later decorated for attempting to swim to Edquiba’s aid.
It was later determined that if the Macon had not gone above its 2,800 foot ceiling it could have survived the structural failure of the stern section of the airship and could have returned to its base. Also lost with the Macon were the four aircraft it was carrying at the time. The Macon had completed 50 flights since it was commissioned, and was stricken from the register of Navy ships on 26 February 1935. All future airships of the Navy would be non-rigid blimps.
In 1991, the Macon was found. The debris field was explored with sonar, still photography and video; some artifacts were also recovered. In 2005, a side-scan sonar survey was made of the wreck site. In 2006, another expedition went to the wreck site, this time with high-definition video as well. More than 10,000 images of the debris field were taken. The exact location of the Macon, within the Monterey Bay National Marine Sanctuary, remains a secret. However, it is known that the debris field is more than 1,500 feet deep.
The United States used safe, inert helium gas for its rigid airships. At that time, the United States had almost all of the known reserves of helium and refused to allow its export to Nazi Germany. Germany was thus forced to use highly flammable hydrogen gas for all of its rigid airships. Yet Germany had a much better safety record with its rigid airships. By the time the United States decided to enter the rigid airship industry, Germany had been building rigid airships for more than three decades. Germany used rigid airships to great effect during World War One, greatly advancing German expertise in rigid airship design, construction, and operation. This difference in experience is undoubtedly one of the major factors behind the two countries’ divergent safety records.
The principle of facilitation in organizing, engagement, and equity work refers to the practice of structuring and guiding dialogues, meetings, events, decision-making processes, and other activities using intentional strategies that help groups converse and collaborate more respectfully and productively. While there are many different styles and philosophies of facilitation, and numerous books, articles, and guides have been written on the topic, the type of facilitation most commonly used in education organizing, engagement, and equity work is grounded in the practice of inclusivity, fairness, mutual respect, and democratic decision-making.
Generally speaking, facilitation is used to create a forum for groups of people to express their ideas, concerns, preferences, or priorities, while also listening to and considering the perspectives of others. Facilitators will support group work in organizations and communities by providing rules and structure, framing topics and issues, posing questions, keeping track of time, and recording the main ideas or outcomes that emerge from a dialogue or process. When needed, facilitators may take a more active role to keep the discussion focused and moving forward, or they may intervene when problematic behaviors derail a discussion or compromise the emotional or physical safety of participants.
Facilitators provide structure, direction, and guidance to a dialogue or process, but they do not manage people, issue commands, control discussions, regulate opinions, or determine outcomes. Although facilitators are actively involved in group discussions and deliberations—they may ask challenging questions, provide background information, redirect unproductive arguments, request that speakers clarify unclear statements, and contribute in other ways—they are not considered “participants.” A facilitator primarily attends to process and behaviors, not discussion topics or decision-making outcomes—although facilitators may work with leaders, organizers, and practitioners in an organization or community to design and organize an event, meeting, dialogue, or decision-making process.
Structured and well-executed facilitation can help organizations, teams, and community groups avoid common social tendencies, behaviors, and styles of interacting that can undermine productive discussion and collaboration. For example, facilitators can help individuals with different values, beliefs, or cultural backgrounds listen to one another in constructive ways—rather than defaulting to argumentation or stereotyping—which can improve mutual understanding and appreciation across difference. Facilitators may also use a variety of techniques to challenge common social biases, conventions, or inequitable dynamics that may cause groups to devalue some perspectives, and over-value others, due to factors such as class, race, ethnicity, gender, age, ability, education, language proficiency, or organizational hierarchies.
While facilitators may help groups resolve difficult problems or contentious issues, and facilitators will call out disruptive, contentious, hurtful, or hostile comments and behaviors, facilitation is not a dispute-mediation or conflict-resolution process. Facilitators typically help groups uncover and articulate areas of both agreement and disagreement, though facilitated discussions and decision-making processes may or may not achieve consensus, compromise, or full participant support for the ultimate outcome or decision.
In organizing, engagement, and equity work, the outcome of a dialogue and decision-making process typically emerges from the process—that is, the process is not manipulated to arrive at an outcome that’s been determined in advance by those in positions of power or authority. By applying rules to everyone equally, treating all participants equitably, and modeling, demonstrating, and explaining the behaviors expected of all participants, facilitators help groups converse and collaborate more productively so that the eventual outcome—whatever it might be—results from a process that participants feel was inclusive, fair, respectful, and democratic.
To learn more about how principles can be applied in education organizing, engagement, and equity work, see HOW PRINCIPLES WORK →
This section describes a selection of representative facilitation strategies that may be used in education organizing, engagement, and equity work:
- Establishing a welcoming, inclusive, and safe environment for participants
- Developing group agreements
- Equalizing power dynamics among participants
- Being intentional and strategic about diversity—or attending to differences that make a difference
- Practicing intentional impartiality
- Providing useful information and context
- Guiding the discussion or process
- Building facilitation capacity in an organization or community
1. Establishing a welcoming, inclusive, and safe environment for participants
Facilitation is frequently used to create more welcoming, inclusive, and non-threatening environments in which community participants feel more confident, relaxed, or comfortable being vulnerable, speaking up, sharing their ideas, or engaging in potentially contentious or emotionally difficult conversations.
- Attending to physical comfort creates conditions that will feel more welcoming, inclusive, or safe for participants. The availability of food, beverages, comfortable seating, natural light, nearby restrooms, and other amenities can alleviate common symptoms of discomfort, whether it’s irritability due to hunger, anxiousness about restroom-related needs, or aches and soreness caused by sitting in uncomfortable chairs for extended periods of time. Physical discomforts and unpleasant spaces can cause people to be more distracted, annoyed, or short-tempered than they would be if they were nourished, hydrated, and at ease.
- Because certain locations may have negative associations for some community members, selecting a neutral, inviting, or familiar location for a meeting, discussion, or event is often essential to creating a context in which participants will feel welcomed, included, and safe. For example, school facilities may be intimidating environments for some community members, such as families who are new to the country, parents who have negative memories of their time in school, or students who want to openly discuss negative experiences they might have had with administrators or teachers. In these cases, a community center, library, or other neutral space will likely feel more secure and less intimidating for participants. In addition, facilitators may design and “co-facilitate” a process with representatives from different community groups as a way to build cultural sensitivity into the discussion, while also modeling inclusivity, power-sharing, and the value of diverse perspectives.
- Because a facilitator’s comments and behaviors can “set the tone” for a group interaction, facilitators often intentionally model the kinds of constructive and respectful behaviors they want participants to engage in. For example, facilitators can demonstrate warmth, openness, curiosity, and a non-judgmental attitude toward all participants. Facilitators may also monitor emotional cues and responses for signs that participants feel upset, anxious, threatened, or otherwise uncomfortable or distressed. In these cases, facilitators may intervene to reestablish safety in a variety of ways, such as by calling a break, pulling a participant aside for a one-on-one conversation, or politely but firmly asking certain participants to refrain from making specific comments or engaging in intimidating behaviors.
- Establishing clear expectations at the outset of a dialogue or process can also help participants feel at ease. When expectations depart significantly from the actual experience, participants are more likely to experience frustration and other negative reactions that make them less open with other participants or less receptive to the experience. Participants may also be anxious about the conversation or process. For example, many people are uncomfortable discussing race in group settings or public forums, and emotionally difficult conversations about racism or privilege can cause them to be apprehensive, worried, irritable, defensive, or even combative. When participants know what they are about to participate in, what the purpose or topic of the discussion will be, and how the process will generally work, they are more likely to feel at ease. Because people tend to feel more relaxed and open when they can visualize and prepare for the experience they’re about to have, facilitators can, for example, describe the purpose of the event, how it was planned and organized, how the conversation will unfold, the kinds of emotions people typically experience, the importance of confidentiality, or what should or should not be shared outside the group.
- Facilitators are usually trained and prepared to address unproductive conflicts that might arise or behaviors that are disruptive or intimidating. Problematic social behavior can be caused by a wide variety of factors, including a distrust of the facilitators, organizers, or hosts due to negative experiences participants may have had in the past. In addition, some participants may attend an event with the explicit intention of derailing the discussion so it doesn’t arrive at conclusions they may object to; bullying personalities may think they know best and try to forcefully impose their ideas on the group; or self-centered participants may try to make the conversation about them and their personal concerns.
- Facilitators often establish rules that explicitly prohibit certain problematic behaviors, and developing protocols for managing difficult, disruptive, or threatening individuals is often part of the planning process for a community dialogue or process. Experienced facilitators will also use appropriate and proportionate responses—ranging from friendly reminders directed at the whole group to more pointed requests directed at individuals—including asking everyone in a group to “self-monitor” their own behaviors, and the behaviors of other participants, to ensure that exchanges remain respectful and no one in the group feels threatened or silenced.
→ For a related discussion, see the Accessibility Principle of organizing, engagement, and equity
2. Developing group agreements
For structured events, activities, and dialogues, facilitators typically establish group agreements—sometimes called “ground rules” or “group norms,” among other terms—before a discussion or process gets underway. If facilitators want to create an inclusive, respectful, equitable, and democratic space, establishing group agreements is widely viewed as an essential strategy, particularly when a discussion is likely to become contentious, when disruptions or bullying behaviors are anticipated, when authority figures may attempt to control the agenda or silence certain viewpoints, or when participants represent a range of socioeconomic backgrounds, cultural identities, or political beliefs.
- Group agreements function much like the rules used in games and sports: participants agree to follow the same set of rules, which helps them understand the terms of an interaction, activity, or discussion. Group agreements describe the specific behaviors that will be expected of participants, and they help participants understand how a process will proceed before it begins. Establishing group agreements can significantly improve the quality and productiveness of a dialogue or process, while also decreasing the likelihood of misunderstanding or rudeness—particularly when interactions are likely to become contentious or discussion topics are controversial.
- Group agreements perform a few important functions: (1) group agreements establish a foundation of shared agreement at the outset of a discussion or activity that participants can build on during subsequent interactions; (2) group agreements explicitly bar certain negative behaviors from a group interaction and encourage more constructive behaviors; and (3) group agreements allow facilitators and participants to enforce the agreed-upon rules by reminding others of the agreements they made at the outset of a discussion or process.
- Group agreements are typically established in one of three ways: (1) facilitators will propose a set of agreements, usually by incorporating group agreements that have been effective in other contexts or widely used by professional facilitators, (2) participants co-develop group agreements using a democratic process proposed by facilitators, or (3) facilitators propose a set of group agreements but give participants the opportunity to modify or add to the rules using a democratic process. All three approaches can be effective, and facilitators typically choose an approach based on time constraints, the goals of a process, or the needs of a particular group.
- Participants are usually willing to accept a set of proposed group agreements if they seem fair and reasonable to them, and if facilitators explain why the agreements are important or mention that they are standard rules that have been widely used in other organizations or communities. It is essential that facilitators explain the rationale for using group agreements and why certain agreements are important for the discussion or activity that follows. When additional agreements are suggested by participants, it can be helpful to the group if those who are proposing the new agreement also share their thinking and rationale.
- After participants commit to following the group agreements, facilitators usually make sure they remain prominently displayed for the duration of the dialogue or activity. The agreements can be written on poster paper and handouts or they can be projected on a screen. Visible agreements serve as reminders for participants, and they allow facilitators to reference them more easily when needed. Group agreements also educate participants about the specific behaviors that are expected of them, which becomes particularly valuable if a discussion or interaction becomes disrespectful. In these cases, group agreements provide a non-threatening method for naming and correcting negative group behaviors. When rules have not been proactively established at the outset of a discussion or process, for example, participants may be more likely to get defensive or hostile when their behaviors are called out and challenged.
- Facilitators may utilize a variety of facilitative techniques to ensure that participants follow group agreements, including politely pointing out that an agreement is being broken or directing the group’s attention to the agreed-upon rules when problematic behaviors threaten to disrupt a discussion. Facilitators may also need to call out and challenge disrespectful behavior, harmful language, or threatening mannerisms that might intimidate or silence some participants. In addition to calling out transgressions, facilitators may propose that participants snap their fingers, or use some other signal, if they believe someone has broken an agreement.
→ For a related discussion, see the Dialogue Principle of organizing, engagement, and equity
Discussion: Insensitive Group Agreements
In some cases, facilitators will propose ground rules that may be insensitive or counterproductive in certain circumstances. Agreements such as “assume good intentions” or “trust one another” are two examples. While such rules may be well-intentioned, participants in some communities and organizations may be unable to assume positive intentions, or easily bring trust into a conversation with strangers, due to past personal experiences with bigotry, bullying, discrimination, or violence. For example, “assume positive intentions” is not a productive group agreement if staff members routinely experience workplace bias or discrimination because of their gender identity, race, or sexuality. When establishing group agreements, facilitators should remain mindful of history, identity, culture, and other factors that may influence how participants experience a dialogue, process, or other activity.
3. Equalizing power dynamics among participants
In organizing, engagement, and equity work, facilitators typically take intentional steps to equalize power dynamics in a dialogue or process, and a variety of facilitation strategies will be used to include, recognize, or affirm the voice and influence of community members and groups, especially those who have been historically underrepresented, marginalized, silenced, or excluded.
- Facilitators can help to equalize power dynamics in a variety of ways, such as by applying group agreements to everyone equally, regardless of their position or status in an organization or community; by creating space in a discussion for less vocal or less confident participants to speak up—for example, by asking talkative participants to speak less; or by structuring a conversation so that everyone in a group is given the same amount of time to express their views. Because group agreements establish foundational behavioral expectations for a discussion or process, the agreements often explicitly address issues of equity and power, which is also why facilitators often use a transparent democratic process to co-develop group agreements with participants. The first step toward equalizing unequal power dynamics typically occurs when organizational leaders, authority figures, public officials, and others with power or influence publicly agree to follow the same rules as everyone else.
- Facilitators typically avoid auditorium-style seating that discourages face-to-face conversation and features such as elevated stages, microphones, and podiums that are associated with institutions of unequal power, especially in contexts in which unequal power and authority may have been abused. Instead, facilitators may arrange seats in circles or u-shapes, for example, so that participants are looking at one another. Room arrangements that encourage participants to see one another as equals, and that foster a sense of togetherness and connection, are typically used in organizing, engagement, and equity work—although adjustments and accommodations may need to be made for personal boundaries or cultural identities that would make particular room and seating arrangements a source of anxiety or stress.
- Equalizing power dynamics can also help community members have more constructive conversations about potentially divisive issues. In recent years, for example, formal public meetings—such as city council or school board meetings—have become increasingly contentious and adversarial in many communities, and activities such as community dialogues offer an alternative space for the respectful exchange of ideas and the exploration of constructive community solutions. Assuring “safety” in a discussion or process can take many forms in organizing, engagement, and equity work, and attending to real and potential misuses of power or authority is an important dimension of safety. For example, facilitators may create “space” for those who have less power in a community, and whose concerns have historically been disregarded or disrespected, by giving equal time, legitimacy, and affirmation to their voices and priorities in a decision-making process.
- Facilitators may also monitor authority figures, and others in positions of power or influence, to ensure they do not dominate discussions, force their viewpoints on others, hijack a process, or otherwise intimidate, manipulate, or coerce participants. In many organizations and communities, there are longstanding patterns of cultural deference toward those holding positions of power, authority, influence, and status, and employees, students, families, and other community members may be hesitant to speak up for fear of public recrimination or professional retaliation. Because a decision-making process, whether it’s an informal staff meeting or formal committee proceeding, can be controlled or co-opted—either intentionally or unintentionally—by powerful figures to protect their interests, validate their opinions, advance their personal agendas, or secure apparent group support for a decision they already made, facilitation can be used to hold power in check and create forums for a more equitable exchange of ideas and viewpoints.
4. Being intentional and strategic about diversity—or attending to differences that make a difference
In organizing, engagement, and equity work, achieving a diversity of community representation is typically a central value and an explicit goal. While the term “diversity” is most often associated with race and ethnicity, diversity can encompass the many varied cultural backgrounds, identities, and viewpoints represented in a given organization or community, including diversity of gender, age, ability, socioeconomic status, educational attainment, professional role, or language ability, among other factors. Diversity also extends to less visible internal characteristics, such as diversity of experiences, perspectives, ideas, ideologies, or beliefs.
- In organizing, engagement, and equity work grounded in the practice of inclusion, equity, and democratic decision-making, diverse community representation is often used to challenge or overcome historical patterns of exclusion, inequity, and biased decision-making. For example, diversity of representation in a discussion or process can ensure that formerly ignored, dismissed, or silenced concerns are expressed, heard, listened to, and prioritized, or that community members who have historically been excluded from decision-making are actively involved and given meaningful leadership roles. When diversity of representation is overlooked or neglected, the conditions of any given discussion or process are more likely to result in biased decisions or outcomes that favor the perspective, concerns, and priorities of those who were represented or those who hold positions of power and authority.
- Facilitators also prioritize diversity because it can improve both a process and its outcomes. When diverse perspectives are involved in a discussion of problems affecting an organization or community, for example, the process is more likely to produce a wider range of insights and ideas that are more creative, more innovative, and more likely to result in effective proposals and solutions. In this case, facilitators may make time for those who are most directly or severely impacted by a problem to share their stories and experiences, or they may develop activities that ask participants to consider well-known problems in more imaginative and unconventional ways.
- In certain situations, some differences may be more relevant or important than others, and facilitators may use a wide variety of strategies to ensure that the perspectives of diverse community members are heard or that certain perspectives are amplified. For example, the age of the participants involved in a discussion can have a significant influence on the process and its outcomes. Older residents who have lived in a community for a long time, for example, may recall specific stories that illuminate the origins of a given problem, or they may remember past attempts to address a problem that ultimately failed—and the specific reasons why those attempts failed. Or the perspectives of students and young adults may be unintentionally excluded or silenced in schools, even when adults are discussing ways to address problems that adversely affect young people. In these cases, a facilitator might start a conversation by asking the group’s oldest members to share their ideas first, or they may push back if adults start talking over younger participants or treating their perspectives dismissively.
- Facilitators may also monitor and attend to visible differences that affect the dynamics of group discussion or process, such as skin color, language proficiency, religious garments, unconventional hairstyles, visible tattoos, or the condition of someone’s clothing. If facilitators intentionally model full acceptance of all forms of difference in a group, participants are more likely to display acceptance toward those who may look or act differently than them. For example, facilitators may make accommodations for participants who face language-related challenges, whether it’s due to hearing impairments, differences in fluency or dialect, or a lack of exposure to certain words or concepts. In these cases, facilitators may describe important terms in accessible language, repeat comments to make sure everyone heard what was said, or ask participants to take a few minutes to jot down their thoughts before expressing them verbally.
5. Practicing intentional impartiality
When facilitating a discussion or decision-making process, the intentional practice of impartiality can help to create conditions for more respectful interactions, more effective problem-solving, and more productive group collaboration, particularly among parties that are mutually distrustful or in communities experiencing tensions and conflicts. For example, facilitators may refrain from taking sides in a disagreement, expressing ideologically biased viewpoints, or showing favoritism toward certain ideas, individuals, or groups.
- In politically, ideologically, or culturally divided contexts, community members may be unwilling to even consider participating in an organizing, engagement, or equity process due to suspicion and distrust stemming from negative past experiences. For example, families may be suspicious of any event organized by a school they believe has mistreated them or their children, or community groups that have publicly fought over an issue may distrust the individuals and groups they opposed. The promise of impartial facilitation can help to get wary community members “to the table” by offering a context in which mutually distrustful parties are more likely to feel that they will be treated fairly or that their viewpoints will not be criticized, judged, or disparaged.
- To demonstrate impartiality, communities and organizations may use facilitators who have not taken a public position on a controversial topic, who are trusted by different constituencies in a community, or who are unaffiliated with the community or organization—i.e., they are not residents, employees, or paid representatives. In some cases, the perception and anticipation of an impartial process will be as important as the practice of impartial facilitation, given that community members may decline to participate in a process if they believe it will be biased against them.
- At the outset of a process or dialogue, facilitators who are practicing intentional impartiality may share their name and describe their role, but leave out other personal information that might suggest they are partial toward a particular topic, perspective, idea, or group. During group discussions, facilitators may be careful not to show bias for or against any participants or the beliefs they express, which requires facilitators to practice self-awareness and monitor their own comments and behaviors to ensure they don’t inadvertently communicate partiality. For example, behaviors such as leaning toward or away from certain participants, directing follow-up questions to some people while ignoring others, or smiling or nodding in response to some comments but not others could all suggest partiality for certain participants over others.
- Facilitators can also practice what Martin Carcasson and Leah Sprain have called passionate impartiality. Passionately impartial facilitators are “passionate about their community, democracy, and solving problems,” for example, but they are “committed to serving a primarily impartial, process-focused role” to improve communication, engagement, and collaboration in group settings. In dialogues and engagement work, the intentional practice of passionate impartiality can help to address what Carcasson and colleagues call the “neutrality challenge,” which refers to the difficulty of maintaining “politically neutral processes while also working for more equitable outcomes.” When practicing passionate impartiality, facilitators can remain committed to upholding valued principles—such as inclusion, equity, mutual respect, democratic decision-making, or social justice—while also putting aside ideological biases or political preferences for the purpose of facilitating a constructive dialogue or process in which participants may come from different backgrounds, have different cultural identities, or hold competing interests, ideas, or viewpoints.
Discussion: When Impartiality May Not Be Advisable
Impartiality can be an effective facilitation strategy in many situations, but it may not be advisable in every situation. All facilitators bring biases, preferences, and ideological dispositions into their work, of course, because no one is capable of perfect impartiality, neutrality, or objectivity. Partiality is simply part of being human.
In some circumstances, acting authentically or practicing transparency may be more effective facilitation strategies than maintaining the appearance of impartiality. For example, facilitators might discuss their identities or cultural backgrounds to connect with participants on a personal level or encourage them to share their personal stories, or they may discuss their own biases as a way to model self-awareness and intentional self-reflection for participants.
In addition, different engagement goals or community audiences may require different facilitation strategies. A principles-based approach to organizing, engagement, and equity is based on the premise that the fundamental elements of the work—such as facilitation, authenticity, or transparency—can be customized to meet the distinct needs of the moment. A standard strategy that works in most cases may not work in specific cases, and facilitators may need to rely on instinct, judgement calls, or their personal knowledge of participants—rather than prescribed facilitation strategies—given that every community is unique and social dynamics are ever-changing.
6. Providing useful information and context
Community members will enter an organizing, engagement, or equity process with different levels of knowledge about a given topic, different levels of experience with the process being used, and different ideas about how the process should go or what the outcomes should be. At the outset of a process or dialogue, facilitators often provide essential information that helps participants establish a foundation of common understanding.
- For example, facilitators may provide suggested definitions for terms with nuanced or complex meanings—such as “organizing,” “engagement,” or “equity”—so that participants can discuss where their respective definitions either converge or diverge. Facilitators and participants may also co-develop new definitions that reflect the different interpretations and understandings that emerged during the discussion. Suggesting or co-creating definitions is one method facilitators might use to help groups develop a “shared language,” which can minimize confusion, misunderstandings, headstrong debates, and other reactions or behaviors that tend to occur when people are using the same words but defining them differently. In addition, “co-constructive” activities, such as collaboratively developing shared definitions in groups, can also give participants an opportunity to learn from one another and improve their knowledge and understanding of complex or nuanced concepts and practices.
- Facilitators may also provide information or data to establish a set of baseline facts for a discussion or process. Discussions based on assumptions, misinterpretations, flawed information, or rumors can go off track, cause confusion, and compromise the effectiveness of a problem-solving activity or decision-making process (because the problem is less likely to be solved or the eventual decision is less likely to be effective). Whether it’s the demographic data for a community, disciplinary rates for a school, or the pros and cons of a proposed policy, grounding a discussion or process in a set of agreed-upon facts can help to keep discussions focused and constructive. For example, a foundation of agreed-on facts reduces the likelihood that participants will get into lengthy disagreements about the accuracy or sourcing of factual information, and it can also help participants stay focused on a single issue, rather than lapsing into digressive discussions about multiple unrelated issues. If a dispute arises about the accuracy of particular information, the disagreement can be noted and recorded by the facilitator for fact-checking later on—a facilitation strategy that can help refocus the group on the issue under discussion and keep the process moving forward.
- When groups discuss issues in their particular organization or community, facilitators can provide a larger scope of information that helps participants contextualize or better understand the problem or opportunity being discussed—which can result in more effective proposals and better-informed decisions. For example, participants may rely on their own limited personal experiences and subjective perceptions—rather than on statistical data that illuminates larger trends over time—which might bias or limit proposed ideas in certain circumstances. Or if a community group is discussing student behavior and disciplinary policies in a school, the discussion may be influenced by unconscious bias, negative past experiences with disruptive youth, or limited exposure to alternative approaches to student discipline. Consequently, participants may propose ideas that are not based on what’s actually happening with student behavior in the school, or they may only consider the traditional forms of discipline they’re familiar with. In this example, a facilitator might provide statistics showing disciplinary rates for different student populations in the school, district, state, and country, and descriptions of alternative approaches to discipline that have been effective in other schools.
- Discussion guides are often used to provide the essential information and context that will help participants to engage in a productive discussion or process. Discussion guides include features such as framing questions for a dialogue, relevant data presented in easy-to-understand charts or graphs, and descriptions of the purpose, structure, and timeline of a process. In some cases, discussion guides will be developed by a diverse committee that represents different perspectives, roles, or cultural backgrounds in an organization or community, and the guide will explain who was involved and how the process was organized. A community-constructed discussion guide can provide a variety of advantages. For example, skeptical or distrustful participants may be less suspicious of a process that was developed by a group that included people they feel represent their perspective, or the framing questions may be more relevant to the needs or concerns of participants, and more sensitive to cultural differences, because they were developed by people who know the community well.
7. Guiding the discussion or process
A facilitator’s central role is to guide a discussion or process so that groups, organizations, and communities can achieve self-identified goals or take actions that are in the best interests of their staff, students, families, and other stakeholders. In the execution of that role, facilitators may use an expansive range of strategies that have been developed by facilitators over decades of practice and real-world application. Below are a few illustrative examples of facilitation strategies that are commonly used in organizing, engagement, and equity work:
- Facilitators may try to talk as little as necessary to free up as much time for group discussion and deliberation as possible. As the conversation proceeds, facilitators typically listen more and talk less, which helps the group own the discussion and its outcomes. Facilitators also intentionally monitor their own verbal behaviors to make sure they are modeling the kind of comments and exchanges they want the group to engage in. Participants pick up on a facilitator’s subtle (or not so subtle) social cues, so facilitators who act respectfully, for example, will tend to encourage mutually respectful behavior in the group.
- Rather than standing on a stage or in front of a room—physical positions that convey authority or control—facilitators often assume the same physical position as participants, such as sitting in a group circle or at a discussion table. They may adopt a relaxed attitude or speaking style to help participants feel more at ease, and they may maintain a physical posture that is confident without being assertive. Facilitators typically strive to be in control, but not controlling, and they want the attention to be focused on the group, not themselves.
- Facilitators attend to the flow of the conversation among participants by, for example, making sure that discussions don’t become back-and-forth exchanges between two outspoken individuals, or that quieter and less assertive participants are given opportunities to contribute.
- Periodically in a discussion or process, facilitators usually summarize the main ideas that have emerged from a group discussion, which may either take the form of verbal summaries or the documentation of ideas on poster board or screen projections so that all participants can see and validate the written record of their discussion. In some cases, facilitators may ask the group for a volunteer who would like to take notes, or they may design the process so that note-taking responsibilities may be shared by multiple participants taking turns. Facilitators will often check in with a group at regular intervals in a process to confirm that the main ideas are being captured accurately, and efforts are usually made to record discussions, to the extent possible, in the participants’ own words. When participants can visually see that their specific comments and contributions have been accurately recorded by a facilitator or notetaker, it can help to increase trust and confidence in a process.
- Written records of group discussions typically include points of agreement and disagreement to ensure that all perspectives and contributions are preserved, most commonly in the form of written summary reports that are shared with participants. Recording all sides of a discussion or disagreement not only communicates to participants that their contributions were recognized and valued, but it also ensures that authority figures, majority groups, and other historically dominant voices do not unilaterally control and determine the written record of a proceeding—and therefore the perception of what did or did not happen or what was or was not agreed to.
- In addition, dissenting viewpoints, constructive criticism, and the perceptions of non-majority participants often introduce creative, unexpected, and revealing insights that might otherwise have been ignored, dismissed, or silenced in an organizational or community decision-making process. If only areas of “agreement” are recorded, the record often reflects the majority viewpoint of dominant groups, which can be selectively biased in any number of ways. Dissenting, critical, or non-majority perspectives can also help groups develop a more complete, nuanced, and accurate understanding of a community problem, for example, which can help to bridge cultural or ideological divides and enable groups to develop ideas, plans, or proposals that are more likely to be effective.
- Facilitators typically monitor the focus—or lack of focus—in a group discussion or process. When participants lose focus—such as when individual participants digress from the topic at length or the discussion starts to go in several unrelated directions—facilitators will typically intervene, note where the discussion went off track, and ask the group if they would like to re-focus on the framing question or topic at hand. In some cases, facilitators will call for a brief “time out” or introduce an activity to help reset the discussion and refocus a group. For example, a break can help to dissipate group tensions in a discussion that’s become heated or contentious, and physical activities can help to re-energize groups that have become visibly distracted or lethargic.
- Facilitators routinely use a variety of questioning strategies to help groups share their personal experiences, talk candidly about difficult issues, clarify their ideas, or “complicate” a discussion by asking participants to reflect on their own biases or consider nuances that might otherwise be overlooked. Facilitators will often encourage group participants to ask one another questions, and they may describe effective questioning techniques. For example, a facilitator might suggest that participants ask “probing” or “clarifying” follow-up questions when someone expresses a viewpoint they disagree with, rather than immediately challenging the comment based on uninformed assumptions about the other person’s values, beliefs, or motivations.
- Other questioning techniques may be used to introduce community perspectives that are not represented in a group discussion or decision-making process. If a particular cultural perspective or community role is absent, for example, facilitators might ask the group to consider which perspectives are missing and what those individuals might think about the issue at hand if they were present. A facilitator might ask: “If the superintendent was here right now, what might she say about this issue?” or “This group seems to agree that we need to increase the school budget. But what if a few families struggling to pay their property taxes were here tonight? What do you think those parents might say?”
- Another common facilitation technique is creating space for groups to debrief or reflect on a discussion or process. Many formal or public decision-making processes, such as a city council or school board meeting, will conclude without any discussion of the process that was used or the outcomes that resulted, which can be frustrating to community members who might have felt their viewpoints or concerns were excluded from the proceeding. Facilitators will typically build in time both during and at the conclusion of a group dialogue for participants to discuss which elements of the process worked well for them or didn’t work so well, or facilitators may ask each member of the group to share their thoughts on the decision or outcome that resulted from the process. In some cases, facilitators will also take notes during these discussions so that written summaries can be provided to participants. Creating a structured forum for debriefing and group reflection is another way that facilitators support inclusive, fair, and democratic decision-making in organizations and communities.
8. Building facilitation capacity in an organization or community
Building facilitation capacity—that is, increasing the number of skilled facilitators by providing training, practice sessions, and other opportunities that help them acquire or improve their facilitation skills—can be one of the most powerful and transformative organizing, engagement, and equity strategies available to schools, organizations, and communities. Because facilitation helps people converse and collaborate in more respectful and productive ways, facilitators often play an instrumental role in helping groups overcome deeply rooted institutional dysfunction, patterns of abusive behaviors, toxic cultures and interactions, or misuses of power and authority.
- In many schools, organizations, and communities, the only individuals with facilitation experience or skills are certain kinds of professionals—such as educators, school administrators, or public officials—who routinely use facilitation in their work. When facilitation skills are unevenly distributed, facilitation roles often default to those with experience. And if the available facilitators are not intentional about or committed to practicing inclusion, equity, or democratic decision-making, the discussions or decision-making processes they facilitate are less likely to be fully inclusive, genuinely fair, or authentically democratic. For example, the facilitators may select locations that are comfortable for them, such as a school facility or town-hall conference room that may not be welcoming or readily accessible to some community members or groups, or the facilitators may design a process that doesn’t provide enough time, or the right structure, for all participants to contribute equitably.
- The strategic use of facilitation is also a way to build power in a community, particularly among individuals and groups that may have aligned interests but that have not worked together in the past. For example, the success of a community-organizing campaign is often determined by a group’s ability to negotiate different interests and priorities while engaging in a productive and democratic decision-making process that all participants feel is fair and legitimate. If community organizers ask students, families, and interest groups to volunteer their time to attend meetings that are disorganized, combative, and unproductive, the campaign is unlikely to get off the ground, mobilize a sufficient number of stakeholders, or build the kind of passionate, sustained commitment that’s required to execute a successful campaign over weeks, months, or years.
- Because facilitation skills take time and practice to acquire, schools, organizations, and communities may not have enough skilled facilitators available unless they proactively invest in building facilitation capacity well before that capacity is needed. For example, facilitators often help communities come together and heal in the aftermath of a tragedy or crisis, but when unforeseen circumstances suddenly arise, communities may not have a group of facilitators they can call in or rely on. In addition, activities such as facilitated community dialogues often surface concerns or problems that may have long been ignored or dismissed by those in power, but if the dialogues are never held—if there are no facilitators to organize and guide them—those concerns and problems may continue to be ignored. Facilitation can be used strategically to surface community issues that demand action, mobilize community members to address those issues, and activate responses after issues occur.
- Building facilitation capacity in a school, organization, or community is another way to develop and strengthen youth, family, and community leadership skills and ensure more diverse representation in leadership roles. Confident and skilled facilitation can be a vital leadership ability, and community members who can organize and facilitate a group process often take on or evolve into other leadership roles. Community organizers, for example, might intentionally recruit and train youth and family facilitators from diverse cultural backgrounds or different neighborhoods so that they can be called on when facilitation is needed in a particular cultural community or neighborhood. Having a diverse coalition of facilitators who can alternate leadership and facilitation roles also allows communities and groups to model inclusivity, diversity, and democratic representation in their practice.
Organizing Engagement thanks Bruce Mallory, Kip Holley, and Jon Martinez for their contributions to developing and improving this resource.
This work by Organizing Engagement is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. When excerpting, adapting, or republishing content from this resource, users should reference and link to Organizing Engagement. |
What is artificial intelligence (AI)? We take the term for granted, but how might we phrase a formal definition? And are the technologies that we have today really reflective of all that this term implies?
Traditionally a branch of computer science, AI as a holistic concept has drawn on many academic disciplines, from philosophy to physics. Many are aware of the recognized origin of the term – coined by famed computer scientist John McCarthy in 1956 at the “Dartmouth Summer Research Project on Artificial Intelligence.” Since that time, as research and technology in this area have evolved, definitions of AI have shifted across a wide spectrum, and academics, businesspeople, and laypersons hold a range of definitions (some better informed and reasoned than others – though again, the utility of such a term can depend on background and objectives).
One of the reasons AI is so difficult to define is that we still don’t have a set definition or one solid concept for intelligence in general. Intelligence is often dependent on context. A traditionalist might define intelligence as a level of reasoning power, and this seems to be one reason why games have so often served as a popular test of AI – man and machine try to ‘outthink’ the other, or in the case of the machine at least match the human, so that it becomes difficult to tell where man begins and machine ends.
But, in the end, mastering a game (like Go) is very different from sealing a successful business deal in the real world, then driving home to have a meal with your family and reading and reflecting on a bit of Plutarch before bed. In any case, researchers Shane Legg and Marcus Hutter have made the case that intelligence includes the following features (a formal sketch of their measure follows the list):
- Intelligence is a property of some entity or agent that interacts with some form of environment
- Intelligence is generally indicative of an entity’s ability to succeed (by a given set of criteria) at a particular task or to achieve a stated goal
- When speaking of an “authentic” intelligence, there is an emphasis on learning, adaptation, and flexibility within a wide range of environments and scenarios
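For readers who want the formal counterpart to these features, Legg and Hutter have also condensed them into a single “universal intelligence” measure, which scores an agent by its expected performance across all computable environments, weighted so that simpler environments count more. A sketch of that measure, following their published formulation:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}$$

Here $\pi$ is the agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$ (so the weight $2^{-K(\mu)}$ favors simpler environments), and $V_\mu^{\pi}$ the expected cumulative reward the agent earns in $\mu$. The measure is uncomputable in practice, but it formalizes the features above: interaction with an environment, goal-directed success, and breadth across a wide range of scenarios.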
Emerj’s Definition of Artificial Intelligence:
*NOTE: Artificial intelligence (AI) can be separated into two branches of entities – that of ‘smart’ computers or systems (such as today’s deep learning), and a still unrealized ‘artificial general intelligence’ or AGI. We include this preface to help distinguish between the two in our present state of technological development. Our definition attempts to define an entity rather than a field of study, and also utilizes broad or somewhat open terminology to allow room for evolution and growth of the AI field as we know it. Our attempt at an informed, “living” definition of AI is below:
“Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time.”
How We Arrived at Our Definition:
As with any concept, artificial intelligence may have a slightly different definition, depending on whom you ask. We combed the Internet to find five practical definitions from reputable sources:
1. “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.” – Stanford
2. “Artificial Intelligence is the study of man-made computational devices and systems which can be made to act in a manner which we would be inclined to call intelligent.” – The University of Louisiana at Lafayette
3. “Defining artificial intelligence isn’t just difficult; it’s impossible, not the least because we don’t really understand human intelligence. Paradoxically, advances in AI will help more to define what human intelligence isn’t than what artificial intelligence is.” – O’Reilly
4. “The ability of a machine communicating using natural language over a teletype to fool a person into believing it was a human. ‘AGI’ or ‘artificial general intelligence’ extends this idea to require machines to do everything that humans can do, such as understand images, navigate a robot, recognize and respond appropriately to facial expressions, distinguish music genres, and so on.” – Matt Mahoney, PhD, Data Compression Expert
5. “The scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.” – AITopics.org
We sent these definitions to experts whom we’ve interviewed and/or included in one of our past research consensuses, and asked them to respond with their favorite definition or to provide their own. Our introductory definition is meant to reflect the varied responses. Readers should note that machine learning and artificial intelligence share the same definition in the minds of many; however, there are some distinct differences readers should recognize as well. Below are some of the experts’ responses:
Dr. Andras Kornai, Budapest Institute of Technology
I like them all, except for #3, in that no exact definition is needed to work on a problem. We still don’t fully understand gravity, and can measure it only to one part in ten thousand (most other physical phenomena we can now measure to one part in a billion or even better), yet it would be silly to say “it’s impossible to define gravity”.
Some clarity may be required to distinguish “good old-fashioned” AI from modern AGI. GOFAI was centered on conceptual modeling by symbol manipulation, e.g. the planning required to win a chess game, and took the Turing Test as its central goal. AGI demands more: reaching comparable levels in all forms of human intelligence is seen as the goal. This development is well emphasized in #4.
#1, #2, and #5 are right in emphasizing that A(G)I is primarily about creating algorithms that show intelligent behavior, and is not to be confused with cognitive science or brain modeling, which aim at explaining how a particular piece of hardware, the human brain, gets there. We may, or may not, be able to steal ideas from nature; this remains to be seen.
Dr. Ashok Goel, Georgia Institute of Technology
Artificial Intelligence is the science of building artificial minds by understanding how natural minds work and understanding how natural minds work by building artificial minds.
Dr. Pei Wang, Temple University
This is a complicated problem. I have a paper on it: What Do You Mean by “AI”?, in which “intelligence” is defined as “adaptation with insufficient knowledge and resources.”
(In response to definitions 1 and 2) – Defining AI using “intelligent” or “intelligence” is a circular definition. The statement is agreeable, but does not provide clear guidance to the research.
(In response to definition 3) – Our understanding of human intelligence is a matter of degree. This opinion is encouraging blind trial-and-error, which is not good advice for any scientific research.
(In response to definition 4) – Too anthropocentric. AGI can be easily distinguishable from human beings, while still being considered as highly intelligent.
(In response to definition 5) – Better than the others, though still uses “intelligent”.
Dr. Vincent Müller, Anatolia College
Definition 1 is okay. Of course it leaves open the minor question of what ‘intelligence’ means. I think it’s important to see that AI is about making, and that it is distinct from cognitive science – even though traditionally this was seen otherwise. That’s not a definition, however.
(Dr. Müller provides a link to his related work: New developments in the philosophy of AI)
Dr. Dan Roth, University of Illinois at Urbana-Champaign
Any definition of Artificial Intelligence will have to be vague enough due to our inability to define Human Intelligence. But I would say that this is the scientific field that attempts to understand the foundations of intelligent behavior from a computational perspective. It focuses on developing theories and systems pertaining to intelligent behavior, at the heart of which is the idea that learning, abstraction and inference have a central role in intelligence.
We also found and chose to include a more recent and commonly-accepted textbook definition to build on our perspective. In “Artificial Intelligence: A Modern Approach”, Stuart Russell and Peter Norvig defined AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” This definition by its nature unites many different splintered fields – speech recognition, machine vision, learning approaches, etc. – and filters them through a machine that is then able to achieve a given goal.
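To make the agent framing concrete, below is a minimal, hypothetical sketch of the percept–action loop that this definition describes. The thermostat scenario, class names, and parameter values are invented for illustration – they are not from Russell and Norvig’s text – but the loop itself (sense the environment, choose an action, affect the environment) is the structure their definition points to:

```python
from dataclasses import dataclass
import random

@dataclass
class Percept:
    """A single observation the agent receives from its environment."""
    temperature: float

class ThermostatAgent:
    """A deliberately simple 'intelligent agent': it maps each percept to an
    action in pursuit of a goal (keeping temperature near a setpoint)."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, percept: Percept) -> str:
        if percept.temperature < self.setpoint - 1.0:
            return "heat"
        if percept.temperature > self.setpoint + 1.0:
            return "cool"
        return "idle"

class Environment:
    """A toy environment whose state the agent's actions affect."""
    def __init__(self):
        self.temperature = 15.0

    def sense(self) -> Percept:
        return Percept(self.temperature)

    def apply(self, action: str) -> None:
        drift = random.uniform(-0.3, 0.3)  # exogenous change the agent must adapt to
        effect = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
        self.temperature += effect + drift

env, agent = Environment(), ThermostatAgent(setpoint=21.0)
for _ in range(30):  # the percept -> action -> effect loop
    percept = env.sense()
    env.apply(agent.act(percept))
print(f"temperature after 30 steps: {env.temperature:.1f}")
```

Nothing in this sketch “learns” – it is only the agent–environment skeleton. The approaches listed later in this article (neural networks, reinforcement learning, and so on) are different ways of filling in the act function with behavior that improves from experience.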
Strong AI versus Weak AI
Strong AI – Also known as deep AI, or what some might call deep AGI; the idea that a computer can be made or raised to intelligence levels that match human beings’.
Weak AI – Otherwise known as narrow AI; the idea that computers can be endowed with features that mirror or mimic thought or thinking processes, making them useful tools for figuring out how our own minds work. Narrow AI systems also enhance or augment human “intelligence” by delivering calculations, patterns, and analyses more efficiently than a human brain can.
The field of artificial life branches out further from traditional AI to include the study and mimicry of various biological forms and organisms that exhibit a range of “intelligent” behaviors.
One way to categorize AI solutions for commercial and scientific needs is by level of complexity of the application: simple, complex, or very complex (though these are, clearly, also open to interpretation). This is an idea borrowed from the Schloer Consulting Group:
Simple – Solutions and platforms for narrow commercial needs, such as eCommerce, network integration or resource management.
- Examples: Customer Relationship Management (CRM) software, Content Management System (CMS) software, automated agent technology
Complex – Involves the management and analysis of specific functions of a system (domain of predictive analytics); could include optimization of work systems, predictions of events or scenarios based on historical data; security monitoring and management; etc.
- Examples: Financial services, risk management, intelligent traffic management in telecommunication and energy
Very Complex – Working through the entire information collection, analysis, and management process; the system needs to know where to look for data, how to collect it, and how to analyze it, and then propose suggested solutions for near- and mid-term futures.
- Examples: Global climate analysis, military simulations, coordination and control of multi-agent systems
In similar fashion to AI solutions organized by capability, there exists a continuum of AI with regard to level of autonomy:
Assisted Intelligence – Involves the taking over of monotonous, mundane tasks that machines can do more efficiently.
Augmented Intelligence – A step up toward a more authentic collaboration of “intelligence”, in which machines and humans learn from each other and in turn refine parallel processes.
- Example: Editor from The New York Times
Autonomous Intelligence – System that can both adapt over time (learn on its own) and take over whole processes within a particular system or entity.
- Example: NASA’s Mars Curiosity Rover
Approaches to Achieving AI:
It seems important that any definition not limit or restrict the inner workings of AI or the approaches used to create an AI entity – a point noted by Legg and Hutter in their paper A Formal Definition of Intelligence for Artificial Systems.
The methods taken toward achieving a “true AI” or AGI are wide and varied, but some come closer than others to achieving an adaptive, flexible, and autonomous intelligence that is more characteristic of human beings (and likely of intelligences that do or will exist beyond our own).
Approaches that have evolved and continue to receive wide recognition in the media include (though are not limited to) the following; a minimal sketch of one of these approaches appears after the list:
- Artificial neural networks
- Reinforcement learning
- Self-supervised learning
- Multi-agent learning
- Machine learning
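As one concrete illustration of the reinforcement-learning approach named above, here is a toy sketch of tabular Q-learning on a tiny “corridor” world. The environment, reward scheme, and parameter values are invented for illustration – this is a teaching sketch of the core update rule, not a production method:

```python
import random

# A toy corridor world: states 0..4, start at state 0, reward only at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-table: estimated future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def greedy(state):
    """Best known action, breaking ties randomly so early episodes still explore."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit, occasionally try a random action.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        # Core Q-learning update: move the estimate toward
        # (immediate reward + discounted value of the best next action).
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should point right (+1) at every non-goal state.
print("learned actions per state:", [greedy(s) for s in range(N_STATES)])
```

The essential idea – an agent improving its behavior from environmental feedback rather than from explicit programming – is the same one that, at vastly larger scale and with neural networks standing in for the Q-table, underlies well-known game-playing systems.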
Limitations in Defining AI:
No matter what we do, we can’t (at present) escape our biological and social trappings as human beings, which means that any definition put forth, and any test that we conceive for a “true” artificial intelligence, is at risk of anthropocentrism and subjectivity.
What we can do is cultivate an awareness of our biases and opinions and strive to seek out a broader or more “universal” notion of intelligence that encompasses a range therein; achieving the creation of an artificial intelligence in our likeness may be one of the ultimate challenges of our times, but there are likely thousands or even millions of types of artificial intelligences that we could aim to conceive and create (or may not be capable of conceiving and creating).
While it’s science’s aim to discover truth and knowledge at the heart of every system and process in the universe, we should recognize that we may not be able to understand the workings of an artificial intelligence that we one day create – in fact, that’s already the case with deep learning and neural networks, much as we still lack a complete understanding of how the human brain works. At some point, we may not be able to keep up with an AI’s processing powers and ways of seeing and conceiving of reality as we know it.
NOTE: It’s beyond the scope of this article to give a cohesive, historical overview of AI, or of today’s landscape. Instead, our intent is to provide a jumping off point for understanding and further exploring the history, workings, and consequences of AI in today’s increasingly automated and augmented landscape.
Related Interviews and Articles on Emerj:
If you’re interested in getting a lay-of-the-land perspective on the implications and applications of artificial intelligence, you might enjoy this curated selection of some of our more popular AI overview topics, listed below:
- Popular Interview: Dr. Nando de Freitas – Deep Learning is Like Building with Lego
Image credit: Nomura Connects |
Wastewater through the ages
For centuries, wastewater seeped directly into the ground or it was discharged into flowing waters. Contaminants entered the soil and polluted drinking water in wells, which caused disease and epidemics. It wasn’t until the second half of the 18th century that people realized the close connection between wastewater seeping into the ground and a city’s sanitary conditions. This triggered the systematic construction of drainage networks.
Ulm's Wastewater gutters
Exposed Wastewater gutter construction
Construction of a Treatment Plant
Construction of the sedimentation basin
1836 - 1900 Sewer system in Ulm
Far into the 19th century, private residences and commercial businesses simply poured their wastewater out onto the road, where it collected in open gutters. Or it was directed into sewer drains (Dolen), from where it usually went into the rivers Danube or Blau. The faecal matter of the inhabitants of Ulm usually ended up in larger cisterns made of brick – also called cesspools or pits – which had to be emptied from time to time.
In 1836, the city of Ulm started constructing a number of covered sewer drains that led into the rivers Danube and Blau. By 1900, most of the houses were connected to this sewer network. Faecal matter, however, still had to be disposed of via the cesspools, a job which was increasingly taken over by machines thanks to the technical progress of the age. Back then, “Pfuhl’s artillery” was the commonly used term for the Pfuhl wagons in Ulm that emptied the cesspools because they looked like artillery carriages.
1873 Water Closet
In 1873 the establishment of a central water supply led to the introduction of the progressive “water closet” in Ulm’s households. This, in turn, necessitated the construction of a suitable water-based sewer network that needed to be designed large enough to handle such large wastewater amounts.
1910 The Pits of Ulm
In 1910, the city was ordered to build a wastewater treatment plant. Until such a plant existed, the sewage had to be pretreated in residential treatment systems, the so-called “Ulmer Gruben” (“Pits of Ulm”). These residential treatment systems were based on the following technical principle: Solid particles settled into the pits. The liquid was introduced as wastewater into the sewer system and went into the rivers Danube or Blau.
1912 Neu-Ulm sewer network
The young city of Neu-Ulm was lucky enough to apply state-of-the-art technical knowledge about the construction and operation of a sewer network from the beginning. In 1900 a central water supply system was built and, as of 1912, both the eastern part of the city and the city center had a sewer network. Before the wastewater drained into the Danube, it was supposed to undergo at least mechanical treatment. In 1912, the city was asked to build a wastewater treatment plant. But because of the two world wars, it took a few decades before the cities of Ulm and Neu-Ulm were actually able to build a joint wastewater treatment plant.
1930 Main sewer canals in the city of Ulm
It wasn’t until the mid-1930s that main sewer canals connected the sewers and managed to collect all the wastewater and drain it into the Danube below Ulm’s Friedrichsau area.
1954 Construction of the Steinhäule Treatment Plant
It wasn’t until the construction of the Danube power plant “Böfinger Halde” that the cities of Ulm and Neu-Ulm had a clear idea where to construct a joint wastewater treatment plant. The power plant’s positioning made it possible to operate using the natural gradient of the new sewer drain. As a result, the plant was built at “Steinhäule”, which was equally advantageous for both cities.
1957 The first joint wastewater treatment plant
In 1957 – after two years of construction – the joint mechanical wastewater treatment plant, designed for a population equivalent of 242,000 (PE), went into operation.
1973 Biological treatment phase and sewage sludge treatment
For waterway protection reasons, the mechanical treatment system was comprehensively expanded with a biological treatment stage in 1973, which increased treatment performance from 25% to 90%.
In 1973, a system for dewatering and incinerating sewage sludge was built for the eco-friendly disposal of solids captured in the mechanical and biological treatment stages.
1977 - 1980 Capacity Expansions
Demographic and economic development resulted in a capacity expansion of the biological treatment stage to a population equivalent of 330,000 (PE) through the construction of additional aeration tanks and clarifiers.
1984 Establishment of the Administrative Union
The catchment area of the treatment plant gradually increased as smaller treatment plants were shut down and additional parts of the cities, as well as municipalities along the rivers Iller, Blau and Weihung, were connected. In 1984, this resulted in the establishment of the “Zweckverband Klärwerk Steinhäule” (Steinhäule Treatment Plant Administrative Union), which has operated the plant ever since and today consists of 12 association members.
1985 - 1988 Construction of a Blower Station and Installation of Turbo Compressors
The aeration tanks were converted from surface aeration to pressure aeration. The oxygen required by the microorganisms in the biological treatment stage was now introduced into the aeration tank using pressurized aeration. Four new turbo compressors improved the aeration in the aeration tank through so-called domes. Thanks to this process, treatment performance increased to 95%.
1989 - 1993 Capacity Expansions
The biological treatment stage was further expanded to cover a population equivalent of 440,000 (PE). In addition, a chemical treatment stage was put into operation to eliminate phosphorus (more than 90%).
1994 - 1997 Capacity Expansion and Flue Gas Purification
An additional treatment line was constructed which necessitated reconstruction and new construction measures to the wastewater distribution system within the treatment plant. The sewage sludge incineration system was equipped with a new flue-gas treatment system to reduce emissions.
1999 - 2000 Reconstruction of the Mechanical Treatment Stage
The mechanical treatment stage was comprehensively overhauled: Reconstruction of the screen and grit chamber.
2000 - 2005 Expansions
The treatment plant was expanded with another clarifier and a high-water pump station. The denitrification plant and biological phosphorus elimination were reconstructed.
2007 - 2015 The adsorptive treatment stage
Construction and commissioning of the adsorptive treatment stage. |
Determining Stoichiometric Ratios:
NaOH and HCl & NaOH and H2SO4 Reactions
Materials & Procedure
Conclusion & Evaluation
Stoichiometry is a critical component in chemistry, and helps in understanding the quantitative relationship between the number of moles of reactants and products in a reaction.
In this experiment, the reactions between sodium hydroxide and hydrochloric acid, and sodium hydroxide and sulfuric acid will be studied.
When will the maximum extent of the reaction occur? Which will be the limiting reagent? Which of the two acid-base combinations will absorb/liberate the greatest amount of heat energy?
When NaOH and HCl are mixed, the maximum extent of the reaction should occur approximately when the amount of acid is slightly higher than the base. HCl will be the limiting reagent. When NaOH and H2SO4 are mixed, the maximum extent of the reaction should occur approximately when the amount of base is slightly higher than the acid. NaOH will be the limiting reagent. Of the two combinations, the reaction between NaOH and H2SO4 will likely absorb/release the greatest amount of heat energy, because the mole ratio is 2:1, whereas the mole ratio of NaOH and HCl is 1:1.
The amount of each reagent in the acid-base reactions will be systematically varied between the trials, and will total 50 mL when combined. One of the two reagents will begin at 45 mL, and the other at 5 mL. The former will decrease in increments of 5 mL while the latter increases in increments of 5 mL. The independent variable is the varying measurement of each reactant, while the dependent variable is the temperature change as a result of the reaction taking place. It is crucial that the molarity of the acid and base remain constant.
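As a sanity check on this design, the sketch below runs the mole arithmetic for each 5 mL increment and reports which reagent limits the reaction. The 1.0 M working concentration anticipates the dilution described in the procedure; the function and its names are our own illustration, not part of the lab handout.

```python
def limiting_reagent(v_base_ml, v_acid_ml, molarity=1.0, acid_protons=1):
    """Report which reagent limits an acid-base neutralization.

    acid_protons: 1 for HCl (1 NaOH : 1 acid), 2 for H2SO4 (2 NaOH : 1 acid).
    """
    mol_base = molarity * v_base_ml / 1000.0   # mol NaOH
    mol_acid = molarity * v_acid_ml / 1000.0   # mol acid
    base_needed = acid_protons * mol_acid      # NaOH required to consume all acid
    if mol_base < base_needed:
        return "NaOH limits"
    if mol_base > base_needed:
        return "acid limits"
    return "exact stoichiometric ratio"

for v_base in range(5, 50, 5):       # 5, 10, ..., 45 mL NaOH
    v_acid = 50 - v_base             # total volume fixed at 50 mL
    print(v_base, "mL NaOH /", v_acid, "mL acid:",
          limiting_reagent(v_base, v_acid, acid_protons=1),      # NaOH + HCl
          "/", limiting_reagent(v_base, v_acid, acid_protons=2)) # NaOH + H2SO4
```

With equal molarities, the 1:1 reaction balances exactly at 25 mL/25 mL, while the 2:1 reaction balances near 33.3 mL NaOH to 16.7 mL H2SO4 – a point that falls between the 5 mL increments of this design.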
Materials & Procedure
10 mL graduated cylinder
25 mL graduated cylinder
50 mL graduated cylinder
1 mL volumetric pipet
3 mL volumetric pipet
5 mL volumetric pipet
10 mL volumetric pipet
50 mL volumetric pipet
100 mL volumetric pipet
500 mL volumetric flask
1000 mL volumetric flask
250 mL, 1000 mL beakers
Plastic foam cup
3.0 M solutions of HCl, H2SO4, and NaOH
Safety goggles should be worn to prevent any of the chemicals from accidentally reaching the eyes. Closed-toe shoes are necessary in case of any lab-related accidents. Refrain from eating or drinking in the lab area; food or drinks can be contaminated. Gloves are not necessary for handling NaOH and HCl, but if any skin comes into contact with the chemicals, it should be washed quickly with soap and water. Gloves and an apron are necessary when handling H2SO4; this acid will burn through clothes and skin, so take precautions when handling it. H2SO4, in large quantities, should be used under a fume hood. If any H2SO4 is spilled, check yourself first. Any contaminated material should be immediately removed, and exposed skin or eyes should be washed for at least fifteen minutes. Once you have checked yourself, evaluate the spill amount. If there is a large pool, the area should be evacuated and a hazardous materials team called in; the fumes can be fatal.
If the amount of H2SO4 is small enough, NaHCO3 (baking soda) or soda ash can be thrown on the acid to neutralize it. The remains should be swept up, and the area washed multiple times with water and more NaHCO3.
Procedure: Reaction Between NaOH and HCl
1. Gather all materials.
2. Prepare 500.00 mL of 1.0 M HCl from the 3.0 M solution available in the appropriate volumetric flask, and prepare 500.00 mL of 1.0 M NaOH from the 3.0 M solution available in the appropriate volumetric flask.
3. Stir the solutions vigorously for a...
Researchers at the University of Colorado taught mindfulness techniques to a class of Denver 4th graders, who practiced it daily during homeroom check-in for 10 weeks. After comparing those students to a similar class without mindfulness, they reported improvements in pro-social behavior, emotional regulation and academic performance for those who practiced it. Their conclusion?
Mindfulness in urban classroom settings is a feasible option for students, helping with personal stress and coping as well as emotional and behavior regulation in schools and at home.
The methods came from two sources of mindfulness curricula for children:
- MindUp, from the Hawn (as in Goldie Hawn) Foundation
- Mindful Schools, which came out of a project in Oakland, California |
Red Planet Day is a holiday to celebrate our neighbor in the solar system, Mars, affectionately called the red planet. The holiday is celebrated on the date of the launch of the spacecraft Mariner 4 on November 28, 1964. The spacecraft came closer to Mars than any had ever come before it and gave us our first close-up pictures of the planet. Scientists have been discovering more and more about Mars ever since. Red Planet Day is a great day to educate yourself about what we currently know and what we are doing to learn more. If you have a telescope, another way to celebrate this holiday is to locate Mars in the sky and see it firsthand. November is often a great time to see it in most locations. |
As a new teacher, one of the resources I found most helpful in shaping my grading practices was Grant Wiggins’s advice on feedback and assessment. Meaningful feedback, he suggests, is much more than assigning a grade or even offering recommendations for improvement. Rather, meaningful feedback is descriptive, “play[ing] back” the student’s performance and connecting it to the learning outcomes of the course.
In the context of my field, freshman composition, this means that meaningful feedback involves first describing to a student the strengths and weaknesses of her rhetoric and style, then explaining how those strengths and weaknesses affect my ability to follow and be persuaded by her paper. I work hard to provide such feedback, especially since I am convinced that it plays a key role in helping students learn to write.
Yet I face a real challenge in providing it consistently. In the one-inch margins of a printed paper there’s barely enough space to write a few brief words of advice or critique, let alone provide descriptive and meaningful feedback. I have therefore turned to a technological solution: screencasts.
Screencasts for formative assessment
I first encountered screencasts as a feedback tool several years ago in an online course I was taking for professional development. My instructor used screencasts to comment on my work. Inspired, I started using screencasts the following semester to comment on writing submitted in my face-to-face courses. Since then, I have come to rely on screencasts during both formative and summative assessment.
Like most writing teachers, I regularly collect and comment on partial or incomplete copies of student work, such as an outline or a rough draft. I now use screencasts to assess this work. Specifically, during formative assessment I use screencasts to:
- Visually highlight a passage that needs attention.
- Highlight an error repeated throughout the document.
- Describe errors at length and explain how those errors interfere with my ability to follow the paper and be persuaded by it.
- Employ analogy and example to assist students in understanding the error.
- Demonstrate corrections to an error.
Consider, for example, my work with a student last spring. Although she was a top-notch student, she had a wordy, convoluted writing style. Rather than leaving inscrutable advice in the margins, I created a screencast. I highlighted several examples of wordiness, and then, referring to these examples, described several causes of wordiness in her writing and demonstrated possible ways to cut down on the wordiness. Thanks to the screencast, the student was able to literally see what she was doing wrong as a writer and how to improve it, allowing her to make the changes she needed to tighten up her writing.
Screencasts for summative assessment
Screencasts offer the potential for the precise, descriptive feedback students need to make effective changes in their work. But even when students’ work is finished, screencasts still play a role in assessment; their visual nature makes them a valuable part of summative assessment as well.
During grading, I often use screencasts to:
- Match passages of a student paper with the grading rubric
- Explain my reasoning for a grade fully and conversationally
- Provide alternative feedback to audiovisual learners, or students with learning disabilities
The ability to visually match a student’s paper with the rubric and to explain my reasoning for a grade is especially valuable when the situation is sensitive – for instance, when a student receives a low score despite working hard on an assignment.
I recently had such a situation in my freshman literature course. A student, a diligent worker and an aspiring English teacher, revised a project in hopes of earning a higher score. Unfortunately, the revision remained mediocre. To provide him with feedback, I decided to use a screencast. I clearly highlighted several passages where his work remained weak, then visually connected these with specific standards spelled out in the rubric. This allowed the student to see why his paper received the score that it did, as well as hear my sympathetic tone, softening the moment and providing some encouragement.
I use screencasts as the primary means of providing feedback to students with learning disabilities. When they already have difficulty with written text, it seems unfair to ask that they decipher cramped handwritten notes; the visual nature of the screencast better demonstrates for them the strengths and weaknesses of their work. Thus, the screencast is key in differentiating instruction, making room for diverse learners in the classroom.
Numerous programs exist to aid those who want to integrate screencasts into the assessment process. Personally, I use Screencastify, a Google Chrome extension, and Jing, a user-friendly tool from TechSmith. Educational writer Andrew Douch lists several more options at his blog, including the popular Screencast-o-Matic. Most screencasting programs have a basic free version and several options for paid versions.
Although they are web-based tools, screencasts play a valuable role in face-to-face courses, allowing instructors to offer descriptive and, in Grant Wiggins’ words, “actionable” feedback. With this feedback, students are better able to improve their work, and the entire writing process becomes a more positive, fulfilling learning experience.
Douch, Andrew. “The Best Screencasting Software for Teachers.” Douchy’s Blog. 13 Feb 2014. https://andrewdouch.wordpress.com/2014/02/13/the-best-screencasting-software-for-teachers. Accessed 1 June 2017.
Wiggins, Grant. “Seven Keys to Effective Feedback.” Educational Leadership 70.1 (Sept 2012): 10-16. ASCD. http://www.ascd.org/publications/educational-leadership/sept12/vol70/num01/Seven-Keys-to-Effective-Feedback.aspx. Accessed 1 June 2017.
Megan Von Bergen serves as the sole writing and literature instructor at Emmaus Bible College. |
Birth – 3 months
- Makes pleasure sounds (cooing, gooing).
- Cries differently for different needs.
- Smiles when sees you.
4 – 6 months
- Babbling sounds more speech-like with many different sounds, including p, b and m.
- Vocalizes excitement and displeasure.
- Makes gurgling sounds when left alone and when playing with you.
7 – 12 months
- Babbling has both long and short groups of sounds such as “tata upup bibibi”.
- Uses speech or non-crying sounds to get and keep attention.
- Imitates different speech sounds.
- Has 1 or 2 words (bye-bye, dada, mama), although they may not be clear.
1 – 2 years
- Says more words every month.
- Uses some 1-2 word questions (“Where kitty?, “Go bye-bye?”, “What that?”).
- Puts two words together (“More cookie”, “No juice”, “Mommy book”).
- Uses many different consonant sounds at the beginning of words.
2 – 3 years
- Has a word for almost everything.
- Uses two or three words to talk about and ask for things.
- Uses k, g, f, t, d, and n sounds.
- Speech is understood by familiar listeners most of the time.
- Often asks for or directs attention to objects by naming them.
3 – 4 years
- Talks about activities at school or at friend’s homes.
- People outside the family usually understand child’s speech.
- Uses a lot of sentences that have 4 or more words.
- Usually talks easily without repeating syllables or words.
4 – 5 years
- Voice sounds clear like other children’s.
- Uses sentences that give lots of details (e.g., “I like to read my books”).
- Tells stories that stick to topic.
- Communicates easily with other children and adults.
- Says most sounds correctly except a few like l, s, v, z, j, ch, sh, th.
- Uses the same grammar as the rest of the family. |
The Ordovician period (500 to 440 million years ago) comes after the Cambrian in the early Paleozoic era. The period is named for a Celtic tribe named the Ordovices who once lived in the area of Wales (in Britain) where the rocks were first studied. Ordovician limestones are over 6.4 kilometers (4 miles) thick in places and are found on all continents except Antarctica. The uniformity and thickness of these beds indicate a long period of warm, stable climate that allowed them to develop.
In fact, the Ordovician period was as remarkable for the diversity of its species as the Cambrian period was for the appearance of most major phyla. A burst of evolutionary creativity in shape, size, and function tripled the number of marine species that appeared. Specialization became the dominant theme of life, with new forms filling every possible niche.
The appearance of highly efficient predators such as the nautiloids and the lobster-size sea scorpions forced the marine community to evolve protective strategies or disappear. Various species responded by developing larger size, thicker shells, or more elaborate defenses. A proliferation in the shapes of the shells of bivalve mollusks allowed them to burrow deeply into sand or mud. Other mollusks learned to swim freely by rapidly clapping their valves together. And still others developed intricate teeth-and-socket arrangements that allowed them to close so tightly that they were almost impossible to open.
Exploring the oceans of the Ordovician world would have been quite similar to exploring the oceans of today. Sea urchins, starfish, and sea lilies lived in profusion among the rocks. The first great coral reefs appeared and gave shelter to crustaceans of all kinds. Sea mats, sea snails, and sea cucumbers abounded in the tide pools. A huge diversity of bivalve mollusks made their slow way across the muddy ocean floor, leaving their tracks and burrows in the fossil record.
[Table omitted: Era / Period / Epoch / Million Years Before Present]
The very first primitive fishes appeared, slow and heavily armored, lacking jaws and paired fins. These agnathans (jawless fishes) were the first animals to have a notochord (a flexible supporting rod), a precursor of a true spinal cord. These chordates were the ancestors of all animals with backbones.
While almost all animals of the Ordovician were marine, another remarkable occurrence is recorded in the rocks of northwest England. There, arthropods (animals with jointed legs) that lived in shallow, freshwater pools left the first tracks in fossilized mud. Scientists speculate that evaporation of their pools forced these centipede-like creatures to adapt to terrestrial conditions. From this point on, the arthropods, a group that includes insects, spiders, and crabs, ruled the land for 40 million years.
The massive Ordovician limestone ends abruptly with a jumble of glacial till, indicating an ice age that so disrupted Earth's climate that more than half of all species became extinct. This first great extinction wiped out huge numbers of trilobites, with their precise and sensitive eyes, brachiopods, crinoids, and other marine invertebrates. The life-forms that survived the cataclysmic end of the Ordovician contributed to the genetic makeup of the animal kingdom to the present.
see also Geological Time Scale.
In geologic time, the Ordovician Period, the second period of the Paleozoic Era, covers the time roughly 505 million years ago (mya) until 438 mya. The name Ordovician derives from that of the Ordovices, an ancient British tribe.
The Ordovician Period spans three epochs. The Lower Ordovician Epoch is the most ancient, followed in sequence by the Middle Ordovician Epoch, and the Upper Ordovician Epoch. The Ordovician Period is divided chronologically (from the most ancient to the most recent) into the Tremadocian, Arenigian, Llanvirnian, Llandeilian, Caradocian and Ashgillian stages.
Much of the continental crust that exists now had already been formed by the time of the Ordovician Period, and the forces driving plate tectonics actively shaped the fusing continental landmasses. Near the margins of the continental landmasses, extensive orogeny (mountain building) allowed the development of mountain chains.
The fossil record provides evidence to support the demarcation of the preceding Cambrian Period from the Ordovician Period. Drastic changes of sea levels resulted in massive extinctions among marine organisms. In accord with a mass extinction, many fossils dated to the Cambrian Period are not found in Ordovician Period formations.
The fossil record establishes that vertebrates existed during the Ordovician Period. As with the Cambrian Period, the Ordovician Period ended with a mass extinction of nearly a third of all species. This mass extinction, approximately 438 mya, marked the end of the Ordovician Period and the start of the Silurian Period.
Although there is no evidence of an occurrence equivalent to the K-T event, it is possible that an impact from a large meteorite may have been responsible for the mass extinction marking the end of the Cambrian Period and start of the Ordovician Period. Impact craters dating to the Ordovician Period have been identified in Australia.
See also Archean; Cenozoic Era; Cretaceous Period; Dating methods; Devonian Period; Eocene Epoch; Evolution, evidence of; Fossils and fossilization; Historical geology; Holocene Epoch; Jurassic Period; Mesozoic Era; Miocene Epoch; Mississippian Period; Oligocene Epoch; Paleocene Epoch; Pennsylvanian Period; Phanerozoic Eon; Pleistocene Epoch; Pliocene Epoch; Proterozoic Era; Quaternary Period; Tertiary Period; Triassic Period |
What is Fifth Disease?
Fifth disease is a mild rash illness that usually affects children. Fifth disease is caused by a virus called parvovirus B19 that lives in the nose and throat and can be spread from person to person.
The first stage of the illness consists of headache, body ache, sore throat, low-grade fever, and chills. These symptoms last about 2 to 3 days and are followed by a second stage, lasting about a week, during which the person has no symptoms at all. In children, the third stage involves a bright red rash on the cheeks which gives a "slapped cheek" appearance. This may be followed by a "lacy" rash on the trunk and arms and legs. The rash begins 17 to 18 days after exposure. The rash may appear on and off for several weeks with changes in temperature, sunlight, and emotional stress. Adults may not develop the third-stage rash but may experience joint pain, particularly in the hands and feet. The disease is usually mild and both children and adults recover without problems. However, in rare situations some people, especially those with blood disorders such as sickle cell anemia, may develop more severe symptoms.
Who gets it and how?
Children and adults can get parvovirus B19. When an infected person coughs, sneezes, or speaks, the virus is sprayed into the air. These contaminated droplets can then be inhaled or touched by another person.
How is it treated?
There is no specific treatment for Fifth disease. Health care providers may suggest treatment to relieve some symptoms. There is no vaccine to prevent Fifth disease.
Must your child stay home?
Children with Fifth disease do not have to stay home. By the time they are diagnosed with the rash, they are no longer contagious.
What should you do?
Watch for the symptoms of Fifth disease and call your child's physician if a rash occurs.
Always be careful about hand washing, especially after touching discharge from the nose and throat and before eating or handling food.
Notify school in writing if your child has Fifth disease. |
Nutrition is important because it helps individuals attain optimal health throughout life, according to the National Health and Medical Research Council of the Australian Government. Eating a balanced diet improves a person's health and well-being and reduces risks of major causes of death.
Food is a source of energy, vitamins, minerals, protein and essential fats needed by the body to live, grow and function properly, says the NHMRC. People need a large variety of different foods to obtain sufficient amounts of nutrients for a healthy body. With good nutrition, people can reduce the risk of numerous diet-related diseases, including coronary heart disease, stroke, hypertension, obesity, osteoporosis and nutritional anemia.
Proper nutrition plays a key role in maintaining a healthy lifestyle, notes the President's Council on Fitness, Sports and Nutrition. A person's daily food choices considerably affect his or her overall health. When combined with physical activity, a healthy diet can help people keep a healthy weight and reduce the risk of chronic diseases. Unhealthy diets have contributed to the obesity epidemic in the United States: as of 2011, around 33.8 percent of U.S. adults were obese. Poor diet is closely linked to major health risks even for people with a healthy weight. Given the significant connection between good nutrition and healthy weight, it is important for people to make smart food choices. |
“We’re not special” is not a scientific statement. It’s an opinion. It’s a simple way of expressing what has become known as the Copernican principle. Copernicus himself did not explicitly hold this when he realized Earth was likely not the physical center of the universe. In the generation after Copernicus, isolated thinkers like Giordano Bruno interpreted this as a demotion of Earth. It did not become such a widespread view until recently.
Carl Sagan asked in the original Cosmos, “Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people.” Sagan had requested that, as Voyager 1 left the solar system in 1990, it turn its camera around to take one last picture of Earth. Presenting this image before an audience at Cornell, he reflected on all the people that have lived on Earth, “a mote of dust suspended in a sunbeam.” His speech is worth reading in full. Interwoven in the beautiful prose, however, is the assumption that the Earth is unimportant since it’s just a dot in this image.
Just because the Earth is small when it is compared to the vastness of space does not make it insignificant. Assumptions of the homogeneity and isotropy of the universe lead to accurate scientific cosmological models. Making the step from these principles to the Copernican principle, however, is an unjustified ideological leap.
The fact that “the Earth is the only world known so far to harbor life” rather speaks to me of its importance. If the Earth were not as small as it is, then its surface gravity would be too strong for life as we know it. If the Earth didn’t orbit a G-type star, the circumstellar habitable zone could easily be less stable. If our Sun were a member of an elliptical galaxy or closer to the center of our spiral galaxy, or even in the spiral arms rather than between them, interactions with other stars would be more likely, which could more easily disrupt the stability of our solar system. So many places in the universe are inhospitable to life. We live on a planet where life is possible. I think that is enough to challenge Sagan’s claim that we are deluded to think “we have some privileged position in the universe.” Authors Jay W. Richards and Guillermo Gonzalez of The Privileged Planet challenge Sagan and the Copernican principle with a thesis that not only is our planet well suited for life, but it is also well suited for discovery. Although I am unsatisfied with its presentation of the “intelligent design” view as scientific, I highly recommend watching the thought-provoking documentary and even reading the book.
The fact that in the Pale Blue Dot image the Earth lies in a sunbeam is indeed, as Sagan says, an effect of geometry and optics. Just from that, however, there is no way to prefer one philosophical view over another. The worldviews that clash over the Earth’s significance or insignificance are entrenched enough that I do not expect any amount of scientific discovery to settle the dispute. The science we discover certainly inspires wonder, and we need to keep doing science. We also need to reason more rigorously and integrate the different ways we can come to know things. By doing this, we can sort fact from opinion and identify any unjustified leap before falling for a conclusion that could lead us down a dark or deluded path.
I believe it is pertinent to close this post with a related quote from St. John Paul the Great: “If knowledge of the unmeasured dimensions of the cosmos has erased the dream that our planet or our solar system could be the physical center of the world, not by that is man diminished in his dignity” (translation from Italian). |
Biomass energy is as old as a caveman's fire, and it remains an important source of renewable energy worldwide. Despite its ancient use as a source of heat and energy, however, many people don't know what biomass energy really means, or where biofuels come from.
What Is Biomass Energy?
Biomass energy is any kind of energy that uses a biological organism (plant or animal) as its source.
Because the definition of biomass is so broad, fuels that can be considered "biomass" include a wide variety of items and researchers are discovering new biomass energy sources all the time.
Animal manure, landfill waste, wood pellets, vegetable oil, algae, crops like corn, sugar, switchgrass and other plant material -- even paper and household garbage -- can be used as a biomass fuel source.
Biomass fuel can be converted directly into heat energy through combustion, like the burning of a log in a fireplace. In other cases, biomass is converted into another fuel source; examples include ethanol gasoline made from corn or methane gas derived from animal waste.
How Practical Is Biomass Energy?
Roughly three to four percent of America's energy comes from biomass, while 84 percent comes from fossil fuels like natural gas, coal, and petroleum. Clearly, biomass has a long way to go before it's widely accepted as a source of energy.
Despite these challenges, there are many advantages to the growing use of biomass energy. One obvious advantage that biomass fuels have over other energy sources is that biomass is renewable: We can grow more plants, but nobody can make more oil.
Another advantage is that some sources of biomass, like manure, sawdust, and landfill garbage, use a fuel source that would otherwise go to waste. These sources, therefore, reduce our dependence on fossil fuels and nuclear energy while also reducing the negative impacts -- noise, smell, vermin, declines in property values -- that are associated with landfills.
Biomass Energy and the Environment
Biomass is a source of renewable energy that can be replenished at each crop cycle, wood harvest or manure pile -- but it isn't perfect. Because it comes from a variety of sources, biomass fuel isn't always consistent in quality or energy efficiency, and there isn't yet a well-developed network of biomass refineries and distributors like there is for gasoline and natural gas.
Additionally, the burning of biomass fuels, like the burning of fossil fuels, produces potentially dangerous pollutants like volatile organic compounds, particulate matter, carbon monoxide (CO) and carbon dioxide (CO2). CO2 is a greenhouse gas that is one of the leading causes of global warming and climate change.
The renewable nature of biomass energy, however, can greatly reduce this environmental impact. While burning biomass releases carbon monoxide and CO2 into the atmosphere, trees and plants that are grown as a biomass energy source also capture carbon from the atmosphere during photosynthesis. This process is often called "carbon sequestering" or "carbon banking."
Is Biomass Energy Eco-Friendly?
There's some controversy over the cost-benefit balance of biomass energy and carbon sequestering.
Some analysts have found that the atmospheric carbon (CO and CO2) released when biomass fuels are burned is roughly equal to the carbon stored in trees and plants grown on biomass "plantations." This analysis makes biomass energy essentially carbon neutral and environmentally friendly.
Other experts, however, have found that industrial-scale biomass energy development is wreaking havoc on the natural environment and on air quality. Greenpeace has published a report, "Fueling a Biomess," that finds large-scale growth in biomass energy has extended beyond waste sources like sawdust and paper mill waste, and whole trees and other important forest habitat are now being destroyed:
"Canada alone releases approximately 40 megatons of CO2 emissions annually from forest bioenergy production, an amount that exceeds the tailpipe emissions of all 2009 Canadian light-duty passenger vehicles.
The CO2 emitted will harm the climate for decades before being captured by re-growing trees."
The Future of Biomass Energy
Though it's an ancient source of energy, biomass energy still has a long way to go before it replaces other energy sources like fossil fuels and nuclear energy.
Nonetheless, the home fireplace isn't going away, and a diversified energy policy is likely to be the best strategy for energy security in the 21st century. As researchers at Oak Ridge National Laboratory have stated, "Studies suggest that the optimal [biomass] strategy will be different from place to place, determined by the quality of the land, its current uses, competing uses, and the demands for energy."
Discover more information on biomass energy sources by reading these articles on wood and woody biomass, waste-to-energy, biogas and liquid biofuels. |
Punic Wars, 264 B.C.E. to 146 B.C.E.:
The three Punic Wars between Carthage and Rome took place over nearly a century, beginning in 264 B.C. and ending with the destruction of Carthage in 146 B.C. By the time the First Punic War broke out, Rome had become the dominant power throughout the Italian peninsula, while Carthage–a powerful city-state in northern Africa–had established itself as the leading maritime power in the world.
Summary of the three Punic Wars:
The first Punic war was fought to establish control over the strategic islands of Corsica and Sicily. In 264 the Carthaginians intervened in a dispute between the two principal cities on the Sicilian east coast, Messana and Syracuse, and so established a presence on the island.
A quick fun read about the First Punic War:
Video Summary of First Punic War:
The Second Punic War broke out in 218 B.C.E. when Hannibal took control of the Greek city and Roman ally, Saguntum (in Spain). Rome thought it would be easy to defeat Hannibal, but Hannibal was full of surprises, including his manner of entering the Italian peninsula from Spain.
Text of the Second Punic War:
Video of Second Punic War:
Hannibal: Known as one of the greatest generals of ancient times.
Battle of Cannae: Major Battle of the Second Punic War.
Third Punic War: 149 B.C.E. to 146 B.C.E.
The third of three wars between the Roman Republic and the Carthaginian (Punic) Empire that resulted in the final destruction of Carthage, and the enslavement of its population.
Useful text explaining the Third Punic War: |
Create Your Own Fog
Type of Lesson: Hands-on Activity
Time Needed: 20 minutes
MEGOSE EAW2 Describe weather conditions and climates.
MEGOSE EAW6 Describe patterns of changing weather and how they are measured.
MEGOSE EAW10 Explain and predict general weather patterns and storms.
Quick Summary of Lesson
Students form fog in a jar. This lesson would be a great intro to looking more deeply at fog during a weather unit.
1. Fill the jar halfway with hot water.
2. Place the wire mesh on top of the jar's mouth. Then place 3-4 ice cubes on top of the wire mesh.
3. After a few minutes, observe the fog that will form inside the jar.
Notes to the Teacher
The reason this activity works is this: the hot water evaporates into the jar. The ice cubes create a cool air mass at the top of the jar. This cool air causes the evaporated water to condense again, so that we see fog. Of course, the evaporated water does need condensation nuclei in order to condense (dust, aerosols, etc), but that's usually not lacking in a normal classroom!
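For teachers who want the underlying physics in numbers: fog appears when the chilled air at the top of the jar drops below the dew point of the nearly saturated air rising off the hot water. The short sketch below estimates a dew point using the Magnus approximation; the example temperature and humidity are assumed values for illustration only.

```python
import math

def dew_point_c(temp_c, relative_humidity):
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.62, 243.12   # Magnus coefficients for water vapor over liquid water
    gamma = math.log(relative_humidity / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Air just above hot water is warm and nearly saturated, so its dew point is
# close to its own temperature -- any cooling (the ice) forces condensation.
print(round(dew_point_c(40.0, 95.0), 1), "deg C")
```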
Need More Information? Try Using Windows to the Universe
Please use these links for further ideas or more information:
Earth sky conditions
Weather crossword puzzle
Last modified prior to September, 2000 by the Windows Team
The source of this material is Windows to the Universe, at http://windows2universe.org/ from the National Earth Science Teachers Association (NESTA). The Website was developed in part with the support of UCAR and NCAR, where it resided from 2000 - 2010. © 2010 National Earth Science Teachers Association. Windows to the Universe® is a registered trademark of NESTA. All Rights Reserved. Site policies and disclaimer. |
Bats have been the source of many myths and fears over the years. Dispelling these myths and fears is as simple as knowing the facts.
North American bats are invaluable natural resources. As primary predators of night-flying insects, bats play a vital role in maintaining the balance of nature. A single little brown bat can catch hundreds of mosquitoes in an hour. Bats that frequent bat houses eat insects that could damage crops, such as cucumber and June beetles, stink bugs, leafhoppers and corn worm moths. Most likely to inhabit bat houses are little brown bats, big brown bats, eastern pipistrelle and the eastern long-eared bat.
• Providing bat houses can help build the populations of many valuable bat species. Houses furnish places for bats to roost, hibernate, and raise young when natural sites are not available.
• Little brown bats, while hibernating, can reduce their heart rate to 20 beats per minute and can stop breathing for 48 minutes at a time. Little brown bats can hibernate for more than seven months if left undisturbed.
• Desert ecosystems rely on nectar-feeding bats as primary pollinators of giant cacti.
• A nursing little brown bat mother can eat more than her body weight nightly (up to 4,500 insects).
• Less than 1% of bats contract rabies, and they usually bite only in self-defense.
• A mother Mexican Free-tailed Bat can produce more than five times as much milk as an average Holstein cow.
• Almost 40% of American bat species are threatened or endangered.
• The loss of bats contributes to an imbalance in nature that helps cause increased use of toxic pesticides that threaten our health and environment.
In the northern two thirds of the U.S. and Canada, most bats migrate south in the winter. Most bats that inhabit bat houses will move to caves, or mines. Tree roosting bats will fly south.
Bats find houses by sight. If a house is in the proper location, meets the requirements, and is needed, the bats will move in on their own.
The majority of bats that use houses are females using the house as nurseries.
Bat boxes should be hung at least 15’ above the ground -- the higher, the better. Research shows that they are more successful if they have at least 8 hours of sun; the morning sun is most important. Bat houses should face south or southeast. In northern areas, the top third of the house should be painted brown or black with a water-based latex paint to aid in warming the box. In southern parts of the country, the boxes can be painted white with water-based latex paint if there is too much direct sun. Bat houses mounted 20’ away from trees are inhabited twice as quickly as those in wooded areas. |
- Define acid.
- Name a simple acid.
There is one other group of compounds that is important to us—acids—and these compounds have interesting chemical properties. Initially, we will define an acid as an ionic compound of the H+ cation dissolved in water. (We will expand on this definition in Chapter 12 “Acids and Bases”.) To indicate that something is dissolved in water, we will use the phase label (aq) next to a chemical formula (where aq stands for “aqueous,” a word that describes something dissolved in water). If the formula does not have this label, then the compound is treated as a molecular compound rather than an acid.
Acids have their own nomenclature system. If an acid is composed of only hydrogen and one other element, the name is hydro- + the stem of the other element + -ic acid. For example, the compound HCl(aq) is hydrochloric acid, while H2S(aq) is hydrosulfuric acid. (If these acids were not dissolved in water, the compounds would be called hydrogen chloride and hydrogen sulfide, respectively. Both of these substances are well known as molecular compounds; when dissolved in water, however, they are treated as acids.)
If a compound is composed of hydrogen ions and a polyatomic anion, then the name of the acid is derived from the stem of the polyatomic ion’s name. Typically, if the anion name ends in -ate, the name of the acid is the stem of the anion name plus -ic acid; if the related anion’s name ends in -ite, the name of the corresponding acid is the stem of the anion name plus -ous acid. Table 3.9 “Names and Formulas of Acids” lists the formulas and names of a variety of acids that you should be familiar with. You should recognize most of the anions in the formulas of the acids.
Table 3.9 Names and Formulas of Acids
[Table 3.9 omitted. Note: the “aq” label is omitted for clarity.]
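The two naming rules above are mechanical enough to express as a short program, which can serve as a self-check. The sketch below encodes them for a handful of familiar anions; the stem table is an illustrative subset we chose, not a complete nomenclature system.

```python
# Map each anion to (name stem, kind): "binary" for hydrogen + one element,
# "ate"/"ite" for polyatomic ions, following the rules in the text above.
ANION_STEMS = {
    "Cl":  ("chlor",  "binary"),   # HCl(aq)   -> hydrochloric acid
    "S":   ("sulfur", "binary"),   # H2S(aq)   -> hydrosulfuric acid
    "NO3": ("nitr",   "ate"),      # HNO3(aq)  -> nitric acid
    "NO2": ("nitr",   "ite"),      # HNO2(aq)  -> nitrous acid
    "SO4": ("sulfur", "ate"),      # H2SO4(aq) -> sulfuric acid
    "SO3": ("sulfur", "ite"),      # H2SO3(aq) -> sulfurous acid
}

def acid_name(anion):
    stem, kind = ANION_STEMS[anion]
    if kind == "binary":           # hydro- + stem + -ic acid
        return "hydro" + stem + "ic acid"
    if kind == "ate":              # -ate anion -> stem + -ic acid
        return stem + "ic acid"
    return stem + "ous acid"       # -ite anion -> stem + -ous acid

for anion in ANION_STEMS:
    print(anion, "->", acid_name(anion))
```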
Name each acid without consulting Table 3.9 “Names and Formulas of Acids”: HBr(aq) and H2SO4(aq).
- As a binary acid, the acid’s name is hydro- + stem name + -ic acid. Because this acid contains a bromine atom, the name is hydrobromic acid.
- Because this acid is derived from the sulfate ion, the name of the acid is the stem of the anion name + -ic acid. The name of this acid is sulfuric acid.
Name each acid: HF(aq) and HNO2(aq).
- hydrofluoric acid
- nitrous acid
All acids have some similar properties. For example, acids have a sour taste; in fact, the sour taste of some of our foods, such as citrus fruits and vinegar, is caused by the presence of acids in food. Many acids react with some metallic elements to form metal ions and elemental hydrogen. Acids make certain plant pigments change colors; indeed, the ripening of some fruits and vegetables is caused by the formation or destruction of excess acid in the plant. In Chapter 12 “Acids and Bases”, we will explore the chemical behaviour of acids.
Acids are very prevalent in the world around us. We have already mentioned that citrus fruits contain acid; among other compounds, they contain citric acid, H3C6H5O7(aq). Oxalic acid, H2C2O4(aq), is found in spinach and other green leafy vegetables. Hydrochloric acid not only is found in the stomach (stomach acid) but also can be bought in hardware stores as a cleaner for concrete and masonry. Phosphoric acid is an ingredient in some soft drinks.
- An acid is a compound of the H+ ion dissolved in water.
- Acids have their own naming system.
- Acids have certain chemical properties that distinguish them from other compounds.
Give the formula for each acid.
a) perchloric acid
b) hydriodic acid
2. Give the formula for each acid.
a) hydrosulfuric acid
b) phosphorous acid
3. Name each acid.
4. Name each acid.
5. Name an acid found in food.
6. Name some properties that acids have in common.
Answers
a) hydrofluoric acid
b) nitric acid
c) oxalic acid
oxalic acid (answers will vary) |
According to Madeleine Henry's work on Aspasia of Miletus, Prisoner of History, an Alcibiades was ostracized from Athens in 460 B.C. This Alcibiades was the grandfather of the far more famous Alcibiades notorious for his behavior during the Peloponnesian War. Once ostracized, Athenians went into exile. Alcibiades (grandpère) may have spent his exile in Miletus, in Ionia (modern Asia Minor), where he met and married the older daughter of Axiochus of Miletus. Ten years later, at the expiration of his sentence of exile, Alcibiades, his wife, and two sons returned to Athens, along with his young, orphaned sister-in-law, Aspasia.
Female Metics and Citizens
During Alcibiades' exile, Athens passed the Periclean Citizenship Law (451/450 B.C.). According to this new law, no children born to a citizen (by definition, a male) with a foreign born (metic) wife could be Athenian citizens. Her children would be considered illegitimate, making her no better than a concubine. The law wasn't retroactive, so Alcibiades' sons from a marriage made before 451 B.C. were counted as legitimate even though their mother was a metic, but since Aspasia, also a metic in Athens, had not been grandfathered into a pre-Citizenship Law marriage, her marriage prospects were suddenly limited.
Career Choices for Women in Ancient Athens
What choices for relationships did Aspasia of Miletus have? To be a concubine? A prostitute? A madam? Aspasia was accused of all of these. But she was thwarted in the normal aspirations and expectations [see Jill Kleinman article] for aristocratic women like herself whose primary responsibility was to produce legitimate offspring. Since Aspasia could not produce legitimate children, there was no reason for any Athenian male citizen [Brian Arkins, 1994. "Sexuality in Fifth-Century Athens," Classics Ireland Volume 1] to marry her. Thus, any sexual relationship Aspasia entered into could be viewed as improper. That she chose to enter into a relationship with the Athenian leader Pericles put her, too, in a position of power, but also a position particularly vulnerable to criticism.
Critics of Aspasia
- "Even Aspasia, who belonged to the
Socratic circle, imported large numbers of beautiful women, and Greece came to be filled with her prostitutes, as the witty Aristophanes notes in passing, when he says of the Peloponnesian War that Pericles fanned its terrible flame because of his love for Aspasia and the serving-maids who had been stolen from her by Megarians...."
Athenaeus - Deipnosophists
- Aspasia, some say, was courted and caressed by Pericles upon account of her knowledge and skill in politics. Socrates himself would sometimes go to visit her, and some of his acquaintance with him; and those who frequented her company would carry their wives with them to listen to her. Her occupation was anything but creditable, her house being a home for young courtesans. Aeschines tells us, also, that Lysicles, a sheep-dealer, a man of low birth and character, by keeping Aspasia company after Pericles's death, came to be a chief man in Athens. And in Plato's Menexenus, though we do not take the introduction as quite serious, still thus much seems to be historical, that she had the repute of being resorted to by many of the Athenians for instruction in the art of speaking.
Plutarch - Life of Pericles
Henry, Madeleine M., Prisoner of History
Aspasia Resources: Periclean Citizenship Law on Perseus, from Thomas R. Martin, An Overview of Classical Greek History from Mycenae to Alexander, 9.3.1 |
Scientists in Canada believe they have just identified the oldest dome-headed dinosaur of its kind.
After three specimens of a small, dog-sized dinosaur turned up near a provincial park in Alberta, Canada, a team of researchers investigated the fossils — only to find out they dated back at least 85 million years. The species, Acrotholus audeti, was an 85-pound, thick-skulled dinosaur. In fact, its skull was more than two inches thick, according to an article recently published in the journal Nature Communications.
The bones were found on a farm belonging to resident Roy Audet, and as a result, the species was partially named after the Canadian. Bone-headed dinosaurs, or thick-headed lizards, are often known as pachycephalosaurs in the scientific community.
David Evans, who led the expedition and is a curator at the Royal Ontario Museum, remarked on his findings to the BBC. “What’s interesting about Acrotholus is that it’s the oldest known pachycephalosaur from North America, and it might be the oldest known pachycephalosaur in the world.”
According to the Huffington Post, the fossils are about 5 million years older than the next known pachycephalosaur specimen found on the continent. Another pachycephalosaur was discovered in Mongolia, but it’s unclear which fossil is older.
So why is this find so remarkable in comparison to discoveries of its large and terrifying brethren? Huffington Post reports:
Given the diversity of small animals in modern times, researchers would expect to see that ancient ecosystems had a large share of tiny dinosaurs. But dinosaurs that weighed less than about 220 lbs. (100 kilograms) don’t fossilize well. Any bones that weren’t immediately scattered or weathered into dust were often washed away from the death site, leading to jumbled, confused fossil sites. Big beasts such as long-necked, bus-sized sauropods are easier to unearth.
Evans and his colleagues found that pachycephalosaur diversity has been significantly underestimated. “What Acrotholus does is it extends our knowledge of the anatomy of this group early in their evolution — and it’s actually important for understanding the evolution of pachycephalosaurs in general.”
The fossils of Acrotholus audeti will go on display at the Royal Ontario Museum in Canada later this month. |
Question: A beaker made of ordinary glass contains a lead sphere
A beaker made of ordinary glass contains a lead sphere of diameter 4.00 cm firmly attached to its bottom. At a uniform temperature of - 10.0°C, the beaker is filled to the brim with 118 cm3 of mercury, which completely covers the sphere. How much mercury overflows from the beaker if the temperature is raised to 30.0°C?
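One way to set the problem up: the overflow equals the expansion of the mercury plus the expansion of the lead sphere, minus the growth of the beaker's interior cavity (which expands as if it were solid glass). The sketch below uses typical textbook expansion coefficients for mercury, glass, and lead; these are assumed values, since the question itself does not supply them.

```python
import math

BETA_HG = 1.82e-4       # /degC, volume expansion coefficient of mercury (assumed)
ALPHA_GLASS = 9.0e-6    # /degC, linear expansion of ordinary glass (assumed)
ALPHA_LEAD = 29.0e-6    # /degC, linear expansion of lead (assumed)

dT = 30.0 - (-10.0)                          # temperature rise, degC
v_sphere = (4.0 / 3.0) * math.pi * 2.0**3    # lead sphere volume, r = 2.00 cm
v_mercury = 118.0                            # cm^3 of mercury
v_cavity = v_mercury + v_sphere              # beaker interior, filled to the brim

dV_mercury = v_mercury * BETA_HG * dT
dV_lead = v_sphere * (3 * ALPHA_LEAD) * dT      # volume coefficient = 3 * alpha
dV_cavity = v_cavity * (3 * ALPHA_GLASS) * dT   # the cavity grows like the glass

overflow = dV_mercury + dV_lead - dV_cavity
print(round(overflow, 3), "cm^3")   # roughly 0.8 cm^3 with these coefficients
```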
Preparing for School
We hope you find the following information helpful when preparing your child for school. The transition to school is greatly helped when children are well prepared.
Useful skills and attitudes for your children when starting school
- to be able to mix well and cooperate with others
- to be happy to have a try at new things
- to realise that making mistakes is part of learning
- to be able to dress and undress independently
- to be responsible for their own toileting
- to be able to follow directions
Other useful skills
- to be familiar with books and how to handle them
- to clear away after themselves and put things away in the right places
- to hold a pencil correctly
- to use lower case letters for writing
It also helps for children to be able to recognise and then write their own name and to have some understanding of early number concepts. However, most of all, it's important that children are happy to be independent and to 'have a go' at school activities with a positive outlook and smile. |
The settlement at 'Ain Ghazal was a village of farmers, hunters, and herders occupied between 7200 and 5000 B.C. during the Neolithic period (ca. 8500–4500 B.C.). Its inhabitants made objects for daily use,
such as stone tools and weapons, and objects that seem to have served symbolic functions, such as small clay figurines of animals and humans.
More sophisticated works of art have also been discovered at 'Ain Ghazal: large,
human-form statues and busts made of plaster, and faces in plaster, which had been modeled on human skulls. These unique finds have been uncovered, studied, and preserved at the Smithsonian Institution's Conservation Analytical Laboratory in suburban Washington, D.C.
This exhibition seeks to show the important steps in the process of recovery and preservation. In addition to wall texts and object labels, an illustrated brochure and an interactive computer program providing more
information are available within the exhibition space.
The objects in this exhibition have been lent by the Department of Antiquities of Jordan.
The brochure, interactive computer program, and Website were produced by the
Arthur M. Sackler Gallery in consultation with the Smithsonian Institution's Conservation Analytical Laboratory, and are supported by a grant from the James Smithson Society. |
|Unit system||astronomical units|
|1 pc in ...||... is equal to ...|
|metric (SI) units||3.0857×10¹⁶ m|
|imperial & US units||1.9174×10¹³ mi|
|astronomical units||2.0626×10⁵ au|
The parsec (symbol: pc) is a unit of length used to measure large distances to astronomical objects outside the Solar System. A parsec was defined as the distance at which one astronomical unit subtends an angle of one arcsecond, but it was redefined in 2015 to exactly 648000/π astronomical units. One parsec is equal to about 3.26 light-years (30 trillion km or 19 trillion miles) in length. The nearest star, Proxima Centauri, is about 1.3 parsecs (4.2 light-years) from the Sun. Most of the stars visible to the unaided eye in the night sky are within 500 parsecs of the Sun.
The parsec unit was probably first suggested in 1913 by the British astronomer Herbert Hall Turner. Named as a portmanteau of the parallax of one arcsecond, it was defined so as to make calculations of astronomical distances quick and easy for astronomers from only their raw observational data. Partly for this reason, it is the unit preferred in astronomy and astrophysics, though the light-year remains prominent in popular science texts and common usage. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe, including kiloparsecs (kpc) for the more distant objects within and around the Milky Way, megaparsecs (Mpc) for mid-distance galaxies, and gigaparsecs (Gpc) for many quasars and the most distant galaxies.
In August 2015, the IAU passed Resolution B2, which as part of the definition of a standardized absolute and apparent bolometric magnitude scale, included an explicit definition of the parsec as exactly 648000/π astronomical units, or approximately 3.0856775814913673×10¹⁶ metres (based on the IAU 2012 exact SI definition of the astronomical unit). This corresponds to the small-angle definition of the parsec found in many contemporary astronomical references.
History and derivation
The parsec is defined as being equal to the length of the longer leg of an extremely elongated imaginary right triangle in space. The two dimensions on which this triangle is based are its shorter leg, of length one astronomical unit (the average Earth-Sun distance), and the subtended angle of the vertex opposite that leg, measuring one arc second. Applying the rules of trigonometry to these two values, the unit length of the other leg of the triangle (the parsec) can be derived.
One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky. The first measurement is taken from the Earth on one side of the Sun, and the second is taken approximately half a year later, when the Earth is on the opposite side of the Sun. The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, which is formed by lines from the Sun and Earth to the star at the distant vertex. Then the distance to the star could be calculated using trigonometry. The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni.
The parallax of a star is defined as half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. Equivalently, it is the subtended angle, from that star's perspective, of the semimajor axis of the Earth's orbit. The star, the Sun and the Earth form the corners of an imaginary right triangle in space: the right angle is the corner at the Sun, and the corner at the star is the parallax angle. The length of the opposite side to the parallax angle is the distance from the Earth to the Sun (defined as one astronomical unit, au), and the length of the adjacent side gives the distance from the Sun to the star. Therefore, given a measurement of the parallax angle, along with the rules of trigonometry, the distance from the Sun to the star can be found. A parsec is defined as the length of the side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond.
The use of the parsec as a unit of distance follows naturally from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds (i.e. if the parallax angle is 1 arcsecond, the object is 1 pc from the Sun; if the parallax angle is 0.5 arcseconds, the object is 2 pc away; etc.). No trigonometric functions are required in this relationship because the very small angles involved mean that the approximate solution of the skinny triangle can be applied.
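As a minimal illustration of this reciprocal rule, here is a short Python sketch (the helper function is hypothetical, not from the original article):

```python
def parallax_to_parsecs(parallax_arcsec: float) -> float:
    """Distance in parsecs is simply the reciprocal of the parallax in arcseconds."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

print(parallax_to_parsecs(1.0))    # 1.0 pc, as in the text
print(parallax_to_parsecs(0.5))    # 2.0 pc, as in the text
# Proxima Centauri's measured parallax of about 0.768 arcseconds gives ~1.30 pc:
print(round(parallax_to_parsecs(0.768), 2))
```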
Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance. He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that stuck.
Calculating the value of a parsec
In the diagram above (not to scale), S represents the Sun, and E the Earth at one point in its orbit. Thus the distance ES is one astronomical unit (au). The angle SDE is one arcsecond (1/3600 of a degree) so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows:
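The diagram and the intermediate equation did not survive extraction, so the following reconstructs the standard small-angle calculation from the quantities just defined (a sketch, not necessarily the article's original typesetting):

```latex
\mathrm{SD} = \frac{\mathrm{ES}}{\tan 1''}
\;\approx\; \frac{1\,\mathrm{au}}{1''\ \text{in radians}}
= \frac{1\,\mathrm{au}}{\pi/(180 \times 3600)}
= \frac{648\,000}{\pi}\ \mathrm{au}
\;\approx\; 206\,264.8\ \mathrm{au}
```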
|Therefore, 1 parsec||≈ 206264.806247096 astronomical units|
|||≈ 3.085677581×10¹⁶ metres|
|||≈ 19.173511577 trillion miles|
|||≈ 3.261563777 light-years|
A corollary states that a parsec is also the distance from which a disc one astronomical unit in diameter must be viewed for it to have an angular diameter of one arcsecond (by placing the observer at D and a diameter of the disc on ES).
The length of the parsec adopted in IAU 2015 Resolution B2 (exactly 648000/π astronomical units) corresponds exactly to that derived using the small-angle calculation. This differs from the classic inverse-tangent definition by about 200 km, i.e. only after the 11th significant figure. Because the astronomical unit was defined by the IAU (2012) as an exact SI length in metres, the parsec now also corresponds to an exact SI length in metres. To the nearest metre, the IAU 2015 parsec is 30,856,775,814,913,673 m.
Usage and measurement
The parallax method is the fundamental calibration step for distance determination in astrophysics; however, the accuracy of ground-based telescope measurements of parallax angle is limited to about 0.01 arcseconds, and thus to stars no more than 100 pc distant. This is because the Earth's atmosphere limits the sharpness of a star's image. Space-based telescopes are not limited by this effect and can accurately measure distances to objects beyond the limit of ground-based observations. Between 1989 and 1993, the Hipparcos satellite, launched by the European Space Agency (ESA), measured parallaxes for about 100,000 stars with an astrometric precision of about 0.97 milliarcseconds, and obtained accurate distance measurements for stars up to 1000 pc away.
ESA's Gaia satellite, which launched on 19 December 2013, is intended to measure one billion stellar distances to within 20 microarcseconds, producing errors of 10% in measurements as far as the Galactic Centre, about 8000 pc away in the constellation of Sagittarius.

In popular culture, the parsec was misused as a unit of time in Star Wars Episode IV: A New Hope.
Distances in parsecs
Distances less than a parsec
Distances expressed in fractions of a parsec usually involve objects within a single star system. So, for example:
- One astronomical unit (au), the distance from the Sun to the Earth, is just under 5×10⁻⁶ parsecs.
- The most distant space probe, Voyager 1, was 0.00066 parsecs from Earth as of August 2016. It took Voyager 1 39 years to cover that distance.
- The Oort cloud is estimated to be approximately 0.6 parsecs in diameter.
Parsecs and kiloparsecs
Distances expressed in parsecs (pc) include distances between nearby stars, such as those in the same spiral arm or globular cluster. A distance of 1000 parsecs (3262 light-years) is commonly denoted by the kiloparsec (kpc). Astronomers typically use kiloparsecs to express distances between parts of a galaxy, or within groups of galaxies. So, for example:
- One parsec is approximately 3.26 light-years.
- Proxima Centauri, the nearest known star to Earth other than the Sun, is about 1.30 parsecs (4.24 light-years) away, by direct parallax measurement.
- The distance to the open cluster Pleiades is 130±10 pc (420±32.6 ly) from us, per Hipparcos parallax measurement.
- The centre of the Milky Way is more than 8 kiloparsecs (26,000 ly) from the Earth, and the Milky Way is roughly 34 kpc (110,000 ly) across.
- The Andromeda Galaxy (M31) is about 780 kpc (2.5 million light-years) away from the Earth.
Megaparsecs and gigaparsecs
Galactic distances are sometimes given in units of Mpc/h (as in "50/h Mpc", also written "50 Mpc h⁻¹"). h is a parameter in the range 0.5 < h < 0.75 reflecting the uncertainty in the value of the Hubble constant H for the rate of expansion of the universe: h = H / (100 km/s/Mpc). The Hubble constant becomes relevant when converting an observed redshift z into a distance d using the formula d ≈ (c/H) × z.
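A small Python sketch of how the h convention works in practice (the function name and example values are illustrative, not from the article; the formula holds only for redshifts much less than 1):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def hubble_distance_mpc(z: float, h: float = 0.7) -> float:
    """Low-redshift Hubble-law distance d ~ (c/H) * z, with H = 100*h km/s/Mpc."""
    H = 100.0 * h  # Hubble constant in km/s per Mpc
    return (C_KM_S / H) * z

# A redshift of z = 0.023 with h = 0.7 corresponds to roughly 100 Mpc:
print(round(hubble_distance_mpc(0.023, h=0.7)))  # ~98
```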
One gigaparsec (Gpc) is one billion parsecs — one of the largest units of length commonly used. One gigaparsec is about 3.26 billion light-years, or roughly 1/14 of the distance to the horizon of the observable universe (dictated by the cosmic background radiation). Astronomers typically use gigaparsecs to express the sizes of large-scale structures such as the size of, and distance to, the CfA2 Great Wall; the distances between galaxy clusters; and the distance to quasars.
- The Andromeda Galaxy is about 0.78 Mpc (2.5 million light-years) from the Earth.
- The nearest large galaxy cluster, the Virgo Cluster, is about 16.5 Mpc (54 million light-years) from the Earth.
- The galaxy RXJ1242-11, observed to have a supermassive black hole core similar to the Milky Way's, is about 200 Mpc (650 million light-years) from the Earth.
- The galaxy filament Hercules–Corona Borealis Great Wall, currently the largest known structure in the universe, is about 3 Gpc (10 billion light-years) across.
- The particle horizon (the boundary of the observable universe) has a radius of about 14.0 Gpc (46 billion light-years).
To determine the number of stars in the Milky Way, volumes in cubic kiloparsecs (kpc³) are selected in various directions. All the stars in these volumes are counted and the total number of stars statistically determined. The number of globular clusters, dust clouds, and interstellar gas is determined in a similar fashion. To determine the number of galaxies in superclusters, volumes in cubic megaparsecs (Mpc³) are selected. All the galaxies in these volumes are classified and tallied. The total number of galaxies can then be determined statistically. The huge Boötes void is measured in cubic megaparsecs.
In physical cosmology, volumes of cubic gigaparsecs (Gpc³) are selected to determine the distribution of matter in the visible universe and to determine the number of galaxies and quasars. The Sun is the only star in its cubic parsec (pc³), but in globular clusters the stellar density could be from 100 to 1000 per cubic parsec.
1 pc³ ≈ 2.938×10⁴⁹ m³
1 kpc³ ≈ 2.938×10⁵⁸ m³
1 Mpc³ ≈ 2.938×10⁶⁷ m³
1 Gpc³ ≈ 2.938×10⁷⁶ m³
1 Tpc³ ≈ 2.938×10⁸⁵ m³
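These figures follow directly from cubing the linear value; a quick cross-check in Python (a sketch using the approximate metre value quoted earlier in the article):

```python
PARSEC_M = 3.0857e16  # metres per parsec, approximate value from the text

for name, factor in [("pc", 1.0), ("kpc", 1e3), ("Mpc", 1e6), ("Gpc", 1e9), ("Tpc", 1e12)]:
    volume_m3 = (PARSEC_M * factor) ** 3  # cube the linear scale
    print(f"1 {name}^3 ~ {volume_m3:.3e} m^3")
# Prints 2.938e+49 for pc^3; each step up multiplies the volume by 10^9.
```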
- "Cosmic Distance Scales - The Milky Way". Retrieved 24 September 2014.
- Benedict, G. F.; et al. "Astrometric Stability and Precision of Fine Guidance Sensor #3: The Parallax and Proper Motion of Proxima Centauri" (PDF). Proceedings of the HST Calibration Workshop. pp. 380–384. Retrieved 11 July 2007.
- Dyson, F. W. (March 1913). "Stars, Distribution and drift of, The distribution in space of the stars in Carrington's Circumpolar Catalogue". Monthly Notices of the Royal Astronomical Society. 73: 334–342. Bibcode:1913MNRAS..73..334D. doi:10.1093/mnras/73.5.334.
There is a need for a name for this unit of distance. Mr. Charlier has suggested Siriometer ... Professor Turner suggests parsec, which may be taken as an abbreviated form of 'a distance corresponding to a parallax of one second'.
- Cox, Arthur N., ed. (2000). Allen's Astrophysical Quantities (4th ed.). New York: AIP Press / Springer. Bibcode:2000asqu.book.....C. ISBN 0387987460.
- Binney, James; Tremaine, Scott (2008). Galactic Dynamics (2nd ed.). Princeton, NJ: Princeton University Press. Bibcode:2008gady.book.....B. ISBN 978-0-691-13026-2.
- High Energy Astrophysics Science Archive Research Center (HEASARC). "Deriving the Parallax Formula". NASA's Imagine the Universe!. Astrophysics Science Division (ASD) at NASA's Goddard Space Flight Center. Retrieved 26 November 2011.
- Bessel, F. W. (1838). "Bestimmung der Entfernung des 61sten Sterns des Schwans" [Determination of the distance of the 61st star of Cygnus]. Astronomische Nachrichten. 16: 65–96. Bibcode:1838AN.....16...65B. doi:10.1002/asna.18390160502.
- International Astronomical Union, ed. (31 August 2012), "RESOLUTION B2 on the re-definition of the astronomical unit of length" (PDF), RESOLUTION B2, Beijing: International Astronomical Union,
The XXVIII General Assembly of International Astronomical Union recommends [adopted] that the astronomical unit be redefined to be a conventional unit of length equal to exactly 597870700 m, in agreement with the value adopted in IAU 2009 Resolution B2 149
- "Four Resolutions to be Presented for Voting at the IAU XXIX GA".
- Pogge, Richard. "Astronomy 162". Ohio State University.
- "Parallax Measurements". jrank.org.
- "The Hipparcos Space Astrometry Mission". Retrieved 28 August 2007.
- Turon, Catherine. "From Hipparchus to Hipparcos".
- "GAIA". European Space Agency.
- "Galaxy structures: the large scale structure of the nearby universe". Retrieved 22 May 2007.
- Mei, S.; Blakeslee, J. P.; Côté, P.; et al. (2007). "The ACS Virgo Cluster Survey. XIII. SBF Distance Catalog and the Three-dimensional Structure of the Virgo Cluster". The Astrophysical Journal. 655: 144. arXiv: . Bibcode:2007ApJ...655..144M. doi:10.1086/509598.
- Lineweaver, Charles H.; Davis, Tamara M. (2005-03-01). "Misconceptions about the Big Bang". Archived from the original on 2011-08-10. Retrieved 2016-02-04.
- Kirshner, R. P.; Oemler, A., Jr.; Schechter, P. L.; Shectman, S. A. (1981). "A million cubic megaparsec void in Bootes". The Astrophysical Journal. 248: L57. Bibcode:1981ApJ...248L..57K. doi:10.1086/183623. ISSN 0004-637X. |
Published on October 2, 2012 by Casey
The Ghost Dance War was an armed conflict in the United States which occurred between Native Americans and the United States government from 1890 until 1891. It involved the Wounded Knee Massacre wherein the 7th Cavalry massacred 153 Lakota Sioux, including women, children, and other noncombatants, at Wounded Knee on 29 December 1890. It ended when Sioux leader Kicking Bear surrendered on 15 January 1891.
In an effort to remind the nation of this incident and of the government's historical treatment of Native Americans, the American Indian Movement (AIM) occupied the Pine Ridge Reservation near Wounded Knee in protest against the federal government on 27 February 1973. A 71-day standoff between federal authorities and AIM ensued. The militants surrendered on 8 May 1973.
The Ghost Dance was a Native American religious movement that arose in the late 1800s and was often practiced by the Sioux. It centered on a circle dance introduced by the Indian leader Wovoka, better known by his white name, Jack Wilson. Wilson was convinced that God had spoken to him and told him that by practicing the Ghost Dance, his people would wash the evil out of their lives and become impervious to disease, famine, and old age. The religion quickly spread throughout the West and among Native American tribes. White settlers, frightened by the dance's spiritual intensity, said it had a ghostly aura around it, hence the name. This started the push to bring US troops into the Dakotas, where the Sioux were most prominent and where the Ghost Dance was practiced the most.
In the winter of 1890, the Sioux had been upset over a series of treaty violations by the US involving land divisions among tribes in South Dakota. There were a series of skirmishes over this, but the biggest and most important was the Wounded Knee Massacre. The Sioux had encamped at Wounded Knee Creek and were handing over their weapons to US troops. One deaf Indian refused to give up his weapon, there was a struggle, and someone’s gun discharged into the air. One of the US commanders heard this and ordered his troops to open fire. What remained when the shooting stopped were 153 dead Indians (mostly women and children) and 25 dead US troops, most of the latter killed by friendly fire. There was a public uproar when word of this reached the eastern US, and the government reestablished the treaty it had broken with the Sioux to avoid further public backlash.
After the Wounded Knee Massacre there were several other small skirmishes involving the Sioux and the US government, but for the most part hostilities ceased, although tensions remain high to this day. Much to the dismay of Native Americans, twenty US troops were awarded the Medal of Honor for their actions on that day. Native Americans were outraged about this at the time and have pushed to get these medals rescinded, but nothing has been done to this point. In more recent years, there was a takeover of the Wounded Knee Memorial by militant protesters. There was a standoff with these protesters for several months, but they ended up surrendering peacefully. |
Please explain the relationship between a hypothesis and an experiment; how do independent and dependent variables differ?
The difference between a hypothesis and an experiment is that an experiment is a way to test a hypothesis. A hypothesis is a prediction. You predict that if you change one thing (the independent variable) the other thing (the dependent variable) will change. Then you do the experiment to find out if your hypothesis was right.
Here's an example:
Let's say that you think that something that is heavier will fall faster than something that is lighter. That's your hypothesis -- if we increase the weight of the object, it will fall faster.
The weight is the independent variable. The speed that it falls is the dependent variable. You are trying to test what impact the independent variable has on the dependent variable.
A hypothesis is an explanation for an observed phenomenon. A hypothesis is tested and then either proven or disproven. This is done by conducting an experiment. This is where the variables come in.
Independent variables and dependent variables allow the experimenter to have control over the experiment. Results are measured (quantitatively) and you are able to discover whether your hypothesis was proven or not.
An independent variable is what the experimenter changes in the experiment. This is necessary to perform the experiment. The dependent variable is dependent on the independent variable. In other words, it may change when the independent variable changes.
Scientists use the Scientific Method as a systematic approach to discovery. First, a tentative explanation is made to explain some phenomenon, this is called the hypothesis. Second, the hypothesis must be tested by the process we call experimentation. Simplicity is important to try and avoid influences of variables.
If the results of the experiment support the original hypothesis, it is provisionally accepted. If not, the hypothesis is rejected.
If other researchers can duplicate the results of the experiment, then the hypothesis is more widely accepted. A hypothesis that has gained a high level of confidence may come to be regarded as a law or theory.
A hypothesis can be likened to an educated guess of what's going to happen based on previous knowledge of the scientific principles at hand. An experiment tests out this hypothesis.
Independent variables are what you change. Dependent variables are what you measure.
If we were to conduct an experiment to test the relationship between temperature and density, we'd calculate the density of some test object in different temperature environments. In this case, temperature is the independent variable that we're manipulating. Density is the dependent variable that depends on the temperature and we are measuring it.
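To make the two roles concrete, here is a minimal Python sketch of that temperature-and-density experiment (the density values are illustrative numbers, roughly how water behaves, not real measurements from the answer):

```python
# Independent variable: the temperatures we deliberately set.
temperatures_c = [10, 20, 30, 40]

# Dependent variable: the density we measure at each temperature.
# Illustrative values only (approximately water, in g/cm^3).
measured_densities = [0.9997, 0.9982, 0.9957, 0.9922]

for temp, density in zip(temperatures_c, measured_densities):
    print(f"T = {temp} C -> measured density = {density} g/cm^3")
```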
|
A group of researchers have made a discovery that could eventually lead to the regrowth of teeth.
A research team in the group of Irma Thesleff at the Institute of Biotechnology in Helsinki, Finland recently discovered a marker for dental stem cells. The discovery was made after locating a transcription factor on the mouse front tooth.
The transcription factor Sox2 is specifically present in the stem cells of the mouse incisor. This tooth grows throughout one’s life thanks to the stem cells located at the base of the tooth.
The research team developed a way to record the movement, division and specification of these cells. Sox2-positive stem cells give rise to enamel-forming ameloblasts and the other cell lineages of the tooth.
Human teeth are similar to mouse teeth in that the mechanisms that regulate growth are the same, even though human teeth don’t grow continuously. That’s why this could be a pivotal discovery for tooth regeneration.
This finding, however, doesn’t necessarily mean that the ability to regenerate teeth is right around the corner. A detailed recipe is still needed, and many obstacles have so far prevented tooth regeneration. |
Activity Plan 3-4: Let's Make an Alphabet-Book
This activity leads to literacy skill building and lively conversation as children come together to create a class book.
- Grades: PreK–K
- chart paper and marker
- materials to make alphabet sheets: oaktag, glue sticks, several crayons. Prepare 26 sheets of paper with a letter of the alphabet written on the top of each sheet. Write both the upper- and lowercase letter on each sheet
- bookbinding materials including a hole punch or stapler, yarn or binder rings, and clear contact paper
- an alphabet book like It Begins With an A by Stephanie Calmenson (Scholastic Inc.)
Objective: Children will make a class book to develop their understanding of the alphabet, beginning letter sounds, and concept of print.
1 Read a familiar alphabet book to the class. Then, invite children to make a classroom alphabet book. Review the alphabet to help prepare them for the activity. Write each letter of the alphabet (upper- and lowercase) as they state it, down the left-hand side of a sheet of chart paper.
2 Ask children to think of a word that begins with each letter, starting with A. Say a few words that begin with the letter to help them to hear the first letter sound - apple, alligator, and ant. Record their words beside each letter.
3 Distribute the alphabet sheets. Explain to children that they can either draw pictures of things that begin with the letter or cut out pictures from a magazine, newspaper, or catalog. After distributing the sheets, ask each child to show the letter or letters she has for the activity.
4 Once children have completed the activity, ask them to share their work with each other. Invite them to work together to think of a title and make a front and back cover for their class book. Bind the pages together and cover the book with clear contact paper to preserve it. Place the book in the library or writing area for children to use.
Curriculum Connection: DRAMATIC PLAY
Take a Message, Please. Engage children in a conversation about the different ways people use writing in their homes. Record their responses on chart paper. Tell them that you will place writing materials, including paper and pencils and a dry erase message board, in the dramatic-play area so that they can make grocery lists, write recipes, take telephone messages, or leave notes for each other. During class recall time, invite children to show their classmates the different ways they incorporated writing into their play. Remember that many children will be imitating writing and will not really be able to write actual notes. |
Solar lighting is any type of lighting that uses radiation from the sun to illuminate a particular area. Since early history, humans have used solar lighting as their main method of illumination. Old laws, such as England's Prescription Act of 1832, established minimum rights to illumination through natural methods. In modern times, this type of lighting can take a variety of forms, from direct sunlight to highly controlled infrastructure installations.
The most common type of solar lighting is called daylighting. This uses passive technologies to light rooms or specific areas. Daylighting takes on a variety of forms, the most common being skylights that are placed in the middle of rooms. An ancient form of this is clerestory windows, which are commonly found in Romanesque and Gothic architecture. They are openings placed at the top of large structures that allow the sun to shine inside. This was very important as the economics of lighting churches and cathedrals with candles was unsustainable. A more modern installation is called the light tube. Light tubes use a series of mirrors which direct the path of sunlight into a room through a tube in a roof.
A recent innovation in this type of lighting is a technologically-hybridized system that uses mirrors which track the sun's movement and follow it as it changes position in the sky. Most of these installations use an optical fiber to transmit the light to the interior of the building. This method is used as a way to supplement or replace existing artificial lighting. The most efficient use of this hybrid system is in single-story buildings.
There are a variety of reasons to use an illumination system that utilizes solar lighting. Many studies have confirmed the health benefits of regular exposure to sunlight, whereas many forms of artificial light can actually cause health problems in some people. The human body converts solar radiation that hits the skin into vitamin D, an essential nutrient. Solar lighting also allows the body to get a higher dose of indirect rather than direct sunlight, giving one the benefits of sun exposure while minimizing the chances of skin cancer.
The majority of solar lighting transmits 50 percent of the direct sunlight it receives. This can offset the need for much of the artificial lighting used by modern society, both in the home and many work environments. A transition from artificial lighting to solar lighting would have obvious repercussions regarding energy consumption and money savings. The U.S. Department of Energy estimates that the average skylight placed in a home pays for itself within five years through lower electric bills. Solar lighting can also cut down on heating costs. During the winter, it keeps the area warmer by harnessing the sunlight. |
(HealthDay News) -- Injuries stemming from electric shock can lead to muscle, nerve and tissue damage, burns, and even cardiac arrest. Such injuries often are serious because the human body is an excellent conductor of electricity.
Duke University Medical Center says about 1,000 people die each year in the United States as a result of electric shock.
Here's a list of preventive tips:
- Use child safety plugs in all outlets.
- Keep electrical cords out of the reach of children.
- Teach children about the dangers of electricity.
- Follow manufacturer safety instructions when using electrical appliances.
- Avoid using electrical appliances when wet.
- Never touch electrical appliances while touching faucets or cold water pipes. |
First Aid: Coughing
Coughing is a healthy reflex that helps clear the airways. A severe or lingering cough requires medical treatment, but many coughs are caused by viruses that just need to run their course.
What to Do
- If your child develops a "barky" or "croupy" cough, sit in a steamy bathroom together for about 20 minutes.
- Offer plenty of fluids (breast milk or formula for babies; cool water and juice for older kids). Avoid carbonated or citrus drinks that may irritate a raw throat.
- Run a cool-mist humidifier in your child's bedroom.
- Use saline (saltwater) nose drops to relieve congestion.
- Never give cough drops (a choking hazard) to young kids or cough or cold medicine to kids under 2 years of age (consult a doctor first for older kids).
Seek Medical Care
If Your Child:
- has severe cough spasms or attacks, wheezing, or stridor (an almost-musical sound when inhaling)
- has a cough that lasts more than 3 weeks, gets worse, happens the same time every year, or seems caused by something specific (such as pollen, dust, pets, etc.)
- has a persistent fever
- is younger than 3 months old and has fever with the cough
- is breathing fast or working hard to breathe
- has a blue or dusky color in the lips, face, or tongue during or after coughing
Prevention
- Follow the doctor's treatment plan if your child has asthma or allergies.
- Avoid anyone who smokes or has a cold.
- Make sure your child gets the diphtheria-tetanus-acellular pertussis (DTaP) vaccine or combination booster (Tdap) on time.
- Wash hands well and often.
Note: All information on KidsHealth® is for educational purposes only. For specific medical advice, diagnoses, and treatment, consult your doctor.
|
What is Paso Partners - Integrating Mathematics, Science and Language: An Instructional Program?
Integrating Mathematics, Science and Language: An Instructional Program is
a two-volume curriculum and resources guide developed by Paso Partners - a partnership of three public schools, an
institution of higher education, and SEDL specialists.
On this page
- Assumptions Underlying the Materials
- Structure of the Guide
- Language Activities Related to Mathematics and Science Processes
- List and Recommended Sequence of K-3 Integrated Units

The resource is designed to help elementary school teachers organize their classrooms and instructional activities in order to increase achievement of Hispanic primary-grade children whose first language is not English. The guide offers a curriculum plan, instructional strategies and activities, suggested teacher and student materials and assessment procedures that focus on the acquisition of:
- higher-order thinking skills to apply newly learned knowledge and understanding of relations between mathematics and science concepts;
- knowledge, i.e., specific items of information and understanding of relevant concepts; and
- language to gain and communicate knowledge and understanding.
Motivational strategies and materials compatible with the students' own social and cultural environment are incorporated into the instructional materials to develop and enhance positive attitudes and values toward mathematics, science and language.

Spanish language translation: Accompanying each complete unit in English is a Spanish version of background information for the teacher, as well as a Spanish version of the formal introductory portion of the lesson cycle.
A number of assumptions about teaching and learning have guided the development
of the materials.
Assumptions about Learning
- All children, even the very young, learn mathematics and science concepts
by developing cognitive structures through interactions with the environment.
- In the process of learning mathematics and science, students "experience"
instructional activities as an integrated whole, i.e., as an affective, cognitive
and relevant activity.
- Language development is an integral aspect of the acquisition of
mathematics and science concepts and skills. It becomes an even greater factor in
cognitive growth and development for children whose first language is not the
same as the language of school instruction. Effective learning occurs when the
student acquires language in the context of academic instruction as well as in
- Children learn mathematics and science constructively, i.e., children build
or construct meaning by using their own experience and previous knowledge as a
- Children acquire language within the context of everyday experience.
Language concepts and skills are not learned in isolation, but rather as a
consequence of interaction within a setting that is compatible with the
experiential and cultural background of the students.
- Students construct concepts through experiences that involve using
manipulatives, pictures, verbal interactions and other models representing the
concepts to learn.
- Mental structures effectively develop through educational activities that
allow students to explore, investigate, apply and solve problems related to
"tentative constructs" that students modify during the learning process.
- In learning mathematics and science, as well as in acquiring and developing
language, the students assimilate experiences into a construct that is available
to them through subjective representation. However, the meaning of the
representation must be consistent with experience, with the meaning of related
constructs and with conventional meanings constructed by others.
Assumptions about Teaching
- The design and the implementation of an effective instructional activity include cognitive, affective and relevant aspects of the social and cultural context in which the science, mathematics and language concepts develop.
- Teachers help create effective and appropriate mathematics, science and
language constructs through a variety of approaches that include:
- spontaneous opportunities that provide and provoke suitable questions,
conflicts, material and explanations to induce inquiry;
- inductive and deductive sequences that provide students relevant examples to
help them extract the common features and important ideas of a concept or
- pragmatic or practical opportunities for students to grapple with and solve
real-world problems that students discuss with their peers and the teacher in
order to verify and affirm their thinking.
- To assist students in developing mathematics, science and language
constructs, teachers provide many carefully selected and structured examples that
facilitate abstraction of common features to form a concept. Also, teachers
present interesting and challenging problems. Teachers use manipulatives,
pictures, graphs and verbal interactions to support and encourage learning.
- Teachers facilitate acquisition of mathematics and science concepts by
children whose first language is not English through appropriate language
development strategies that assume a language-rich environment in which students
may use either the home language (e.g., Spanish) or English or both to
communicate knowledge and understanding.
- For children whose first language is not English, teachers give specific attention to the development of specific concepts (science and mathematics, in this case) within the overall context of both Spanish and English language development.
Structure of the Guide
The guide is bound into two volumes. Volume One contains materials for use in
Kindergarten and Grade One. Materials in Volume Two are for use with students in
Grades Two and Three. Depending on the students' academic backgrounds and local
curriculum expectations, the materials for each grade level may provide a full
academic year of instruction. Each volume contains an introductory section and
three units for each grade level.
Structure of each Unit
Each unit is designed to assist teachers in offering up-to-date science and
mathematics content, along with appropriate language usage, through teaching and
learning strategies that will excite children about the world of mathematics,
science and language. The selection and arrangement of the material is planned to
engage children's natural inquisitive nature and to stimulate them to
investigate, explore and learn. Teachers are helped to create dissonance in
familiar situations in order to stimulate questioning, hypothesizing, exploring
and problem solving.
Each unit contains three types of materials: (1) unit overview materials and
background information for the teacher, (2) the lessons and (3) an annotated
bibliography and list of teacher reference/resource materials.
Spanish language translation. Preceding each complete unit in English is a
Spanish version of background information for the teacher, as well as a Spanish
version of the formal introductory portion of the lesson cycle.
Unit overview materials and background information for the teacher. Presented
first in the unit is a recommended list of content and/or skills students should
have as Prior Knowledge before initiating unit activities. Next, Specific Mathematics, Science and Language Objectives are listed, followed by a Topic
Concept Web. The web shows relationships among the various science content
elements that teachers will present in the unit. In turn, the web prompts the
identification of two major ideas, one in science and one in mathematics, that
the class will develop in each lesson. It also encourages teachers to view
teaching as providing children opportunities to develop cognitive structures that
are more global and complex than those that students can demonstrate by
performance on objective-defined tasks. Therefore, the application, or
problem-solving, phase of the lessons takes on a specific character and increased
importance - it allows the student and the teacher to look for dimensions in
understanding that go beyond the level that can be universally required of all
students. There is no vertical or horizontal "cap" or "ceiling" in thinking that
circumscribes the students' progress.
Next is a list of key Vocabulary items, in both English and Spanish, that the
teacher will use in presenting the unit. The students will gain an understanding
of the terms and may incorporate some, or most, of them into their active vocabulary.
The Teacher Background Information section, which follows the Vocabulary section,
contains science and mathematics content. This content, also in both English and
Spanish, is provided as a ready reference for teachers to draw upon as they
implement the unit.
Next is The Lesson Focus that lists each of the Big Ideas presented in each of
the lessons. Each Big Idea is stated as an overarching concept, or principle, in
science and/or mathematics that generates the lesson activities. The Big Idea is
what each student is to construct. The construct has many other ideas that relate
to it, both in mathematics and science, thus forming a web of ideas. The
construct, however, develops within a language context - either in English or
Spanish - in order to formalize the concept. Once assimilated, the Big Idea
can facilitate students' future learning in related content areas. Thus, the Lesson
Focus, together with the array of objectives, gives the teacher a view of the
extent and direction of development of the Big Idea in each lesson.
Following The Lesson Focus is an Objectives Grid displaying the unit objectives
by content area and by lesson activity. Objectives, in and of themselves, cannot
dictate the scope of the instruction. Learning takes place when the students
"experience" instructional activities as an integrated whole, i.e., as an
affective, cognitive and relevant activity. Thus, the grid serves to provide
direction and indicators of student progress. The objectives are used to develop
assessment procedures by which to measure, in part, student achievement.
Each lesson design assists the teacher in developing the Big Ideas selected for a
given lesson. The term "lesson" as used in this guide means a set of activities
selected to teach the Big Ideas. It is not meant to convey the notion that the
material included in a "lesson" is to be taught within a single period of time on
any given day. One "lesson" may extend over several days.
Each lesson provides the instructional context and the activities for the
students to acquire the concepts, or build the constructs, contained in the
lesson's Big Ideas. The lesson does suggest a sequence in which to implement the
activities, but there is no "single" sequence or a given time limit in which to
present the unit. Indeed, a number of the units require previous preparation on
the part of the teacher, and in some cases on the part of the students. Some
units, for example, require the students to collect, organize and summarize data
and then to apply their findings. This process may require a period of three or
four weeks. Nonetheless, prior to initiating the unit, teachers should construct
an overall and day-to-day schedule for the implementation of the unit.
The lesson's content develops through a process that reflects a cycle. The
process moves through various phases of the learning cycle. Learning cycles to
facilitate the organization of science and mathematics instruction have been
proposed for some years; many cycles incorporate an inquiry approach to learning
with emphasis on problem solving. Typically, a learning cycle includes an
experimentation phase during which the learner actively experiments with concrete
materials to develop, or "construct", an idea. Although scholars vary in their
opinions as to the required nature, design and number of such phases, all include
at least three phases: experimentation, concept introduction and development, and application.
The Lesson Cycle
For the purpose of this guide, a five-phase lesson cycle has been employed:
- Encountering the Idea
- Exploring the Idea
- Getting the Idea
- Organizing the Idea
- Applying the Idea

Each phase of the cycle is described briefly below.
Encountering the Idea, or developing a "readiness" state, is the first phase in
the cycle. During this time the teacher provides a background, or enabling
structures, to facilitate the development of "new constructs." This phase of the
teaching cycle is important for students whose early childhood experiences may
not have been sufficiently varied to provide them with some of the necessary
underlying concepts on which to build the Big Ideas that the lesson promotes.
Therefore, this cycle shapes a backdrop on which to develop the new ideas.
Additionally, the readiness activities alert the students to the direction of the
lesson by providing provocative questions and conflicting situations designed to
bring the students into an exploration perspective.
Because language development is a fundamental co-requisite for learning
mathematics and science concepts, processes and skills, many of the lessons begin
with literature (e.g., oral stories, children's books) and discussion activities
that set the stage for posing questions and presenting conflicting situations
related to the mathematics and science Big Ideas that are the focus of the
lesson. The use of well-selected literature, in addition to being an effective
tool in language development, is an effective motivational strategy. Other
language development strategies are presented below in the section, Language
Activities Related to Mathematics and Science Processes.
Exploring the Idea, or experimentation, is the phase in which learners are
involved with concrete or familiar materials in activities designed to have them
encounter new information that they can assimilate in their attempt to find
responses to the questions posed earlier and/or to hypothesize a resolution to
the conflicting situation presented. During this stage, the learner explores the
new ideas through the use of materials in learning centers, with the teacher
providing relatively little structure. As students realize that there are new
ideas they have not dealt with previously and that produce some confusion, doubt
or interest, they discuss among themselves and with the teacher what these ideas
may mean. At this point, the teacher moves the students into the next phase of the cycle.
Getting the Idea, or concept introduction and development, is the phase in which
the teacher helps the learners assimilate and accommodate the new information
into a new structure that signifies the development of a new understanding. The
students begin to work with new words conveying the new concepts. They work with
new ideas in many different ways to ensure that a new idea is valid. The main
emphasis during this phase is to see what is happening. What do we know? How do
we know this is true? How can we explain this? Students may want to brainstorm
and ask related questions, or they may choose to go back to the exploration or
experimentation phase to validate the new ideas.
Organizing the Idea is the phase in which the students consciously consider the
new ideas in their own right. They attempt to understand a new idea as a whole.
New terminology, notation and symbols are introduced at this time. Students may
then express their ideas and opinions through a variety of activities.
During this phase, the students may relate the new ideas to associated ideas in
other areas of subject matter. They make new connections, generalizations and
abstractions. They may decide that the best manner to organize and communicate
the new ideas is through charts, tables, number sentences, graphs, diagrams or
verbal and written explanations. Thus, the information is organized in a logical
and quantitative manner. The students may report the results of their
experiments, observations, conclusions and interpretations to the class. Students
may do additional reading or listen to tapes. Once the students have
grasped the concepts, they are ready for the application phase of the lesson.
Applying the Idea is the phase in which students develop a broad grasp of the
concepts. In this phase the students relate the new ideas to their own world - to
something "real" - and to associated ideas in other areas of subject matter. They
are then able to solve problems and answer related questions. They may also
formulate their own problems.
Assessment of Student Achievement is ongoing on an informal basis throughout the
lesson through teacher observation of the students' interactions and behaviors.
Assessment strategies are provided in the final phase of each lesson or unit to
assist the teacher in determining the extent to which the students have grasped
the Big Ideas presented in a given lesson and/or unit.
Language Activities Related to Mathematics and Science Processes
Because language development is a fundamental co-requisite for learning
mathematics and science concepts, processes and skills, the lessons in many
instances begin with literature (e.g., stories, books) and discussion activities
that set the stage for posing questions and presenting conflicting situations
related to the Big Ideas in mathematics and science that are the focus of the lesson.

Language development strategies specifically related to mathematics and science
processes were incorporated into the lessons. Some examples of these are
described briefly below.
Sequencing. The students tell or write a story, indicating the sequence of events
by using ordinal numbers. They may also use such words as "then", "next", and
"finally" to show sequence. The students may take a nature walk around the school
and report their observations in order of occurrence.
Questioning. In the initial stage of a unit the students may list, in the form of
questions, information that they would like to have about the topic. As they
proceed through the unit and gather further information, they may record answers
to the questions that they formulated.
Comparing/contrasting. Students may design and make charts, graphs or diagrams
that compare or contrast two concepts. For example, the students may use Venn
diagrams to compare and contrast spiders with insects.
One-to-one correspondence/counting. In comparing objects, students use
comparative adjectives (e.g., "longer", "shorter", "bigger", "smaller"). In
comparing groups or sets in preparation for counting, the students begin to use
the notion of "more than" and "less than." In making these comparisons, they may
compare two groups physically by laying them side by side. In increasing the
accuracy of their statements, students can say, for example, "The tiger cage in
the zoo has three tigers, and the bear cage has six bears; the zoo has more bears
than tigers." They can put three tigers alongside six bears, show that the three
tigers are "tied" with three bears and that there are three extra bears. They
conclude that there are three more bears than tigers, and that six is three more
Predicting/hypothesizing. During the initial stage of a unit, and after the
students have listed the questions that they would like to answer, they
hypothesize answers or solutions to as many of the questions or problems as they
can. During the implementation of the unit, they explore hypotheses and confirm
or reject them as they gather evidence. The students verbalize their reasons for
confirming or rejecting the hypotheses.
Validating/persuading. During problem-solving sessions, the students study the
nature or character of the evidence they can use to confirm or reject a
hypothesis. They suggest reasons why in some cases one negative example is
sufficient to reject a hypothesis, while in other cases several positive examples
are not sufficient to confirm or reject a hypothesis.
Conferring. Students ask for a conference with the teacher and/or other students
to discuss or exchange opinions about an important, a difficult or a complex
matter. For example, a student is preparing to write in her journal but needs
clarification about an idea. She asks the teacher to meet her at the "conference
table" (which is inaccessible to other students for the duration of the
conference) in order to discuss her ideas prior to writing about them in her
journal. The student may ask that another student join the conference,
particularly if the students have done the work collaboratively. The student
initiates the conference, gives it direction and decides when the purpose of the
conference has been met. A student may also request a conference for the purpose
of assessing her achievement or progress.
List and Recommended Sequence of K-3 Integrated Units
Grade K and 1 Integrated Units
Plants and Seeds
The Human Body
Grade 2 and 3 Integrated Units
Sun and Stars |
All Souls’ Day was first instituted at the monastery in Cluny in 993 CE (Common Era) and quickly spread throughout the Christian world. People held festivals for the dead long before Christianity. It was Saint Odilo, the abbot of Cluny in France, who in the 10th century proposed that the day after All Saints’ Day be set aside to honor the departed, particularly those whose souls were still in purgatory. Today the souls of the faithful departed are commemorated. Although All Souls’ Day is observed informally by some Protestants, it is primarily a Roman Catholic, Anglican and Orthodox holy day.
The Day of the Dead celebrations can be traced back to the various indigenous groups, such as the Aztecs and other pre-Hispanic civilizations, from as far back as 3000 years ago. Skulls were collected and used during rituals to symbolize death and rebirth.
All Souls’ Day in the United States is dedicated to prayers for the dead. The Day of the Dead is also celebrated on this day. Many western churches annually observe All Souls’ Day on November 2 and many eastern churches celebrate it prior to Lent and the day before Pentecost.
|
Earth formed from heavy elements produced in at least one prior generation star, but there could be more than just one prior generation (It's very, very unlikely that all or nearly all the medium weight elements above lithium, in our solar system, came from just one older star, and pretty unlikely even that all the heavies above Iron were cooked up in just one supernova).
It's not a safe assumption that stars last an average of 10 Billion years. The most numerous types, red dwarfs, make up 80-90% of all stars, last a lot longer than that, and probably stay stable on the main sequence for 100-200 billion years (American Billions). They also shouldn't spread elements around much when they finally do leave the main sequence. Stars about the size of our Sun, spectral class G2, typically live about 10 Billion years, but make up only about 2% of stars. Big stars, type O, B, and A, burn more quickly, and it's possible to get enough hydrogen together for a star to burn through all its fuel and supernova in mere hundreds of thousands of years, or possibly even a blazing fast 10's of thousands. Those stars are rare, but they are so massive that even a few produce enough heavy elements and push enough gas around when they supernova, to create hundreds of sun sized and smaller stars and all the heavy elements to give such stars the solid, rocky planets we now think are practically ubiquitous.
The supernova explosions are a common source for two effects - heavy element formation, and compressive shock waves that trigger new star formation in nearby interstellar gas clouds. Many of these gas clouds are already enriched with heavy metals from previous supernovae. Spiral galaxies tend to get regions of new star formation, and quiet regions. But the high and low density regions in spirals like our Milky Way exist on larger scales than the star forming "nursery" clouds, and this is largely because gas clouds are not just compressed by novae - both the dense star forming clouds and very large but more diffuse clouds collide with other clouds, including clouds that were part of dwarf galaxies being captured by the big spirals. So, it's a partial coincidence - older generation stars have some influence on the shapes of spiral galaxy features, but dwarf galaxy capture has more, and the rare collisions of spirals with other big galaxies show just how much influence the large scale objects can have, producing wildly twisted galaxies such as
If anyone wants to read up on this sort of thing, please remember: because astronomers named them before they knew anything about why there were multiple distinct types of stars in the same mass ranges, Population II stars are actually older than Population I, and Population III older than II. A given population usually includes multiple generations of stars. As an exception, the very oldest massive stars, which novaed within the first million years or so after star formation began and produced so many heavy elements, are called Population III, and most probably represent just a single generation and possibly only the largest types.
and, for those people wanting more than just the Wikipedia versions, a little real source material:
The Middle Ear
The middle ear is an air-filled chamber that lies behind the eardrum. Pressure in the middle ear changes to match air pressure outside of the eardrum. When inside and outside pressures are balanced, the eardrum is flexible and normal hearing is more likely. Problems occur when air pressure in the middle ear drops. This is usually due to a block in the eustachian (u-STA-shun) tube, the narrow channel connecting the ear with the back of the throat.
An Open Tube
As the link between the middle ear and the throat, the eustachian tube has two roles. It helps drain normal, cleansing moisture from the middle ear. It also controls air pressure inside the middle ear chamber. When you swallow, the eustachian tube opens. This balances the air pressure in the middle ear with the pressure outside the eardrum. In infants and young children, the eustachian tube is short and almost level with the ear canal. By about age 7, however, the eustachian tube has become longer and steeper. This improves how well it works.
The eardrum and middle ear are important to normal hearing. Together, they pass sound from the outer to the inner ear. When sound from the outer ear hits a flexible eardrum, the eardrum vibrates. The small bones in the middle ear pick up these vibrations and pass them along to the inner ear. There, the vibrations become electrical signals, which are sent along nerve pathways to the brain.
Hukbalahap Rebellion, also called Huk Rebellion, (1946–54), Communist-led peasant uprising in central Luzon, Philippines. The name of the movement is a Tagalog acronym for Hukbo ng Bayan Laban sa Hapon, which means “People’s Anti-Japanese Army.” The Huks came close to victory in 1950 but were subsequently defeated by a combination of advanced U.S. weaponry supplied to the Philippine government and administrative reforms under the charismatic Philippine president Ramon Magsaysay.
The central Luzon plain is a rich agricultural area where a large peasant population worked as tenant farmers on vast estates. The visible contrast between the wealthy few and the poverty-stricken masses was responsible for periodic peasant revolts during the Spanish period of Philippine history. During the 1930s central Luzon became a focus for Communist and Socialist organizational activities.
World War II brought matters to a head. Unlike many other Southeast Asians, the Filipinos offered strong resistance against the Japanese. After the fall of Bataan to the Japanese (April 1942), organized guerrilla bands carried on the fight for the remainder of the occupation period. The Hukbalahap organization proved highly successful as a guerrilla group and killed many Japanese troops. The Huks regarded wealthy Filipinos who collaborated with the Japanese as fair targets for assassination, and by the end of the war they had seized most of the large estates in central Luzon. They established a regional government, collected taxes, and administered their own laws.
The returning U.S. Army was suspicious of the Huks because of their Communist leadership. Tension between the Huks and the Philippine government immediately arose over the issue of surrender of arms. The Huks had gathered an estimated 500,000 rifles and were reluctant to turn them over to a government they regarded as oligarchic.
Philippine independence from the United States was scheduled for July 4, 1946. An election was held in April for positions in the new government. The Hukbalahap participated, and the Huk leader Luis Taruc won a seat in Congress but—along with some other Huk candidates—was unseated by the victorious Liberal Party. The Huks then retreated to the jungle and began their rebellion. Immediately after independence, Philippine president Manuel Roxas announced his “mailed fist” policy toward the Huks. The morale of government troops was low, however, and their indiscriminate retaliations against villagers only strengthened Huk appeal. During the next four years, the Manila government steadily slipped in prestige while Huk strength increased. By 1950 the guerrillas were approaching Manila, and the Communist leadership decided the time was ripe for a seizure of power.
The Huks suffered a crucial setback when government agents raided their secret headquarters in Manila. The entire Huk political leadership was arrested in a single night. At the same time, Huk strength was dealt another blow when U.S. President Harry Truman, alarmed at the worldwide expansion of Communist power, authorized large shipments of military supplies to the Manila government.
Another factor in the Huk defeat was the rise to power of the popular Ramon Magsaysay. His election as president in 1953 signaled a swing of popular support back to the Manila government. In 1954 Taruc emerged from the jungle to surrender, and the Hukbalahap Rebellion, for all practical purposes, came to an end.
The Huk movement and its leadership persisted, however, operating primarily from a stronghold in Pampanga province on Luzon Island. With the failure of subsequent Philippine administrations to implement the long-promised land reforms, the Huks—although split into factions and, in some areas, merged with new insurgent groups—continued into the 1970s as an active antigovernment organization.
Hypoglycemia or low blood sugar refers to when blood levels of glucose drop to a below normal level.
Insulin deficiency is one of the main causes of diabetes. When glucose from digested food enters the bloodstream, insulin moves it out of the blood and into cells, where it is used as a source of energy. In type 1 diabetes, the cells responsible for producing insulin are damaged and no insulin is produced, leading to a rise in blood sugar levels. In type 2 diabetes, the blood sugar levels remain high because the amount of insulin produced is inadequate.
On the other hand, people with diabetes may also have an impaired glucagon response to blood sugar levels that have become low, meaning that the usual signal to the liver to break down glycogen and provide glucose is not made. People who take insulin or other antidiabetic medications to lower their blood sugar may therefore lack the ability to normalize their blood sugar once it has become low, putting them at an increased risk of severe hypoglycemia and coma.
Causes of hypoglycemia
The causes of hypoglycemia in diabetic patients include:
Hypoglycemia most commonly occurs as a side effect of antidiabetic medications. Insulin use is the most common cause of hypoglycemia, which is particularly likely to occur if an overdose of insulin has been taken or if the drug is administered without food intake.
Examples of oral medications that work by increasing insulin production include chlorpropamide, glimepiride, glipizide and glyburide. Other oral drugs that may cause a fall in blood sugar include repaglinide, nateglinide and sitagliptin. A combination of these diabetes medications may also cause hypoglycemia. However, some antidiabetic treatments such as metformin, acarbose, pioglitazone and miglitol do not cause hypoglycemia.
The timing of meals and eating is very important among people taking insulin and other blood sugar lowering medications. The amount of insulin taken needs to be balanced with the amount of food eaten if a normal or near-normal level of blood glucose is to be maintained. Hypoglycemia is likely if insulin is not taken as advised and if there is little or no carbohydrate intake. Most commonly, it occurs due to delayed or missed meals or snacks.
The risk of hypoglycemia is raised in individuals who have drunk excess amounts of alcohol, especially without food.
Vigorous exercise, especially without adequate food intake, depletes the glycogen levels and there may be a severe fall in blood sugar.
Reviewed by Sally Robertson, BSc
Robotic surgery is the latest development that uses robots and computer-aided apparatus to aid in normal surgical procedures. It is a new technology and mostly used in well-developed countries. With robotic surgery a single surgeon is able not only to perform multiple surgeries but also do his/her work from any part of the world (McConnell, Schneeberger & Michler, 2003). Robotic surgery is a type of procedure that is similar to laparoscopic surgery. It also can be performed through smaller surgical cuts than traditional open surgery. There are small precise movements that are possible with this type of surgery. It gives some advantages over standard endoscopic techniques. Sometimes robotic-assisted laparoscopy can allow a surgeon to perform a less-invasive procedure that was once only possible with more invasive open surgery. Once it is placed in the abdomen, a robotic arm is easier for the surgeon to use than the instruments in endoscopic surgery. The robot also refines the surgeon's movements, scaling them down for more precise control.
The robot assistance reduces some of the hand tremors and movements that might otherwise make the surgery less precise. Robotic instruments can access hard-to-reach areas of your body more easily through smaller incisions compared to traditional open and laparoscopic surgery. This procedure is done under general anesthesia where you are asleep and pain free. The surgeon sits at a computer station nearby and directs the movements of a robot. Small instruments are attached to the robot’s arms. Under the surgeon’s direction, the robot matches the doctor’s hand movements to perform the procedure using the tiny instruments. A thin tube with a camera attached to the end of it called an endoscope, allows the surgeon to view highly magnified three-dimensional images of your body on a monitor in real time.
I.What is Robotic Surgery?
A.Robotic surgery, computer-assisted surgery, and robotically assisted surgery are terms for technological developments that use robotic systems to aid in surgical procedures.
B.Robotically assisted surgery was developed both to overcome the limitations of minimally invasive surgery and to enhance the capabilities of surgeons performing open surgery.
II.History of Robotic Surgery
A.In 1985 a robot, the PUMA 560, was used to place a needle for a brain biopsy using CT guidance.
B.In 1988, the PROBOT, developed at Imperial College London, was used to perform prostatic surgery.
C.In 1992 the ROBODOC system was used to mill out precise fittings in the femur for hip replacement.
III.Associated Science with Robotic Surgery
A.Robots assist, or will soon assist, urologists with transurethral resection of the prostate, percutaneous renal access, laparoscopy, and brachytherapy.
B.The goals of the robots are to facilitate and improve the techniques and procedures of urological surgery.
IV.Political Point of View of Robotic Surgery
A.Communication and engagement
B.Regulation and governance
D.Looking for applications
E.The wider landscape
V.Legal Influences of Robotic Surgery
A.Legal aspects are likely to pose obstacles to the developments
B.Promotional materials overestimate the benefits of robotic surgery.
C.Should a surgeon be held liable for errors related to the robot?
While surgical robotics will have an important impact on surgical practice, it presents challenges as much in the realm of politics and law as in medicine and health care.
VI.Economic considerations for the use of robotic surgery
A.Brief economic definition/overview.
1.The use of scarce resources to do more.
B.The economics of robotic surgery.
1.The goal: “Keeping patients ‘whole’ and fully functional after surgery”
while reducing complications and costs.
VII.The costs of robotic surgery equipment
C.Higher cost per surgery.
D.Annual cost of equipment maintenance.
E.New and unproven technology.
VIII.The benefits of robotic surgery
A.Smaller incision = less physical damage from surgery.
B.Heart surgery via small incision vs. opening the chest.
C.Shorter recovery time = fewer days in a hospital bed and less cost.
D.Reduces risk of complications by 20 percent or more compared with traditional surgery.
E.Newer technology provides better operating precision and a 3D camera view to the surgeon, allowing better and more precise surgery for the patient.
IX.Economic conclusions of robotic surgery
X.Description of the Technology
A.Science that drove the technology
The science that drove the development of this new technology is the latest progress in artificial intelligence, the Internet and computer science (Lee, 2009).
B.Applications of the technology
Robotic surgery has gained wide acceptance in the medical community in recent years. This is because it has proved to be effective, efficient, reliable and above all accurate. This technology has found numerous applications in general surgery, neurosurgery, gynecology, radiosurgery, urology, pediatrics, cardiothoracic surgery, orthopedics and vascular surgery, among many others. In fact it is applicable in virtually all surgical fields.
XI.History of the Technology
A.A brief timeline
The development of robotic surgery can be traced back to 1997 in Cleveland, when it was used to reconnect fallopian tubes. This elicited a lot of interest from scientists the world over, who began working toward the full realization of this new technology. Development continued until late 2010, when a true robot performed its first operation at Ljubljana University Medical Center (McConnell, Schneeberger & Michler, 2003).
B.An analysis of social factors that drove the technology
The major social factors that drove the development of robotic surgery were the desire to minimize invasive surgery, the wish to enhance surgeons' capabilities, and the need to advance technologically. Using this technology, the surgeon moves the instruments with a computer or telemanipulator, and is thereby able to perform minimally invasive surgery. A telemanipulator is an instrument that allows the surgeon to remotely manipulate the hands of the robot to perform the surgery instead of using his physical hands (Ahmed et al, 2009).
XII.Technology and the environment
A.Technological innovations specifically aimed at reducing pollution - from cleaner manufacturing processes to flue gas scrubbers to catalytic converters - now figure prominently in mitigating some of the growing pains of an increasingly technological world.
1.Energy - All the world's economies continue to face big challenges in using energy - the lifeblood of the industrial age - while maintaining environmental quality. Although U.S. energy efficiency is much greater than ever before, growth in the economy has assured rising energy consumption.
2.Climate - When energy is used, there is an effect on the climate as well. Local generation by smaller plants can not only reduce transmission losses, but also improve air quality, since they can be fueled by hydrogen and natural gas - much cleaner than coal on a per-kilowatt-hour basis.
3.Waste - Naturally occurring microorganisms have long been used to break down human, agricultural, industrial, and municipal organic wastes. Now, genetically engineered organisms are being used to treat not only industrial effluent, but also wastewater, contaminated soil, and petroleum spills.
A.Morals and Ethics – Basic definitions
B.Moral: what is known to be right or wrong.
C.Ethical: doing right according to professional standards.
D.Robotic surgery is the use of robotics by a surgeon to minimize more invasive procedures.
XIII.Moral and Ethical Implications of robotic surgery
A.Risks and benefits associated with robotic surgery.
1.Statistics of successful surgeries vs. malpractice.
2.Long term success
B.Surgeons' training and experience with the technology associated with robotic surgery.
1.Amount of training a surgeon needs in order to be considered adequate in robotics.
C.Ethical and moral standpoints
1.Surgical code of ethics and scopes of practice
2.Pros and cons of robotics
In a society where we are relying more and more on technology, it is no surprise that we have opted for a better, less invasive way of performing surgeries. The use of robotics has become increasingly more popular. Is it, though, the right way to go?
Ahmed, K., Khan, M.S., Vats, A., Nagpal, K., Priest, O., Patel, V., Vecht, J.A. & Ashrafian, H. (2009). Current status of robotic assisted pelvic surgery and future developments. Int J Surg 7: 431–44.
Austin, D. & Macauley, M. (2012). "Cutting through environmental issues: Technology as a double-edged sword." Retrieved from http://www.brookings.edu/articles/2001/winter_environment_and.aspx
Kwoh, Y.S., Hou, J., Jonckheere, E.A. & Hayall, S.A. (2005). "Robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery." IEEE Trans Biomed Engng, February.
Lee, D.I. (Apr 2009). "Robotic prostatectomy: what we have learned and where we are going." Yonsei Med J 50 (2): 177–81.
McConnell, P.I., Schneeberger, E.W. & Michler, R.E. (2003). "History and development of robotic cardiac surgery." Problems in General Surgery 20 (2): 20–30.
NOAA-supported scientists working in the Hawaiian Archipelago are calling some of the deep coral reefs found in the region's so-called oceanic "twilight zone" the most extensive on record, with several large areas of 100 percent coral cover. They also found that the deep coral reefs studied have twice as many species that are unique to Hawaii than their shallow-water counterparts.
This extensive study of the Hawaiian deep coral reefs, known as mesophotic coral ecosystems, led to some incredible finds published recently in the scientific journal PeerJ. These mesophotic coral ecosystems, the deepest of the light-dependent coral reef communities found between 100 and 500 feet below the ocean's surface, lie well beyond the limits of conventional scuba diving and are among the most poorly explored marine habitats on Earth. Scientists used a combination of submersibles, remotely operated vehicles, and technical diving to study these difficult-to-reach environments.
Of the fish species documented on mesophotic reefs, 43 percent were unique to the Hawaiian Islands, which is more than double the 17 percent of unique species found on shallow Hawaiian reefs.
At the northern end of the archipelago, in the recently expanded Papahanaumokuakea Marine National Monument, nearly all of the species are unique to the region, the highest level recorded from any marine ecosystem on Earth. These findings could offer further insight into the monument's management.
In Maui's 'Au'au Channel, scientists discovered the largest uninterrupted mesophotic coral ecosystem ever recorded, extending more than three square miles at approximately 160 to 300 feet deep and including areas of 100 percent coral cover.
"The waters off Maui present the perfect environment for these mesophotic reefs to exist," said Richard Pyle, Bishop Museum scientist and lead author on the publication. "The area combines clear water, which allows light to reach the corals; good water flow enhancing food availability; shelter from major north and south swells, and a submerged terrace between the islands at the right depth."
Because of the challenges associated with working at such depths, mesophotic coral ecosystems are less understood and often not considered in coral reef management efforts. Overfishing, pollution, coastal development and climate change threaten coral reef ecosystems worldwide, and increased knowledge of mesophotic coral ecosystems will help characterize the health of coral reefs in general, particularly in the face of increasing stress.
"With coral reefs facing a myriad of threats, these findings are important for understanding, managing and protecting coral-reef habitat and the organisms that live on them," said Kimberly Puglise, an oceanographer with NOAA's National Centers for Coastal Ocean Science. "Some species studied can live in both shallow and mesophotic reefs, and the species could potentially replenish each other if one population is overexploited."
"There is still so much of our ocean that is unexplored," said W. Russell Callender, assistant NOAA administrator for the National Ocean Service "Working with academic partners and using innovative technology will enhance our scientific understanding of these important habitats and increase the resiliency of these valuable ecosystems."
This paper, led by Bishop Museum, represents a collaboration of 16 scientists from five institutions and two federal agencies. The research was supported by NOAA's National Centers for Coastal Ocean Science, Coral Reef Conservation Program, Pacific Islands Fisheries Science Center and Papahanaumokuakea Marine National Monument, as well as University of Hawaii's Hawaii Undersea Research Laboratory and the State of Hawaii.
Every Child Can Learn
In the 1940’s, Japanese violinist Shinichi Suzuki had the realization that all children everywhere learn to speak their native tongue at a young age. From his insight, Suzuki developed a method of teaching music based on the principles of learning a language as a small child. In this approach, which Suzuki called the “mother tongue method,” everything that goes into teaching a young child to speak is applied to teaching the child a musical instrument.
The Suzuki Method centers on the Suzuki Triangle (teacher, student and parent), requiring equal involvement from every side. Parents attend the lessons with the child and act as "home teachers" during the week. This begins the course of nurturing children into becoming good people and respectable human beings, no matter which direction their careers take them.
A child learns his native tongue starting from the moment of birth by means of constant repetition. Words are repeated hundreds, maybe thousands of times before they start to become a part of a child’s vocabulary. Repetition is also a source of development in Suzuki’s method. Like learning a word in one’s language, a child doesn’t just learn a song and stop playing it; the child repeats it and continues to expand his understanding of the piece.
Every child learns at his or her own rate. Praise and encouragement for the child’s efforts to learn are very important to his or her growth as a person and should not be thought of as small achievements. Dr. Suzuki’s method of teaching utilizes this interaction as a key part of every child’s development.
In creating the Suzuki Method, Dr. Suzuki's goal wasn't to create violin prodigies or virtuoso performers, but to build and nurture good people. He once proclaimed, "Man is a son of his environment." Dr. Suzuki's method is intended to influence the person as a whole and not just a child learning to play the violin. Through the work of the teacher and parent, along with the cooperation and willingness of the child, a strong foundation is laid for discipline, character development, and artistic expression.
The smallest unit of space on the hard disk that any software can access is the sector, which contains 512 bytes. It is possible to have an allocation system for the disk where each file is assigned as many individual sectors as it needs. For example, a 1 MB file would require approximately 2,048 individual sectors to store its data.
Under the FAT file system (and in fact, most file systems) individual sectors are not used. There are several performance reasons for this. It can get cumbersome to manage the disk when files are broken into 512-byte pieces. A 2 GB disk volume using 512-byte sectors managed individually would contain over 4 million individual sectors, and keeping track of this many pieces of information is time- and resource-consuming. Some operating systems do allocate space to files by the sector, but they require some advanced intelligence to do this properly. FAT was designed many years ago as a simple file system, and is not capable of managing individual sectors.
What FAT does instead is to group sectors into larger blocks that are called clusters, or allocation units. The cluster size is determined primarily by the size of the disk volume: generally speaking, larger volumes use larger cluster sizes. For hard disk volumes, each cluster ranges in size from 4 sectors (2,048 bytes) to 64 sectors (32,768 bytes). Floppy disks use much smaller clusters, and in some cases use a cluster size of just 1 sector. The sectors in a cluster are contiguous, so each cluster is a contiguous block of space on the disk.
Cluster sizing (and hence partition or volume size, since they are directly related) has an important impact on performance and disk utilization. The cluster size is determined when the disk volume is partitioned. Certain utilities (like Partition Magic) can alter the cluster size of an existing partition (within limits), but for the most part, once the partition size is selected it is fixed.
Every file must be allocated an integer number of clusters–a cluster is the smallest unit of disk space that can be allocated to a file, which is why clusters are often called allocation units. This means that if a volume uses clusters that contain 8,192 bytes, an 8,000 byte file uses one cluster (8,192 bytes on the disk) but a 9,000 byte file uses two clusters (16,384 bytes on the disk). This is why cluster size is so important in making sure you maximize the efficient use of the disk–larger cluster sizes result in more wasted space.
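To make the cluster arithmetic concrete, here is a minimal Python sketch (our own illustration; the function names are not from any FAT implementation) that computes how many clusters a file occupies and how much slack space is wasted:

    SECTOR_SIZE = 512  # bytes per sector, the smallest addressable unit

    def clusters_needed(file_size, cluster_size):
        # Every file is allocated a whole number of clusters, rounded up.
        return -(-file_size // cluster_size)  # ceiling division

    def slack_space(file_size, cluster_size):
        # Space allocated minus actual data = wasted ("slack") space.
        return clusters_needed(file_size, cluster_size) * cluster_size - file_size

    # The examples from the text, with an 8,192-byte (16-sector) cluster:
    for size in (8000, 9000):
        print(size, clusters_needed(size, 8192), slack_space(size, 8192))
    # 8000 bytes -> 1 cluster, 192 bytes of slack
    # 9000 bytes -> 2 clusters, 7,384 bytes of slack

Running it confirms the point above: the 9,000-byte file occupies 16,384 bytes on disk, which is why larger cluster sizes waste more space on small files.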
Also known as Psittacine Circovirus Disease, PBFD, which is incurable, has been identified in over 60 species of wild and captive parrots. It has been much in the news lately, and the questions I’ve received indicate that some of the coverage has been confusing to bird owners. Today I’d like to summarize what we know, and what remains to be done in the battle against PBFD.
PBFD Natural History
The virus that causes PBFD was first described in 1987, when it was discovered in a captive group of Orange-bellied Parrots, a highly endangered species. Further study revealed that the virus occurred naturally in Australia, and likely was endemic there (found nowhere else). The disease is now established worldwide, apparently having been spread by the legal and illegal trade in parrots.
The PBFD virus is an extremely hardy organism, and likely survives for many years in nest hollows and roosting/feeding areas. To date, only one disinfectant, Virkon S, has proven able to kill it. The virus has been found in feather dust, feces and the crop lining of infected birds. Transmission seems to occur in several ways – direct contact with sick birds, inhalation of the virus from dust and feces and via food passed to chicks by parents.
PBFD may incubate within a parrot for 3 weeks to 12 months, during which time symptoms will not be visible. Birds incubating the virus will, however, shed it in the feces and feather dust, and thus infect others. In rare cases, adult parrots may survive PBFD. Unfortunately, they continue to shed the virus even after full recovery.
The Various Forms of PBFD
Three forms of PBFD have been identified. Peracute PBFD affects newly-hatched chicks and is usually fatal within 2-3 weeks. As feather abnormalities are not visible, this form is usually diagnosed only upon necropsy.
Acute PBFD is seen among nestlings that are developing their first feathers, and usually causes death within weeks. Infected birds become lethargic, and may vomit and exhibit abnormal feather growth (please see below).
Adult parrots afflicted with Chronic PBFD exhibit feather abnormalities such as the loss of powder down, curled feathers, retained sheaths and color changes. The beak, especially in cockatoos, may flake and crack, and nails may curl as they grow. Diarrhea, lethargy and vomiting are often present.
PBFD is most accurately diagnosed via a blood test.
Immune System Effects
In addition to causing feather, nail and beak destruction, PBFD depresses the immune system. Death often results from secondary infections (i.e. septicemia and pneumonia) caused by opportunistic bacteria. Cracks in the beak, and skin wounds caused by abnormal feather growth, likely worsen the situation by providing an easy route for bacterial infection.
Managing PBFD in the Wild
PBFD is considered to be a serious threat to the survival of several rare Australian species, including the Swift, Orange-Bellied and Norfolk Island Green Parrots. Australia’s Environmental Protection and Biodiversity Act provides for a PBFD management program (please see article below).
Research into the development of a vaccine is ongoing, but success is not expected in the near future.
Managing PBFD in Captivity
While there is as yet no cure for PBFD, there are some steps that can be taken to increase the quality of life for infected pets. As is true for all creatures, a proper environment and diet will strengthen the immune system and possibly reduce the severity of the disease or its symptoms. Exposure to sunlight or artificial UVB, a natural photo-period (day/night cycle) and an appropriate diet have been found useful (please see article below).
If you maintain a parrot collection, newly-received individuals should be kept in isolation until they have been checked for PBFD. Due to the severity of the symptoms, one may need to consider euthanasia as the disease progresses.
Arizona Exotic Animal Hospital Information, with tips on caring for infected Parrots
PBFD-infected cockatoo image referenced from Wikipedia and originally posted by S B and Snowmanradio
Volcanoes may have greater influence on climate than previously thought
by Jonathan DuHamel on Jul. 12, 2011, under Climate change, Geology
A newly published French study of last year’s eruption of the Eyjafjallajökull Volcano in Iceland suggests that models have underestimated the aerosol formation and hence cooling effect of volcanic eruptions “by 7 to 8 orders of magnitude.”
The Abstract reads:
Volcanic eruptions caused major weather and climatic changes on timescales ranging from hours to centuries in the past. Volcanic particles are injected in the atmosphere both as primary particles rapidly deposited due to their large sizes on time scales of minutes to a few weeks in the troposphere, and secondary particles mainly derived from the oxidation of sulfur dioxide. These particles are responsible for the atmospheric cooling observed at both regional and global scales following large volcanic eruptions. However, large condensational sinks due to preexisting particles within the plume, and unknown nucleation mechanisms under these circumstances make the assumption of new secondary particle formation still uncertain because the phenomenon has never been observed in a volcanic plume. In this work, we report the first observation of nucleation and new secondary particle formation events in a volcanic plume. These measurements were performed at the puy de Dôme atmospheric research station in central France during the Eyjafjallajokull volcano eruption in Spring 2010. We show that the nucleation is indeed linked to exceptionally high concentrations of sulfuric acid and present an unusual high particle formation rate. In addition we demonstrate that the binary H2SO4 – H2O nucleation scheme, as it is usually considered in modeling studies, underestimates by 7 to 8 orders of magnitude the observed particle formation rate and, therefore, should not be applied in tropospheric conditions. These results may help to revisit all past simulations of the impact of volcanic eruptions on climate.
Besides primary ash, the researchers say that sulfur dioxide, which oxidizes to sulfuric acid, can act as cloud-forming nuclei that can change the precipitation over a region. The clouds would also partially reflect solar irradiance and therefore contribute to cooling.
UPDATE: New NASA paper says volcanoes primarily responsible for increased SO2:
Recently, the trend, based on ground-based lidar measurements, has been tentatively attributed to an increase of SO2 entering the stratosphere associated with coal burning in Southeast Asia. However, we demonstrate with these satellite measurements that the observed trend is mainly driven by a series of moderate but increasingly intense volcanic eruptions primarily at tropical latitudes.
John Dalton proposed the first atomic theory, J.J. Thomson discovered the electron, Ernest Rutherford discovered the nucleus and Niels Bohr is known for the Bohr model in which electrons are in orbits outside the nucleus.
Here's some more detail:
In 1803 John Dalton proposed the first atomic theory, based on the idea from the early Greek philosopher Democritus that matter is made of tiny particles. Dalton described atoms as tiny, indivisible and indestructible particles.
In 1897 J.J. Thomson, while experimenting with a cathode ray tube, identified the negatively charged electron. From this he proposed what later became known as the Plum Pudding model in which the electrons are scattered throughout a sphere of positive charge like the raisins in a plum pudding.
In 1911 Ernest Rutherford conducted the famous Gold Foil Experiment in which he directed positively charged alpha particles at a sheet of gold foil and observed their paths using a detecting screen. He found that nearly all of the alpha particles passed through with their paths unaltered, while a few were deflected to one side or straight back. From this he concluded that the positive charge, which repelled the alpha particles, was concentrated in very small areas in the centers of atoms. He revised Thomson's atomic model, producing the first nuclear model of the atom.
In 1913 Niels Bohr observed the light given off by hydrogen atoms when electrified. He observed that this light, when passed through a prism, produced only a few lines of specific wavelength rather than a rainbow-like continuous spectrum. His explanation was that the light was produced by electrons absorbing energy while in the ground state, then moving farther from the nucleus, then giving off the energy in the form of light when they return to the ground state. He further explained that electrons' energy is "quantized", meaning that they can only be certain specific distances from the nucleus. From this he developed the Bohr Model of the atom, in which electrons have specific paths he called orbits.
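A quick numerical sketch in Python (our own illustration, using standard physical constants) shows how Bohr's quantized levels predict hydrogen's discrete spectral lines:

    # Bohr model: electron energy in hydrogen is E_n = -13.6 eV / n^2.
    # A photon's wavelength comes from the energy gap between two levels.
    H = 4.135667696e-15   # Planck's constant, in eV*s
    C = 2.99792458e8      # speed of light, in m/s

    def energy(n):
        return -13.6 / n**2  # energy of level n, in eV

    def wavelength_nm(n_upper, n_lower):
        delta_e = energy(n_upper) - energy(n_lower)  # energy released, eV
        return H * C / delta_e * 1e9                 # wavelength in nm

    # The Balmer series (drops to n = 2) gives hydrogen's visible lines:
    for n in (3, 4, 5):
        print(f"{n} -> 2: {wavelength_nm(n, 2):.1f} nm")
    # roughly 656.4, 486.2 and 434.1 nm - a few discrete lines, not a rainbow

Because only whole-number levels are allowed, only these specific wavelengths can appear, which is exactly the line spectrum Bohr set out to explain.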
The more recent Quantum Mechanical model still accounts for the observation that electrons have energy that is quantized, but describes regions of probability of finding an electron rather than specific orbits. These probability functions are called wave equations.
Unlike the old SAT, the new SAT will assess conventions of punctuation. You will be asked to observe standard punctuation practices, including within-sentence punctuation. Within-sentence punctuation includes colons, semicolons, dashes, and parentheses. This month’s focus is on dashes and parentheses.
The rule is:
Dashes and parentheses both separate information that is less relevant than an aside (a clause that is related, but not essential, to the sentence). The separated information will be an interesting but unnecessary explanation or an interesting but unnecessary detail.
Here is an example of an aside:
These dogs, all of which are cuddly and cute, have a ferocious bark.
The phrase “all of which are cuddly and cute” is an aside; it is related to the sentence, but it is not essential to the sentence.
Here are examples of appropriate uses of dashes and parentheses:
Abraham Lincoln – the tallest president in American history – supported the ratification of the Thirteenth Amendment.
Abraham Lincoln (the tallest president in American history) supported the ratification of the Thirteenth Amendment.
The phrase “the tallest president in American history” should be separated by either dashes or parentheses; it is an interesting piece of information about Abraham Lincoln, but it is an unnecessary detail in the context of the complete sentence.
The use of either dashes or parentheses depends on your personal preference! There is no rule that states dashes should be used in certain contexts and parentheses in other contexts.
Now try this:
Where would you place the dashes or parentheses in the following sentence?
The Great Gatsby which I’ve read twice is a classic of American literature.
Proper and frequent stretching is responsible for multiple body adaptations, including an increase in the spinal stretch reflex, muscle mass, flexibility and control. Resistive and free active stretches that isolate a single muscle instead of working a full range of motion are helpful in bolstering flexibility to overcome muscle weaknesses, according to Dr. Donald DeFabio. This type of stretching, however, is not ideal for everyone or all activities.
How It's Done
Active stretching works by activating the reciprocal inhibition reflex. An actively contracting muscle is accommodated through an adequate relaxation of its opposite muscle -- the antagonist. Since the antagonist muscle usually contracts in resistance to a stretch through the action of muscle spindles and the nervous system, active stretching seeks to cushion the antagonist muscle from any forces, allowing it to relax.
Stretch Reflex Initiation
If you've tried doing splits in your bedroom, you must swear ballet dancing and gymnastics are witchcraft, and if not you must wonder how much pain those brave enough to do it for a living must go through. However, that was just the stretch reflex at the behest of the nervous system, which is not convinced you have the stability and strength to do splits. According to Michael Alter, a gymnastics coach and judge, a stretch reflex is a protective muscle contraction that regulates the length of a skeletal muscle. The nerve activity rises when a muscle spindle is stretched, increasing the alpha motor neuron, forcing the muscle to contract in resistance to stretching. This raises the muscle tension that renders connective tissues more difficult to stretch.
Counterproductive In Warm-Ups
Stretching is a critical part of any routine. It should follow a general warm-up and precede sport-specific activities, according to a study published in the April 2009 issue of the BMC Musculoskeletal Disorders Journal. It should raise the body’s temperature, loosen stiff muscles, bolster coordination, awareness, muscle contractibility and elasticity, and the efficiency of the cardiovascular and respiratory systems, while also making for a better performance. Improper warm-ups heighten the risk of injury, and active stretching is just one way of how not to warm up. Alter says that active stretching is likely to tire out the stretched muscles, reducing their ability to perform in subsequent physical activities.
Ineffective With Injuries
Active stretching may not be very effective in the presence of some injuries and dysfunctions like serious inflammations, fractures and sprains. If you suffer from any dysfunction, you are best off with external assistance stretches, which produce sufficiently longer stretches necessary for the tissues and the entire body to adapt to a certain range of stretch. Without help and care, active stretching is likely to exacerbate pre-existing physical dysfunctions through muscle damage, soreness, further injury and even fatigue.
For effectiveness, active stretching requires you to adopt the right stretching position and hold a stretch for a certain time, which allows the muscles to adapt to the range. There is really no time standard since each stretch is supposed to vary with every individual’s specific flexibility. This means that for every stretch, you are forced to strike a balance between doing too little and wasting your time, or doing too much, at the risk of injury.
As we all know, bird poop is a powerful substance. Splotches of it are easily visible on pretty much any surface. Once it has dried, it is hard to clean off without a good deal of scrubbing. If left to accumulate, it can corrode a surface or become a health hazard. On at least one occasion, bird poop arrived in space when several splotches of it were stuck to the right wing of the space shuttle Discovery. In places where birds congregate, guano can accumulate in very large quantities.
These qualities make bird poop very useful for anyone seeking out bird colonies in remote areas. And, in fact, some scientists have started looking for penguin poop from space. While orbiting satellites are unable to track individual emperor penguins, they can detect the poop-stained ice that marks a breeding site.
"We were mapping one of our bases on an ice shelf, and we knew there was a penguin colony close to there," said Peter Fretwell, a geographer at the British Antarctic Survey.By studying satellite imagery, the team found 10 unknown colonies and 6 colonies that changed locations, while showing that another 6 that had disappeared. The research, while perhaps ignoble, provides needed information to track how climate change and other threats are affecting emperor penguins.
"I was using a satellite image as a backdrop for the map and it happened to have a reddish-brown stain on one of the creeks that was a possible location for the emperor penguin colony."
"It was quite a lucky find because just a few months beforehand, we had made a mosaic of these satellite images of the whole of Antarctica, so we could go round and track all the colonies." |
Atmospheric greenhouse gases have continued their steady increase in the new century. Logically, one would expect that global mean surface temperature (GMST) would also continue to increase in the same fashion as experienced in the latter decades of the 20th century. However, between 1998 and 2013 GMST actually plateaued with much smaller increases than the average over the last 60 years and labeled the “global warming hiatus.” The fact that this slowdown in GMST increase was not predicted by most climate models has led some to question the steady increase in heat predicted under increased greenhouse gas conditions.
A new paper in Earth’s Future documents the work of many researchers showing that GMST, while an important climate indicator, is a measure of the Earth’s surface warming, not a measure of total accumulated heat energy in the Earth’s system. The paper notes that the amount of missing heat that could cause the slowdown in GMST increase would be but a small fraction of the total heat entering the ocean. So the slowdown in GMST increase is most likely a redistribution of excess heat into and within the ocean. Thus, the overall Earth continued to warm with the ocean absorbing the large majority of excess heat. Present and future research activities are interested in where and under what conditions the ocean experienced increased heat uptake. An important component in achieving this is support of the subsurface ocean observing system – mainly by Argo profiling floats, both in its present form and with more complete global coverage – and subsurface remote sensing. Improvements in modeling ocean heat uptake are possible and already underway.
Recall that the slope of a tangent line at a point is the same as the derivative of the function at that point. So, to find an equation of the tangent line at a certain point, calculate the derivative of the function at that point to find its slope. Then, using the point at which you found the tangent line, you can use point-slope form to find the y-intercept. Remember that point-slope form is y-y1=m(x-x1), where (x1,y1) is the point where you are finding the tangent line, and m is the slope calculated by using the derivative.
So one problem that you might see on your homework is; find the equation of a line tangent to a curve at a certain point. This is something you can do with the derivative, because the slope of a tangent line at a given point comes from the derivative. But just remember that when you want to find the equation of a line, you need two things. You need a point that the line passes through, and the slope.
So let's take a look at this example of a problem here. It says write an equation of the line tangent to the graph of this big polynomial function, f(x) equals 2x³ minus 5x² plus 3x minus 5. The point of tangency is going to be at x equals 1. So first I want to find both the x and the y coordinates of the point of tangency. This is the x coordinate; the y coordinate is going to be f(1). So that's 2 times 1³ minus 5 times 1² plus 3 times 1 minus 5. So I have 2 minus 5, which is -3, plus 3, which is 0, minus 5, which is -5. That means that the point of tangency is going to be (1, -5).
So all I have to do now is find the slope. The slope comes from the derivative.
The derivative of this thing is going to be, the derivative with respect to x of all of this 2x³ minus 5x² plus 3x minus 5. So first, I want to break this part using the sum rule. So I have the derivative with respect to x of 2x³ plus the derivative with respect to x, of -5x², plus the derivative with respect to x of 3x minus 5. Three pieces.
Then I'm going to use the constant multiple rule to pull the constants out. So you get 2 times the derivative with respect to x of x³, plus -5, this constant, times the derivative of x². I don't really need to pull anything out here. This is just a linear function. I know the derivative of linear function is going to be the slope of that function 3.
Just going through, let me continue this up here. So I've got 2 times the derivative of x³. That's going to be 3x². 2 times 3x² plus -5 times the derivative of x², 2x. So -5 times 2x plus 3. So this is f'(x).
Let me just simplify that. 6x² minus 10x plus 3. So what you need is the slope of the tangent line. You need the slope specifically at x equals 1. So I'm going to need to calculate f'(1). That's going to be 6 times 1 minus 10 times 1 plus 3. So 6 minus 10, -4 plus 3, -1. That's your slope.
Now finally use the point-slope formula to find the equation of the line. So you have y minus, don't forget the coordinates of our point of tangency, (1, -5). So y minus -5, that is y plus 5, equals the slope -1 times x minus 1. This is actually a perfectly good answer in point-slope form. But if your teacher wants slope-intercept form, you'd have -x plus 1, and then subtracting 5 from that gives -4. So y equals -x minus 4. That's the equation of the line tangent to our polynomial at x equals 1.
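For readers who want to double-check the lesson's arithmetic, here is a short Python sketch (our own, not part of the original transcript) that recomputes the tangent line:

    def f(x):
        return 2*x**3 - 5*x**2 + 3*x - 5

    def f_prime(x):
        return 6*x**2 - 10*x + 3  # the derivative worked out in the lesson

    a = 1
    y0 = f(a)        # -5, the y-coordinate of the point of tangency
    m = f_prime(a)   # -1, the slope of the tangent line

    # Point-slope form y - y0 = m(x - a), rearranged to slope-intercept form:
    b = m * (-a) + y0
    print(f"y = {m}x {'+' if b >= 0 else '-'} {abs(b)}")  # prints: y = -1x - 4

Both the point (1, -5) and the slope -1 match the values found above, giving y = -x - 4.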
Read Across America is a reading awareness program created to encourage children to celebrate reading on March 2nd in memory of Dr. Seuss’s Birthday.
Create a Classroom Library
2. Talk about the stories with your child.
3. Don't ask questions. Make it a fun day.
1. Plan with the librarian for a special activity.
2. Have a Read Aloud time.
3. Give a small prize, like a sticker or pencil, after reading.
Make Green Eggs and Ham
1. Read Green Eggs and Ham.
2. Plan with the Cafeteria to have Green Eggs and Ham.
3. Bring or make Green Eggs and Ham in the classroom if possible.
FREE Activities with Dr. Seuss Books
Download activities: http://www.weareteachers.com/blogs/post/2016/02/24/7-dr.-seuss-books-and-activities-for-read-across-america-day
2009 Mock DWITE by A.J.: Problem 3
Being the avid bear hunter he is, Brian is hot on the trail of yet another bear. While chasing after the bear (well, mainly the bear's excretions…), Brian remembered that almost all the bears he had chased follow a set pattern:
- Their excretions are found on the border of their respective 'territories' (which conveniently tend to be perfectly circular in shape)
- Their caves are usually located at the centers of their respective territories.
Assuming these facts, help Brian locate the bear's cave, given the locations of three droppings known to belong to the bear.
Input
The input will contain five cases, each of which spans three lines. Each case describes a different bear. Within an individual test case, each line will contain the x-coordinate and the y-coordinate (integers of absolute value not greater than 1000) of one of the bear's droppings, in that order, separated by a space.
Output
For each case given in input, in the order given, print one line containing two space-separated real numbers, the coordinates of the bear's cave (first the x-coordinate, then the y-coordinate), rounded to exactly two digits after the decimal place. The answer is guaranteed to exist and be unique.
Sample Input (only two cases shown)
0 0
0 1
1 0
1 2
2 4
4 10
Sample Output
0.50 0.50
-25.50 16.50
Note on rounding
There are two different conventions regarding how numbers should be rounded when the first non-significant digit is a five and all following digits are zeroes. In this particular problem, for example, what would you do if one of the coordinates in the output were 2.345? Would you round it up to 2.35 or down to 2.34? Luckily, this situation never arises in the official test data, so it is safe to use your language's built-in rounding functionality, if it exists.
Point Value: 7
Time Limit: 5.00s
Memory Limit: 256M
Added: Sep 23, 2009
C++03, PAS, C, HASK, ASM, RUBY, PYTH2, JAVA, PHP, SCM, CAML, PERL, C#, C++11, PYTH3
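A sketch of one possible solution in Python (one of the allowed languages): the cave is the circumcenter of the triangle formed by the three droppings, which can be computed directly from the standard circumcenter formula instead of intersecting the perpendicular-bisector equations by hand:

    import sys

    nums = list(map(float, sys.stdin.read().split()))
    for i in range(0, len(nums), 6):  # each case is three (x, y) points
        ax, ay, bx, by, cx, cy = nums[i:i+6]
        # The denominator is nonzero because a unique answer is guaranteed
        # (the three points are never collinear).
        d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
        ux = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay)
              + (cx**2 + cy**2)*(ay - by)) / d
        uy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx)
              + (cx**2 + cy**2)*(bx - ax)) / d
        print("%.2f %.2f" % (ux, uy))

On the sample above this prints 0.50 0.50 and -25.50 16.50, matching the expected output.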
The Tropaeolaceae (nasturtium) family contains just one genus, Tropaeolum, of which a single (introduced) species is found in the US, mostly along the California coast. Plants are native to Central and South America; they are annual or perennial herbs with fleshy, prostrate stems and simple leaves. Flowers are large and colorful, usually bilaterally symmetric, formed of five sepals, two projecting backwards into a spur, and five petals, narrow at the base. The upper two petals are different in shape from the lower three. At the center are eight stamens, unequal in size.
How do animals use their bodies? Could an elephant drink water without its trunk? Could a red panda climb trees without its claws? Could a penguin swim without its flippers? Do these animals really need special parts on their bodies to do things? The answer is yes. Animals (and humans!) use body parts that are adapted to help them survive in the world.
In the April 2012 Alive magazine, the Kids Alive section introduced you to many of those special body parts. The activities encouraged you to use your own body. For example, you had to use your opposable thumbs and fingers to grip a pencil and complete the Anatomy Word Search. Your brain helped you think of ways animals use certain body parts in our Function Fill-In. And for the Guess Who? activity, you needed your eyes to see patterns and color. Just like an elephant needs its trunk to feed itself, you need many of your body parts to help you in your daily life.
Animals have special parts that help them do tasks like eating, digging and protecting their young. Print your word search then use your special opposable thumbs and fingers to circle these words in the word search. Words can be found vertically, horizontally, diagonally and backward within the grid.
When you're finished, view the answers to see if you got them all correct!
- A polar bear uses its front paws and back legs to swim, walk, run, climb, eat and hunt.
- A kangaroo uses its pouch to store babies, called joeys.
- A rattlesnake uses its fangs to protect itself and inject venom into its prey.
- A bonobo uses its hands and feet to climb, walk, run, eat, groom, play and fight.
- Chacoan Horned Frog – Their eyes help them see predators. Chacoan horned frogs even rest with their eyes open!
- Dall sheep – Males, called rams, use their horns to fight during mating season.
- Flamingo – These pink contour feathers protect a flamingo’s skin and help it fly.
- Rhino – Rhinos use their ears like radar because their eyesight is poor. Their funnel-shaped ears draw sound in from far away and can rotate in many directions.
- Marabou stork – Marabou storks use their large, sharp-edged beaks to tear meat. These scavengers prevent the spread of disease on the African savanna by eating animal remains.
- Grizzly bear – Grizzlies use their long claws to dig their winter dens and forage for food; they break apart logs to find insects and pull up roots to eat.
- Jaguar – Jaguars use their spots as camouflage from predators. The spots, which are actually broken rosettes (rose-shaped circles), help the jaguar hide among grasses and bushes in its Central and South American habitats.
- Peahen – The peahen’s and peacock’s feet are adapted for cold weather. Peafowl feet are mostly tendons and bone and don’t have many blood vessels or nerves to feel the cold. Birds also have what’s called a counter-current heat exchange; heat from the warm blood leaving the body is transferred to the cold blood leaving the feet. This heat exchange keeps the feet from getting too cold.
- Grevy’s zebra – The narrow stripes of Grevy’s zebras help them hide from predators; the stripes make them hard to see in bushes and grasses. When a herd of Grevy’s zebras is running, the stripes make the animals blend together so predators can’t pick out just one to chase.
- Snake-necked turtle – This turtle’s webbed feet help it swim in the water. These wide feet also make the turtle more stable while walking on land. Snake-necked turtles have claws on their front webbed feet to rip up larger prey.
Click here to see bigger pictures of the animals showing their full heads or bodies.
Grab a few friends and get wild with this body-parts boogie. Have each person pick a different animal to be. You can draw animal names out of a hat or choose them yourself. Then get in a circle and do the hokey pokey with animal body parts! Start with the youngest person calling out the first body part. Then go clockwise around the circle until everyone gets a turn. If your animal has the body part called out, put it in and shake it all about! That means: If your “body part” is an arm, you reach your arm into the circle. If it’s your head, you tilt your head into the circle. If your animal does not have the body part that is called, boogie in place until the next part is called out. Make your dance party even more creature-crazy with costumes or masks.
Activities by Liz Mauritz
Chemguide: Support for CIE A level Chemistry
Learning outcome 9.2(g)
This statement asks you to deduce the bonding in an oxide or chloride from its chemical or physical properties. It doesn't say that these are restricted to the oxides or chlorides of Period 3 elements, and so it is safer not to assume that.
Before you go on, you should find and read the statement in your copy of the syllabus.
This is no different from deducing the bonding in any other sort of compound from its physical properties.
If a substance is a high melting point solid, then it will be a giant structure of some kind - either giant ionic or giant covalent (CIE use the term "giant molecular"). To decide whether it is giant ionic (such as NaCl or MgO) or giant covalent (such as SiO2) needs more information.
For example: If you have a high melting point compound which doesn't undergo electrolysis when it is molten (assuming you can melt it!), then it can't contain ions, and so must be giant covalent.
If a substance is a gas, or liquid, or low melting point solid, then it will be a simple molecule with covalent bonding.
Effect of water on chlorides
If a chloride just dissolves in water to give a simple solution with a pH close to 7, then it is ionic. Most, but not all, ionic chlorides are soluble in water.
Note: But care! As the chloride dissolves in water, the metal ions become hydrated. Water molecules attach to them. As the charge on the positive ion increases, these ions become more and more acidic. If you are following this section of the syllabus through in order, you will already have seen this with regards to the hydrated magnesium and aluminium ions.
If a chloride reacts with water to produce an acidic solution, and steamy fumes of hydrogen chloride gas, then it is covalent. Most, but not all covalent chlorides behave in this way. The only simple one I can think of off-hand which doesn't is CCl4.
Effect of water on oxides
If an oxide reacts with water to give an acidic solution, then it is covalent. (There are exceptions if you go beyond Period 3. For example, there are a few neutral covalent oxides like CO and H2O.)
If an oxide reacts with water to give an alkaline solution, then it is ionic. However, not all ionic oxides will react with water.
Effect of acids or bases on oxides
Ionic oxides tend to be basic; covalent oxides tend to be acidic.
If an oxide reacts with a base to form a salt, then obviously it is an acidic oxide, and must be covalent.
If an oxide reacts with an acid to form a salt, then it is a basic oxide, and must be ionic.
Obviously, if an oxide reacts with both an acid and a base, then it must be amphoteric - such as aluminium oxide. Amphoteric oxides will tend to be ionic, but with a fair amount of covalent character.
Whether you have a chloride or an oxide, if the molten compound undergoes electrolysis, then it must contain ions. If it doesn't undergo electrolysis, then it is covalent.
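If it helps to see all of this in one place, here is a rough sketch of the decision logic as a little Python function. This is only an illustration of the rules above (the wording of the categories is mine, not CIE's), and certainly not something you would be asked to write in an exam:

# A rough sketch of the deduction rules above (illustrative only).
def deduce_bonding(high_melting_solid, conducts_when_molten):
    if high_melting_solid:
        # Giant structure: electrolysis of the melt distinguishes the two.
        if conducts_when_molten:
            return "giant ionic (e.g. NaCl, MgO)"
        return "giant covalent / giant molecular (e.g. SiO2)"
    # Gases, liquids and low melting point solids are simple molecules.
    return "simple molecular, covalent bonding"

print(deduce_bonding(True, False))   # giant covalent / giant molecular (e.g. SiO2)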
A final comment
A lot of this isn't very clear-cut, and it isn't difficult to find exceptions. It is important that you look to see what sort of questions the examiners are asking by looking at recent questions papers and mark schemes. You should be doing that for every topic you study, anyway, but it is particularly essential in this messy periodicity section.
© Jim Clark 2010 (last modified May 2014) |
Trig In Computer Graphics
Trig is used in many different ways in computer programs.
Primarily to: create, move, and position objects around the plane.
Trig is used to create the objects it can manipulate later through simple shape building.
The program needs trig to build the shapes so that it can work out the correct proportions for their sizes.
Cyclical processes on computers are used to create swinging and circular motion on the screen, with sinusoidal functions entered into the program.
Sinusoidal functions are great at representing repeating motion.
On a standard sine wave one period is 2 pi radians in length or 360 degrees.
Let's start off with the simplest example: triangles.
Triangles are everywhere in games.
Triangles are generally used to measure the distance between two objects on the screen.
Ex: Spaceship Game (shown below)
Let's say you want to find the distance between the two spaceships, i.e., the hypotenuse.
You have the coordinates of these two ships on the grid and then can use those to find sides a and b of the triangle.
From there it is a simple Pythagorean equation.
In first person games they use this same idea to measure objective distances.
In a first person game you'll usually have an objective tracker that tells you how far away you are from the quest objective. To measure the distance, the game measures how far away the objective is in the x and y directions, then uses the Pythagorean Theorem.
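As a rough Python sketch (the coordinates here are made-up examples), the distance calculation might look like this:

import math

# Distance between two on-screen objects using the Pythagorean Theorem:
# sides a and b come from the differences in x and y coordinates.
def distance(x1, y1, x2, y2):
    a = x2 - x1
    b = y2 - y1
    return math.sqrt(a * a + b * b)   # the hypotenuse

# Example: two spaceships at made-up grid positions.
print(distance(3, 4, 9, 12))   # 10.0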
Collision Detection On Screen
Collision detection on screen uses trig as well with the use of circles.
When designing a video game like the spaceship game, without a physics engine the spaceships would just fly through each other.
Creating a collision system allows the ships to bounce off one another and their surroundings.
Cyclical Processes Example
Ex: One pendulum swing = one period, or one 360-degree rotation around a circle.
The distance the pendulum moves during its period can be altered by changing the amplitude of the function; the speed at which the pendulum swings can be altered by changing the period of the function.
In a pendulum computer program the distance the pendulum takes to swing from one side to dead center is the amplitude.
The period of a pendulum is the time it takes to make one full back and forth swing.
The equation for this program: pendulum angle= range * sin(time*speed)
The speed value is not mph; it's full swings per second.
A number less than 1 slows the pendulum down while a number higher than 1 speeds it up.
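A minimal Python sketch of such a pendulum program, with assumed values for range and speed, might look like this. The 2*pi factor is my addition so that "speed" literally means full swings per second; the equation above leaves that scaling implicit:

import math

RANGE = 45.0   # assumed amplitude: maximum swing in degrees from center
SPEED = 0.5    # assumed speed: full swings per second (< 1 slows it down)

def pendulum_angle(t):
    # pendulum angle = range * sin(time * speed), scaled so that
    # one full swing corresponds to 2*pi radians.
    return RANGE * math.sin(2 * math.pi * SPEED * t)

for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(t, round(pendulum_angle(t), 1))   # swings between +45 and -45 degrees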
Trig In Computer Programming
Set up circles
Set up circles around each object you want to be able to collide with other objects.
Pythagorean Theorem is needed to find the radius of each circle.
The system then determines if the distance between the two objects is less than the sum of the radii of the two circles.
If the distance is less, then it's a hit.
While trig can be used to calculate far more complex shapes to determine collision areas, circles are generally close enough for the task at hand.
They are also much easier to calculate.
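Sketched in Python (the radii and positions are made-up examples), the circle test might look like this:

import math

# Circle-vs-circle collision: it's a hit when the distance between the
# two centers is less than or equal to the sum of the two radii.
def circles_collide(x1, y1, r1, x2, y2, r2):
    dist = math.hypot(x2 - x1, y2 - y1)   # Pythagorean distance
    return dist <= r1 + r2

print(circles_collide(0, 0, 5, 7, 0, 3))    # True  (7 <= 8)
print(circles_collide(0, 0, 5, 10, 0, 3))   # False (10 > 8)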
Pendulum Wave Video |
RGBa is a way to declare color in CSS3. It includes the red, green, and blue channels of the color model, along with an alpha channel for transparency, which enables alpha blending and compositing. The alpha channel sets the opacity (or transparency) of the pixel: a value of 1 makes the channel fully opaque, while a value of 0 makes it fully transparent. According to the W3C specification for PNG Data Representation, the alpha channel is really the degree of opacity of the pixel, but most people refer to alpha as providing transparency information rather than opacity information.
Simple RGBa example
An example of an RGBa style property would look something like this:
background: rgba(121, 7, 242, 0.25);
}
The CSS snippet above displays a shade of purple, as shown in Figure A (Chrome 20.0 is the browser used for these figures):
Fun with RGBa
The real fun with manipulation of the RGBa color model starts when you layer several defined properties, creating some interesting effects on your web pages. The example below demonstrates three boxes with individually defined RGBa properties, and their overlapping transparencies combine to create an interesting color combination.
The following properties are defined in the styles for the three boxes:
background: rgba(9, 53, 135, .5);
background: rgba(17, 191, 83, .25);
background: rgba(227, 188, 32, .15);
}
The three boxes, each overlapping the others with a different RGBa property, are displayed in Figure C:
Setting a hover RGBa color change
Taking it a step further, how about we set a particular element to change the background color upon hover? In this example, we will set any list item with a defined style property as shown in the CSS snippet below:
background: rgba(227, 188, 32, .15);
background: rgba(171, 83, 10, .75);
}
The first list item displays the original defined background color; on hover, the second list item displays the resulting RGBa background color, as shown in Figure D:
You can also assign the RGBa styling as inline styles, as in the example HTML code shown below:
<div style="background: rgba(11,156,49,0.2);">rgba(11,156,49,0.2)</div>
<div style="background: rgba(11,156,49,0.4);">rgba(11,156,49,0.4)</div>
<div style="background: rgba(11,156,49,0.6);">rgba(11,156,49,0.6)</div>
<div style="background: rgba(11,156,49,0.8);">rgba(11,156,49,0.8)</div>
<div style="background: rgba(11,156,49,1);">rgba(11,156,49,1)</div>
The code above displays the spectrum of RGBa colors in Figure E:
Fallback for IE
For IE, the alpha channel is not recognized, so as a fallback you have to normalize the alpha channel into the setting you want for each of the color channels. There are several ways to convert RGBa color channels to RGB color channels using algorithms and calculations; however, my quick cheat is to use the color picker in Photoshop to find the RGB channel values of the resulting display, as shown in Figure F.
The following are the converted values for each instance of the partial green spectrum listed below:
rgba(11,156,49,0.2) converts to rgb(205,234,212)
rgba(11,156,49,0.4) converts to rgb(156,214,171)
rgba(11,156,49,0.6) converts to rgb(107, 194, 130)
rgba(11,156,49,0.8) converts to rgb(58, 174, 89)
And of course the last value, rgba(11,156,49,1), is already fully opaque, so the only change for the fallback is to remove the alpha channel: it becomes rgb(11, 156, 49). The fallback as displayed in IE 8 is shown in Figure G:
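If you would rather calculate the fallback than sample it in Photoshop, the usual approach is to composite each channel over the page background, assumed here to be plain white. A small Python sketch (results may differ by a point or two from color-picker sampling because of rounding):

def rgba_to_rgb(r, g, b, a, bg=(255, 255, 255)):
    # Composite each channel over the background: a*channel + (1-a)*background
    return tuple(round(a * c + (1 - a) * k) for c, k in zip((r, g, b), bg))

print(rgba_to_rgb(11, 156, 49, 0.2))   # (206, 235, 214), close to rgb(205,234,212) above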
Ryan has performed in a broad range of technology support roles for electric-generation utilities, including nuclear power plants, and for the telecommunications industry. He has worked in web development for the restaurant industry and the Federal government. |
What is meant by the term "Canine Bloat"?
This is a term that is synonymous with the more scientific term "Gastric Dilatation/Volvulus." It is often called GDV. That means that a dog's stomach twists on its long axis and distends with air to the point where the dog goes into shock and may die.
Dilatation means that the stomach is distended with air, but it is located in the abdomen in its correct place (has not twisted). Volvulus means that the distention is associated with a twisting of the stomach on its longitudinal axis.
How or why does this occur?
We really do not know the answer to either of these questions. Original theories suggested that it occurred when a dog ate a large meal and then engaged in strenuous exercise. However, there is no scientific evidence to support this theory. In most cases, the cause is undeterminable. No specific diet or dietary ingredient has been shown to lead to bloat.
The most commonly affected breeds are those that are “deep-chested,” meaning the length of the chest is relatively longer in proportion to its width. Examples of deep-chested breeds are Great Danes, Setters, Boxers, and Greyhounds. Studies have shown that purebred dogs are more than three times as likely to bloat as mixed-breed dogs.
Why is it so serious?
When the stomach becomes excessively distended, it causes severe abdominal pain. When the stomach twists, it cuts off its own blood supply as well as the escape routes for the trapped air. Often, the twisting of the stomach leads to rotation of the spleen and compromise to its blood flow. Furthermore, the size and location of the enlarged stomach reduces return blood flow to the heart leading to shock and death if untreated.
When the stomach is distended, digestion stops. This results in the accumulation of toxins that are normally removed from the intestinal tract. These toxins activate several chemicals that cause inflammation, and the toxins are absorbed into circulation. This causes problems with the blood clotting factors so that inappropriate clotting occurs within blood vessels. This is called disseminated intravascular coagulation (DIC) and is usually fatal.
Another complication is cardiac arrhythmia. This abnormal heart rhythm occurs when the heart is deprived of an adequate blood flow and some of its cells begin to die. Dogs must be monitored carefully for this complication. If it occurs and is not treated, it can be fatal.
How can I tell if my dog has bloat?
Your dog’s abdomen will likely become taut and distended. This is usually visible near the ribs, but depends on the dog’s conformation. The dog may appear depressed and/or painful, and often adopts a “praying position” with the front legs extended fully. The biggest clue is that your dog may have “dry-heaves”: the dog retches continuously, but no vomit is produced. This indicates a life-threatening emergency and your dog should be brought to the hospital immediately.
How is Bloat diagnosed?
The first step is to establish that the stomach is distended with air. The presence of a rapidly developing distended abdomen in a large breed dog is often enough evidence to make a tentative diagnosis of GDV. A radiograph (x-ray) is used to confirm that the diagnosis is dilatation. In most cases, it can also identify the presence of volvulus. Some dogs experience a chronic form of the disease in which the stomach is partially twisted. Distention with air does not occur because the partial twist permits air that accumulates to be expelled out the mouth or into the small intestines. Repeated vomiting is the most common sign. It is diagnosed with radiographs (x-rays) of the stomach that show an abnormal shape to the stomach.
What is done to save the dog's life?
There are several important steps that must be taken quickly.
1) Shock must be treated with administration of large quantities of intravenous fluids. They must be given quickly; some dogs require more than one intravenous line.
2) Pressure must be removed from within the stomach. In some cases, this may be done with a tube that is passed from the mouth to the stomach. However, if the stomach is twisted, the tube cannot enter it. Instead, a large bore needle is inserted through the skin into the stomach and the trapped air is released. A third method is to make an incision through the skin into the stomach and to temporarily suture the opened stomach to the skin. The last method is usually done when the dog's condition is so grave that anesthesia and abdominal surgery is not possible.
3) The stomach must be returned to its proper position. This requires abdominal surgery that can be risky because of the dog's condition.
4) The stomach wall must be inspected for areas that may be dead due to compromised blood supply. Although this is a very bad prognostic sign, these area(s) of the stomach should be surgically removed. The spleen must also be assessed for viability, and a splenectomy may be necessary.
5) The stomach must be attached to the abdominal wall (gastropexy) to prevent recurrence of GDV. This procedure greatly reduces the likelihood of recurrence, but does not completely eliminate it.
6) Abnormalities in the rhythm of the heart (arrhythmias) must be diagnosed and treated. Severe arrhythmias can become life threatening at the time of surgery and for several days after surgery. An electrocardiogram (ECG) is the best method for monitoring the heart's rhythm.
What are the survival and recurrence rates?
These are largely determined by the severity of the distention, the degree of shock, how quickly treatment is begun, and the presence of complications, especially those involving the heart. Approximately 60 to 70% of dogs survive. This survival rate drops drastically to approximately 20% if surgery is not performed. Following successful surgery, the recurrence rate is 6%. Approximately 75% of dogs that do not undergo surgery have another bloat episode.
What can be done to prevent it from occurring again?
The most effective means of prevention is gastropexy, the surgical attachment of the stomach to the body wall. This will not prevent dilatation (bloat), but it will prevent volvulus in most cases. |
The ethnos was a category of Greek state which existed alongside the polis. Ethnē (pl.) are diverse, with no single form of constitution. In ethne, by contrast with poleis (which retained autonomy), individual communities surrendered some political powers (usually control of warfare and foreign relations) to a common assembly. By contrast with poleis, the role of urban centres in ethne varied greatly; settlement structures range from a high degree of urbanization and local autonomy (e.g. Boeotia, which was tantamount to a collection of small poleis) to scattered villages with little urban development (e.g. Aetolia). Although the ethnos is sometimes equated with primitive tribalism, social and political developments from the 8th cent. bc onwards (e.g. in religion and colonization) often bear comparison with evidence from poleis, and the ethnos was a long‐lived phenomenon.
Subjects: Classical Studies. |
Researchers have found that global changes, including warming temperatures and increased levels of carbon dioxide in the atmosphere, are causing a decrease in the availability of a key nutrient for terrestrial plants. This could affect the ability of forests to absorb carbon dioxide from the atmosphere and reduce the amount of nutrients available for the creatures that eat them.
"Even if atmospheric carbon dioxide is stabilized at low enough levels to mitigate the most serious impacts of climate change, many terrestrial ecosystems will increasingly display signs of too little nitrogen as opposed to too much," said study co-author Andrew Elmore of the University of Maryland Center for Environmental Science. "Preventing these declines in nitrogen availability further emphasizes the need to reduce human-caused carbon dioxide emissions."
Although the focus on nitrogen availability is often on developed, coastal regions, such as the Chesapeake Bay, that struggle with eutrophication -- runoff of nitrogen pollution from fertilized farms and lawns that feeds algae blooms and leads to the reduction in oxygen in the waters -- the story is very different on less developed land, such as the mountains of western Maryland.
"This idea that the world is awash in nitrogen and that nitrogen pollution is causing all these environmental effects has been the focus of conversations in the scientific literature and popular press for decades," said Elmore. "What we're finding is that it has hidden this long-term trend in unamended systems that is caused by rising carbon dioxide and longer growing seasons."
Researchers studied a database of leaf chemistry of hundreds of species that had been collected from around the world from 1980-2017 and found a global trend in decreasing nitrogen availability. They found that most terrestrial ecosystems, such as forests and land that has not been treated with fertilizers, are becoming more oligotrophic, meaning too little nutrients are available.
"If nitrogen is less available it has the potential to decrease the productivity of the forest. We call that oligotrophication," said Elmore. "In the forested watershed, it's not a word used a lot for terrestrial systems, but it indicates the direction things are going."
Nitrogen is essential for the growth and development of plants. On the forest floor, microbes break down organic matter, such as fallen leaves, and release nitrogen to the soil. The tree retrieves that nitrogen to build proteins and grow. However, as trees gain access to more carbon, more and more microbes are becoming nitrogen limited and releasing fewer nutrients to the trees.
"This new study adds to a growing body of knowledge that forests will not be able to sequester as much carbon from the atmospheric as many models predict because forest growth is limited by nitrogen," said Eric Davidson, director of the University of Maryland Center for Environmental Science's Appalachian Laboratory. "These new insights using novel isotopic analyses provide a new line of evidence that decreases in carbon emissions are urgently needed."
In the U.S. and Europe, regulations on coal-fired power plants have reduced the amount of nitrogen deposition as a consequence of clean air regulations trying to combat acid rain. At the same time, increasing carbon dioxide levels in the atmosphere and longer growing seasons are increasing the nitrogen demand for plants to grow.
"There are now multiple lines of evidence that support the oligotrophication hypothesis," said study co-author Joseph Craine, an ecologist with Jonah Ventures. "Beyond declines in leaf chemistry, we are seeing grazing cattle become more protein limited, pollen protein concentrations decline, and reductions of nitrogen in many streams. These dots are starting to connect into a comprehensive picture of too much carbon flowing through ecosystems."
Materials provided by University of Maryland Center for Environmental Science. Note: Content may be edited for style and length.
THE EARLY CIVILIZATIONS
Lesson titles: "The Nile: How Did It Make Ancient Egypt Great?," "Hammurabi’s Code: What Can It Teach Us about Ancient Mesopotamia?," "The Mystery of India’s First Civilization: What Happened to Harappa?," and "The Great Wall: Why Did Ancient China Build It?"
This title is part of the series: THE EARLY CIVILIZATIONS
8½" x 11" |
Free Science Worksheets & Printables
Science4Us provides many hands-on activities as part of its complete K-2nd grade science curriculum. These free offline activities can be used in your class as a whole-group project or a student-center activity, or you can assign them as homework.
Students identify and label examples of physical and chemical changes found in pictures.
Students use this strategy to record definitions, details, examples and non-examples of concepts being studied.
Students use this prepared worksheet to analyze word parts and sort suffixes.
Students use this prepared worksheet to sort words by inflectional endings.
Students demonstrate knowledge of key vocabulary concepts using this cooperative learning strategy.
Students draw pictures in response to teacher-read clues about particular objects or substances that are being studied.
Students use this writing template to summarize experiments in a narrative format.
Generate small groups of objects that have a particular characteristic in common. Add one object to each group that does not fit. Use words or pictures to represent each object, and arrange them in rows. Students identify the object from each group that does not belong. |
We thought young gas giant planets would be large and low-density, but the gas giants around a star that is just 20 million years old don’t fit this model
2 December 2021
Two Jupiter-like planets that orbit a young star are much smaller than expected, which may suggest we need to rethink our ideas of the early evolution of gas giant planets.
Our current understanding of giant planetary evolution predicts that these worlds start out as large, low-density objects. “We expect them to be like very giant, fluffy balls of gas,” says Alejandro Suárez Mascareño at the Institute of Astrophysics of the Canary Islands in Spain.
Then, over the course of a few hundred million years, the planets are expected to slowly contract until they reach their final size, roughly similar to the size of Jupiter or Saturn in our solar system. However, due to the difficulty in monitoring infant planetary systems, these predictions have remained untested until now.
A young star – just 20 million years old – known as V1298 Tau has given astronomers a rare window into the formation of gas giants. In the early 2010s, the Kepler space telescope observed that V1298 Tau is orbited by four gas giants. Suárez Mascareño and his colleagues followed up on the discovery by monitoring the star and its planets between April 2019 and April 2020.
Of the four planets, the researchers found that the two outermost planets – V1298 Tau b and V1298 Tau e – had features they hadn’t predicted. They are around 0.64 and 1.16 times the mass of Jupiter respectively, while their radii are 0.868 and 0.735 times that of Jupiter. This means the two planets are much smaller and denser than the researchers had expected, which suggests they contracted faster than our current ideas indicate.
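A quick back-of-the-envelope check shows why those figures imply unusually dense planets. Density scales as mass divided by radius cubed, so in Jupiter-relative units (a rough sketch using the values quoted above):

# Density relative to Jupiter = (mass ratio) / (radius ratio)**3
for name, mass, radius in [("V1298 Tau b", 0.64, 0.868),
                           ("V1298 Tau e", 1.16, 0.735)]:
    print(name, round(mass / radius**3, 2), "times Jupiter's density")
# V1298 Tau b 0.98 times Jupiter's density
# V1298 Tau e 2.92 times Jupiter's density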
As this is one of the first detailed studies of such a young planetary system, it is unclear whether these features are normal or strange, says Suárez Mascareño. It may be that our understanding of giant planet evolution is wrong. Alternatively, the planets may be unusual gas giants with cores that are abnormally massive, which would accelerate the contraction process.
“Our understanding of the early stages of planetary evolution and planetary systems evolution might actually be very limited,” says Suárez Mascareño. “Right now, this case contradicts our previous knowledge. But it’s one case – you cannot do a generalisation from one case.”
By collecting more data on more infant planetary systems, Suárez Mascareño hopes to shine a light on the formation of our own solar system.
Journal reference: Nature Astronomy, DOI: 10.1038/s41550-021-01533-7
Sign up to Lost in Space-Time, a free monthly newsletter on the weirdness of reality
More on these topics: |
Honey bees are important insects in many of the ecosystems around us. However, they are constantly exposed to many threats. They are particularly vulnerable to disease and parasites because they live in colonies. Once a member of the colony gets infected, it is very easy for the disease to spread. This is why honey bees have some specific behaviors to help them fight infections. In this study we wanted to see how honey bee behavior changes when Varroa destructor mites come into the colony.
We found that in infected colonies there was more social distancing between different types of bees. We also discovered that the infection led to higher interactions between bees of the same type. We think this helps bees fight the spread of the parasite. |
What is the relationship between moral autonomy and authority?
The incompatibility of the concept of authority with the rationale of autonomy was similarly expressed by the political philosopher Raz, who pointed out that authority sometimes requires acting against one's own judgment and consequently requires abandoning one's moral autonomy, and since all practical …
What is autonomy and authority?
(1) Authority is the right to be obeyed. (2) Obedience is doing something because someone tells you to do it. (3) Autonomy is self-legislation: never doing something merely because someone tells you to do it.
What is the difference between authority and autonomy?
Autonomy refers to the ability to make one’s own decisions and do what a person likes. It means self-direction while authority involves direction from outside. Authority is a sense of power over others while autonomy is the inherent power of individuals or organizations to make decisions and do what they want.
How can you explain the term moral autonomy?
Moral autonomy is the quality of being self-governing or self-determining, i.e., acting independently without the influence or distortion of others. The concept of moral autonomy helps in improving self-determination. Moral autonomy concerns a person's independent attitude toward moral and ethical issues.
What is conflict between authority and autonomy?
If there is an authority which is legitimate, then its subjects are duty bound to obey it whether they agree with it or not. Such a duty is inconsistent with autonomy, with the right and the duty to act responsibly, in the light of reason. Hence, Wolff’s denial of the moral possibility of legitimate authority.
Is the ability to think critically and independently about moral issues?
Thinking critically about moral issues will provide you with the opportunity to refine and enrich your own moral compass, so that you will be better equipped to successfully deal with the moral dilemmas that we all encounter in the course of living.
How does autonomy affect behavior?
Because autonomy concerns regulating behavior through the self, it is enhanced by a person’s capacity to reflect and evaluate his or her own actions. One can learn to engage in reflection that is free, relaxed, or interested, which can help one to avoid acting from impulse or from external or internal compulsion.
Is authority an autonomy?
As nouns, the difference between authority and autonomy is that authority is the power to enforce rules or give orders, while autonomy is self-government; freedom to act or function independently.
What is the conflict between authority and autonomy according to Wolff?
In the first chapter of his book In Defense of Anarchism, Wolff argues that legitimate authority and autonomy are incompatible and that because autonomy is necessary, legitimate authority must not be possible. |
Your kidneys are two bean-shaped organs that regulate important functions in your body, such as:
- removing waste from your blood
- balancing bodily fluids
- forming urine
Each kidney typically has one vein that carries blood filtered by the kidney into the circulatory system. These are called the renal veins. Usually there’s one on the right and one on the left. However, there can be variations.
In nutcracker syndrome, symptoms are most often caused when the left renal vein, coming from the left kidney, becomes compressed and blood can't flow normally through it. Instead, blood flows backwards into other veins and causes them to swell. This can also increase pressure in your kidney and cause the symptoms described below.
There are two main types of nutcracker syndrome: anterior and posterior. There are also several subtypes. Some experts put these subtypes into a third category known as “mixed.”
In anterior nutcracker syndrome, the left renal vein is compressed between the aorta and another abdominal artery. This is the most common type of nutcracker syndrome.
In posterior nutcracker syndrome, the left renal vein is typically compressed between the aorta and the spine. In the mixed type, there’s a wide range of blood vessel changes that can cause symptoms.
Nutcracker syndrome got its name because the compression of the renal vein is like a nutcracker cracking a nut.
When the condition shows no symptoms, it’s usually known as nutcracker phenomenon. Once symptoms occur it’s called nutcracker syndrome. Common signs and symptoms include:
- blood in your urine
- pelvic pain
- pain in your side or abdomen
- protein in your urine, which can be determined by a doctor
- pain during intercourse
- enlarged veins in testicles
- lightheadedness while standing, but not while sitting
The specific causes of nutcracker syndrome can vary.
Some conditions that may increase the chance of developing nutcracker syndrome include:
- pancreatic tumors
- tumors in the tissue lining your abdominal wall
- a severe lower spine curve
- nephroptosis, when your kidney drops into your pelvis when you stand up
- an aneurysm in your abdominal aorta
- rapid changes in height or weight
- low body mass index
- enlarged lymph nodes in your abdomen
In children, rapid growth during puberty can lead to nutcracker syndrome. As body proportions change, the renal vein can become compressed. Children are more likely to have fewer symptoms compared with adults. Nutcracker syndrome isn’t inherited.
First, your doctor will perform a physical exam. Next, they’ll take a medical history and ask about your symptoms to help them narrow down a possible diagnosis.
If they suspect nutcracker syndrome, your doctor will take urine samples to look for blood, protein, and bacteria. Blood samples can be used to check blood cell counts and kidney function. This will help them narrow down your diagnosis even further.
Next, your doctor may recommend a Doppler ultrasound of your kidney area to see if you have abnormal blood flow through your veins and arteries.
Depending on your anatomy and symptoms, your doctor also may recommend a CT scan or MRI to look more closely at your kidney, blood vessels, and other organs to see exactly where and why the vein is compressed. They might also recommend a kidney biopsy to help rule out other conditions that can cause similar symptoms.
In many cases, if your symptoms are mild, your doctor will likely recommend observation of your nutcracker syndrome. This is because it can sometimes go away on its own, particularly in children. In children under 18, studies show that the symptoms of nutcracker syndrome often resolve on their own.
If your doctor does recommend observation, they’ll do regular urine tests to track your condition’s progression.
If your symptoms are more severe or don’t improve after an observation period of 18 to 24 months, you might need treatment. There are a variety of options.
A stent is a small mesh tube that holds the compressed vein open and allows blood to flow normally. This procedure has been used for nearly 20 years for the treatment of this condition.
Your doctor can insert it by cutting a small slit in your leg and using a catheter to move the stent into the proper position inside your vein. However, like any procedure, there are risks. These include:
- blood clots
- blood vessel injury
- severe tears in the blood vessel wall
Stent placement requires an overnight hospital stay and full recovery can take several months. You and your doctor should discuss the risks and benefits of this procedure, as well as other treatment options.
Blood vessel surgery
If you have more severe symptoms, blood vessel surgery may be a better option for you. Your doctor might recommend a variety of surgical procedures to relieve pressure on the vein. Options can include moving the vein and reattaching it, so it’s no longer in an area where it would be compressed.
Another option is bypass surgery, in which a vein taken from elsewhere in your body is attached to replace the compressed vein.
Recovery from surgery depends on the type of surgery and your overall health. It generally takes several months.
Nutcracker syndrome can be hard for doctors to diagnose, but once it’s diagnosed, the outlook is often good. Correcting the condition depends upon the cause.
In many cases in children, nutcracker syndrome with mild symptoms will resolve itself within two years. If you have more severe symptoms, a variety of options may be available to correct the affected vein and have good results for short- and long-term relief.
In those with nutcracker syndrome due to certain medical conditions or tumors, correcting the blood flow problem requires correcting or treating the underlying cause. |
Soil pollution is a condition in which soil, at the surface or even underground, is contaminated by pollutants or contaminants.
Soil pollution can occur for many reasons, but it is generally caused either by human activity or by nature; in other words, there is both natural and artificial soil pollution.
It is the presence of highly toxic and dangerous chemicals, in the form of pollutants and contaminants, that causes soil pollution to occur. Soil pollution damages human health as well as the health of flora and fauna.
Looking at cases of soil pollution, many are caused by natural processes within the soil itself. This occurs when the levels of naturally occurring contaminants become high enough to contaminate the soil and pose a risk.
Even when contamination arises from natural processes in the soil, it is still dangerous and has the potential to cause many negative impacts on life.
The same is true of air pollution and water pollution: soil pollution is not a simple matter. In other words, it is a problem that needs serious attention, and over the last few decades it has become an increasingly serious problem.
Other environmental problems that are just as serious include climate change, global warming, and animal extinction.
Pollution in soil is caused by the deposition of waste materials, both solid and liquid. The deposition occurs from the ground surface down to underground layers, and the process can contaminate the soil and, of course, the groundwater as well.
This definition follows the explanation given in the Britannica dictionary. Soil pollution is also not the only form of pollution that exists.
Causes of Soil Pollution
As mentioned earlier, soil pollution is very dangerous to human life and has many negative impacts. We therefore need to know its causes so that we can reduce or prevent it.
The causes of soil pollution should be widely known; with that knowledge, there is hope that human life can change for the better.
1. Caused by Organic and Inorganic Waste
The first cause of soil pollution is organic waste and inorganic waste. Organic waste is waste that decomposes easily and quickly, while inorganic waste takes a long time and a long process to decompose.
By comparison, organic waste is much less of a problem than inorganic waste, which is very dangerous; the process and time of decomposition affect how hazardous each type of waste is.
However, that does not mean that organic waste does not have a negative impact. If the soil has an excess of organic waste, it will affect the growth and development of plants in its environment.
Organic waste usually comes from households and small industries; examples include food scraps, leftover vegetables, and rotting leaves.
In contrast to organic waste, which is not too dangerous, inorganic waste is very dangerous for the environment. Because it is so difficult to decompose, it contaminates soil far more readily.
Examples of inorganic waste include used plastics, cans, bottles, and other items not made from organic materials.
Besides polluting the soil, inorganic waste also dirties the environment. A place with a lot of inorganic waste becomes a mosquito nest, because discarded items collect rainwater in which mosquitoes like to breed.
This waste may be removed by burning, but this burning can also cause damage and pollution to the air.
2. Liquid Waste and Solid Waste
The next cause of soil pollution is liquid and solid waste. People often take lightly the liquid waste left over from manufacturing products. The majority of this liquid waste is produced by the industrial sector, both large and small.
Much of it comes from factories; however, liquid waste from households and domestic activities is still the largest contributor to soil pollution.
Examples of household liquid waste include water used for washing clothes and dishes, detergent water, and carbolic-acid water used for mopping floors.
This liquid waste heavily pollutes the soil because it dissolves and soaks in, destroying the substances the soil contains.
Solid waste is just as dangerous. It is the solid residue of production or consumption activities.
Like liquid waste, solid waste is produced by the industrial sector, for example by pulp mills. Domestic activities also contribute a great deal of solid waste, such as leaves, plastic, and paper.
Human life can never be separated from solid waste. In the end, it contaminates and pollutes the soil and disrupts the life cycle. Its impact is also highly visible: it is unsightly.
3. Agricultural Waste
Without our realizing it, agricultural activities can also cause soil pollution. Farming produces a large amount of hazardous waste; the hazardous substances used in agriculture are chemical fertilizers and pest repellents, namely pesticides.
Pesticides and fertilizers contain many harmful chemicals that seep into the soil when they are used. These chemicals can damage the structures and living tissues in the soil.
If this happens continuously, the soil becomes infertile and polluted, and is no longer suitable for farming.
4. Forest Fire
Apart from waste, other human activities can also pollute the soil; forest fires are one example. Once a forest has burned, it is difficult for plants to grow there again.
Soil contamination due to forest fires occurs because the important substances contained in the soil are destroyed by the fire.
5. Natural Disasters
Among natural factors, natural disasters can cause soil pollution, especially floods.
Floods can cause the layers of nutrients in the soil to slowly disappear as they are carried away by the current. The loss of nutrients leaves the soil degraded and polluted.
In addition to flooding, volcanic eruptions can also pollute the soil. Volcanic ash, sand, and other hazardous materials released by a volcano can cover and dry out the soil.
Although volcanic ash and other volcanic materials damage the soil at first, once conditions return to normal, the land that was covered becomes more fertile and loose over time.
Impact of Soil Pollution
Now that we know and understand the causes of soil pollution, we should understand its impacts. They are as follows:
1. Impact on Health
Soil pollution can disturb human health, and this is one of its most dangerous effects.
Many health problems are caused by soil pollution. One pathway is inhaling gases that rise slowly from the ground; another is breathing in soil particles stirred up and carried along by various human activities.
Soil contamination can cause various illnesses, such as headaches, nausea, relatively mild skin rashes, eye irritation, and respiratory problems. More serious conditions include neuromuscular blockage, kidney damage, liver damage, and cancer.
Short-term illnesses caused by soil contamination include:
- Vomiting and nausea
- Chest pain
- Rashes on the skin
- Eye irritation
- Breathing problems, especially affecting the lungs
Besides causing short-term illnesses, soil contamination can also create long-term disease. Inhaling soil particulates and eating contaminated food can lead to poor health conditions that require serious treatment.
Long-term diseases caused by soil contamination include various types of cancer, among them leukemia, one of the most dangerous. These can be caused by contact with soil contaminated by harmful chemicals such as gasoline and benzene.
2. Nervous System Damage
Soil pollution can also damage the body's nervous system. This is due to lead (Pb), a hazardous substance that enters the soil. Lead pollution deserves particular attention because it can harm children, who often play in soil.
3. Neuromuscular blockage
Soil contamination can also cause neuromuscular blockage, which can be fatal. This blockage produces depression of the central nervous system.
4. Kidney Damage
Polluted soil can contain mercury. Mercury is a dangerous substance with the potential to cause kidney damage.
5. Liver Damage
Just like the kidneys, the liver can be damaged by the mercury content of the soil, whether at the surface or underground.
2. Impact on the Ecosystem
Having covered human health, we turn next to the impact of soil pollution on ecosystems.
Soil is the part of the earth whose chemical content changes most easily; in fact, even its structure is highly changeable.
Changes in the soil's structure and content affect the metabolism of all the organisms that live in it. If soil organisms decline, the ecosystem suffers, and the cycle of the food chain can eventually break.
3. Lowers Soil Fertility
The next impact of soil pollution is the loss of soil biota, or microflora, from the soil. Losing these biota is very detrimental because the soil is no longer as loose and fertile as before.
Book Recommendations About Environmental Knowledge
This article has explained soil pollution: its causes, its impacts, and solutions for overcoming it. If you want to learn more about the environment, take a look at the books we recommend below!
Methods and Studies of Biological & Environmental Resources
Sustainable Environmental Management : Jatna Supriatna
Environmental Cartoons (2021)
Soil Pollution Solutions
Dangerous as it is, soil pollution can still be tackled in many ways. Some solutions are as follows:
1. Avoiding Excessive Agricultural Activities
Farming is normal and necessary, but it should not be excessive. Over-intensive planting and clearing of ground cover can lead to natural disasters, causing soil erosion and flooding.
In addition, we must reduce the use of the harmful chemical fertilizers and pesticides commonly used to repel pests, because these two substances are among the biggest contributors to soil pollution.
2. Reducing the Human "Waste Footprint"
Do you know what a "waste footprint" is? It is the waste we produce that is difficult to decompose. So we must try to reduce inorganic waste, such as plastic and other materials that take a long time to break down.
This needs to be done to reduce the total accumulation of harmful pollutants in the soil. How do we reduce our waste footprint? By making a habit of the 3R activities: Reduce, Reuse, and Recycle.
3. Soil Washing
Washing the soil helps remove various contaminants. It is done with clean water, separating contaminated soil from uncontaminated soil.
By using the soil washing method, humans can help make the environment healthier and less polluted without having to dig up the soil.
4. Bioremediation
The next method for overcoming soil pollution uses microorganisms that can help restore soil fertility.
These microorganisms play an important role in breaking down various contaminants and can restore the soil to its former fertility.
However, this bioremediation method requires a suitable temperature and good nutrient and oxygen levels in the soil.
5. Reduce Packaged Goods
Reducing purchases of packaged products can also help reduce soil contamination. Packaged products produce a lot of inorganic waste that eventually ends up in landfills and pollutes the soil. We already know how dangerous inorganic waste is, since it is very difficult to decompose.
6. Stop Throwing Garbage on the Ground
In order to create good and unpolluted soil, we must get used to not littering on the ground. Dispose of trash in its place and segregate it by type.
7. Organic Gardening
Next, make an organic garden and make it a habit to eat organic food that is not contaminated with harmful substances such as pesticides. By doing this, not only will the soil be healthy, but our bodies will also be healthy.
By: Ai Siti Rahayu |
Five friends Aman, Karan, Gaurav, Raman and Pawan are sitting around a circle facing the centre. Pawan is second to the left of Raman, who is to the immediate left of Gaurav. Karan is the neighbour of Gaurav. Then who is sitting to the immediate right of Gaurav?
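One way to check a puzzle like this is to brute-force the arrangements. The Python sketch below assumes the usual convention that, for people facing the centre, "to the left" means the clockwise direction:

from itertools import permutations

friends = ["Aman", "Karan", "Gaurav", "Raman", "Pawan"]

def left_of(pos, steps):
    # Facing the centre, "left" is the clockwise direction.
    return (pos + steps) % 5

for order in permutations(friends):
    seat = {name: i for i, name in enumerate(order)}   # clockwise seats 0..4
    if (left_of(seat["Gaurav"], 1) == seat["Raman"]        # Raman immediately left of Gaurav
            and left_of(seat["Raman"], 2) == seat["Pawan"] # Pawan second to Raman's left
            and (seat["Karan"] - seat["Gaurav"]) % 5 in (1, 4)):  # Karan next to Gaurav
        print(order[(seat["Gaurav"] - 1) % 5])   # immediate right of Gaurav -> Karan
        break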
Find the missing number if the same rule is followed in all three figures.
Which of the following statements is made on the basis of the information given in the table?
A. Different animals eat different types of food.
B. Some animals move to different places in order to meet their needs.
C. A salamander moves slower than a mouse.
D. Animals depend on each other for survival.
Look at the given pictures of children. Which of them may get burnt?
D. Both A and C
How is an iron not going to cause a burn?
Refer to the given figures of bacteria X, Y and Z and read the statements (i), (ii), (iii) and (iv) regarding them.
(i) Bacterium X can be causal organism of typhoid.
(ii) Bacterium Y can cause a disease which disrupts proper exchange of gases.
(iii) Bacterium Z can convert lactose sugar of milk into lactic acid.
(iv) Bacterium X can be the causal organism of cholera.
Which of these statements are incorrect?
B. (i) and (ii) only
C. (ii), (iii) and (iv) only
D. (i), (ii), (iii) and (iv)
The given diagram shows the changes in temperature when a substance is heated.
The regions in which particles of the substance have maximum interparticle distance, maximum interparticle forces and maximum kinetic energy are respectively
A. II, IV and V
B. V, I and V
C. V, I and III
D. I, III and V
Read the following statements and select the correct option.
Statement 1: Coniferous trees have needle-shaped leaves.
Statement 2: This helps them to cope with shortage of water when the ground is frozen in winter.
A. Both statements 1 and 2 are true and statement 2 is the correct explanation of statement 1.
B. Both statements 1 and 2 are true but statement 2 is not the correct explanation of statement 1.
C. Statement 1 is true but statement 2 is false.
D. Both statements 1 and 2 are false.
I thought that the needle-shaped leaves are present so that snow which falls on the leaves can slide off |
Astonishing Photos of "Lonely" Galaxy 3 Million Light Years From the Milky Way, Captured by NASA
"It's really a gorgeous image."
This week, NASA shared images of a "lonely" galaxy three million light years from Earth. Taken by the James Webb Space Telescope (JWST), they showed the dwarf galaxy (known as Wolf-Lundmark-Melotte, or WLM) in unprecedented detail, including thousands of stars. The galaxy was previously spotted by another telescope in 2016, but its much lower resolution only revealed a field of blurry spots.
The Webb telescope uses near-infrared spotting technology that can capture more, and clearer, images of celestial bodies. NASA hopes these images of WLM will help them study the formation and early days of the universe: The galaxy is so secluded that it retains a chemical composition similar to galaxies present when the universe was young.
"We think WLM hasn't interacted with other systems, which makes it really nice for testing our theories of galaxy formation and evolution," said Kristen McQuinn of Rutgers University, who works on the project, in a NASA blog post. "Many of the other nearby galaxies are intertwined and entangled with the Milky Way, which makes them harder to study." Read on to find out more.
The Webb telescope contains NIRCam, its near-infrared camera. It's the world's most powerful space observatory, capable of bringing distant stars into sharper focus than ever before and detecting space bodies that are invisible to the human eye. The images of WLM were taken as part of Webb's Early Release Science (ERS) program 1334, which focuses on nearby galaxies.
Yes, even though WLM is 3 million light-years away, it's considered relatively close to Earth. Discovered in 1909, it's about one-tenth the size of the Milky Way. Researchers believe it can unlock some mysteries about how the universe developed.
"Another interesting and important thing about WLM is that its gas is similar to the gas that made up galaxies in the early universe. It's fairly unenriched, chemically speaking," said McQuinn in a statement. "This is because the galaxy has lost many of these elements through something we call galactic winds."
"Although WLM has been forming stars recently—throughout cosmic time, really—and those stars have been synthesizing new elements, some of the material gets expelled from the galaxy when the massive stars explode," she said. "Supernovae can be powerful and energetic enough to push material out of small, low-mass galaxies like WLM."
The images provided by the Webb telescope are clarion-clear and sparkle with color. They have even seasoned scientists gushing with childlike wonder. "We can see a myriad of individual stars of different colors, sizes, temperatures, ages, and stages of evolution; interesting clouds of nebular gas within the galaxy; foreground stars with Webb's diffraction spikes; and background galaxies with neat features like tidal tails. It's really a gorgeous image," said McQuinn.
"And, of course, the view is far deeper and better than our eyes could possibly see. Even if you were looking out from a planet in the middle of this galaxy, and even if you could see infrared light, you would need bionic eyes to be able to see what Webb sees."
NASA hopes to use the new data to reconstruct how stars formed in the WLM galaxy. Low-mass stars can live for billions of years, so it's plausible that some stars within WLM formed during the early days of the universe. "By determining the properties of these low-mass stars (like their ages), we can gain insight into what was happening in the very distant past," said McQuinn. "It's very complementary to what we learn about the early formation of galaxies by looking at high-redshift systems, where we see the galaxies as they existed when they first formed."
WLM was first spotted in 1909 by astronomer Max Wolf. In 1926, fellow astronomers Knut Lundmark and Philibert Jacques Melotte were credited with describing the nature of the galaxy, so the star group carries the names of all three. It is part of the constellation Cetus. Although isolated, the galaxy appears to be quite active, forming new stars. "Telltale pinkish star forming regions and hot, young, bluish stars speckle the isolated island universe," says NASA. "Older, cool yellowish stars fade into the small galaxy's halo, extending about 8,000 light-years across." |