Venus is our rather weird neighbour, a strange world where the Sun rises in the west and sets in the east, and a day lasts longer than a year. And yet this searingly-hot planet began its life with much the same materials as Earth. So why is Venus so different? We begin our close encounter with Venus at the Observatoire de Paris, an institution founded in 1667 which has seen all manner of new ways to study the solar system – including a raft of devices to watch Venus pass in front of the Sun.
Thomas Widemann, planetary scientist at the Observatoire de Paris, showed us one of them, a precursor to the early film cameras: “This is a unique example of Janssen’s Revolver – this instrument was designed at the Observatory of Paris to observe the transit of Venus in 1874 and 1882.”
Today the Observatoire de Paris remains at the cutting edge of planetary science, and it is there that Widemann spends his time trying to solve the puzzle of the planet Venus.
“Venus and Earth are like sister planets. They were formed in the same part of the solar system, even closer than Earth and Mars are, with the same basic ingredients, the same gases, the same rocks that were spinning around the early solar system. And yet these two planets had completely different destinies,” he says.
So Venus began more or less like Earth. But now it’s bone dry and cloaked in a thick, choking atmosphere of sulphuric acid and CO2.
Håkan Svedhem, Venus Express Project Scientist at ESA, describes what it’s like on the surface: “It has a very dense atmosphere, up to 97% carbon dioxide, a very strong greenhouse effect, and the temperature down on the surface is more than 450 degrees Celsius. The pressure is 92 bars, almost a hundred times what it is on Earth, so it’s a very, very unpleasant place to be.”
Unpleasant, and also unusual – this is the only planet which rotates clockwise. There’s more, as Michel Breitfellner, ESA’s Venus Express Science Operations Coordinator, tells Euronews: “Venus is the only planet in the solar system that needs more time to rotate once around its own axis than to travel once around the Sun. So it’s 243 days for one Venus day, and 224 days for one trip around the Sun.”
In 2005 ESA launched the Venus Express spacecraft to have a closer look at this oddball planet. After eight years of scanning the dense clouds below, the team sent Venus Express skimming into the top layer of the atmosphere, a technique known as aerobraking.
Don Merritt from ESA’s astronomy base near Madrid explains that the spacecraft’s underside was the face that went first into the atmosphere. “This face of the spacecraft, which had been attached to the rocket originally when it was launched, was most able to take the forces and the temperatures. We also turned the solar panels, to maximise the amount of friction and to get the most amount of braking.”
The aerobraking manoeuvre offered the first-ever close-up view of Venus’ upper atmosphere, and it wasn’t what was expected.
“What we saw that was a little unusual was the variability in the pressure, as if there were waves within the atmosphere. And so that possible wave-like structure was not expected, and analysing that data will keep scientists busy for a little while yet,” Merritt explains.
The science team has another puzzle from the Venus Express data. They’ve noticed that the Venusian winds are getting faster.
Håkan Svedhem explains: “When we arrived at Venus eight years ago we detected winds of 300 kilometres per hour – very fast – but what has happened during these years until now is that they have actually increased. We have now seen winds of 400 kilometres per hour, and we can’t really explain why that has happened.”
Yet more riddles lie in the landscape of the planet. One of the very few photographs of the surface of Venus, taken by Russian probe Venera 13, shows no sign of volcanoes, but plenty of volcanic rock.
Widemann has an idea what may be happening: “The surface of Venus is relatively young, on the scale of the solar system. And there’s a contradiction for us between the absence of volcanic and tectonic activity today, and this surface which despite everything appears quite young. So maybe there are some rare, powerful and violent geological processes that could resurface the planet in a catastrophic way, creating a kind of rebirth of the Venusian crust.”
So what could possibly have happened to Venus to turn it into such a hellish place? There are some theories, if not yet proof:
Breitfellner says: “There must have been a major disaster in the early history of the planet, when it collided with another large object, and this made it stop its rotation, and I think this was really the turning point in the life of Venus.”
Clues as to what happened in the past may yet be found in the vast amounts of data gathered by Venus Express, and the planet itself will continue to entrance and surprise.
“Venus is the brightest planet in the sky. It’s the brightest light after the Sun and the Moon. It’s a planet that’s part of our cultural heritage. So it’s this special personality of Venus as a shining, cultural object that attracts me the most. Maybe even more than the scientific reasons that we’ve talked about,” says Widemann.
Christina Porter teaches English and is a Literacy Coach at Revere High School, Revere, MA
What's On for Today and Why
Assigning a final project that asks students to demonstrate their understanding of character using both print and non-print media is an informative way to assess their comprehension. For this project, students choose two characters from Hamlet and assign each character an object that either symbolizes the character or that the character might carry around with him/her. Students will include a short analysis of the object and provide textual evidence that connects the character to the object. Students will present their objects to the whole class or in small groups.
This lesson can be assigned in one class period. Students may need several days to collect their objects and prepare accompanying index cards.
What You Need
Shoe boxes (8-10)
Folger edition of Hamlet
Available in Folger print edition and Folger Digital Texts
What To Do
1. After finishing reading the play, explain to students that they will be doing a character analysis project using objects that they choose and assign to specific characters.
2. Place several shoe boxes around the classroom, labeled with the names of major characters in the play.
3. Give students Handout #1 that explains project details.
4. Have students select two characters from the play that interest them.
5. Have students select an object for each character that either symbolizes the character or that the character may carry around with him/her. The objects should fit inside the shoe boxes provided.
NOTE: Because Hamlet does include weapons, remind students that they may not bring sharp objects, such as knives, into school, even for a project.
6. Each object should be accompanied by an index card that contains the following information:
- student's name
- a detailed description of the object
- an explanation of what the object has to do with the character
- a quotation from the text that somehow connects that object to the character
(You may want to suggest that students consult an online concordance, such as www.opensourceshakespeare.com, to search for specific words within the play.)
7. Have students complete Handout #2 which is a grading rubric for the project.
8. Have students present their objects to the whole class or in small groups.
How Did It Go?
Were the students able to choose objects that accurately symbolized a character OR that the character may carry around with him/her? Did the students describe their objects with appropriate detail? Were students able to locate textual evidence that connects their character to the object? When presenting, did students clearly describe their object and verbally articulate the connection to a specific character?
If you used this lesson, we would like to hear how it went and about any adaptations you made to suit the needs of YOUR students.
A Database Management System (DBMS), sometimes called a database manager or database system, is a set of computer programs that controls the creation, organization, maintenance, and retrieval of data from a database stored in a computer. It allows individuals and applications to easily access and use the data in the database. A good database system helps end users easily access and use existing data, and stores new data in a systematic way. The DBMS, rather than the user, keeps track of the actual physical location of the data.
A DBMS is a system software package that ensures the integrity and security of the data. The most typical DBMS is the relational database management system (RDBMS); a newer kind is the object-oriented database management system (ODBMS). DBMSs are categorized according to their data types and structure. A DBMS accepts requests for data from an application program and instructs the operating system to transfer the appropriate data to the end user. The standard user and program interface is the Structured Query Language (SQL).
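As a concrete illustration, here is a minimal sketch of an application handing SQL requests to a DBMS, using Python's built-in sqlite3 module (a small embedded RDBMS). The database file, table, and column names are invented for the example:

```python
import sqlite3

# The DBMS (here SQLite) manages the physical storage; the application
# only issues SQL requests and receives rows back.
conn = sqlite3.connect("company.db")  # hypothetical database file
conn.execute(
    "CREATE TABLE IF NOT EXISTS employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
)
conn.execute("INSERT INTO employee (name, dept) VALUES (?, ?)", ("Alice", "Sales"))
conn.commit()

# A standard SQL query: the DBMS locates and returns the matching data.
for row in conn.execute("SELECT id, name FROM employee WHERE dept = ?", ("Sales",)):
    print(row)

conn.close()
```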
There are many database management systems, such as MySQL, PostgreSQL, Microsoft Access, SQL Server, FileMaker, Oracle, dBASE, Clipper, and FoxPro, that work independently but also allow other database systems to be integrated with them. For this purpose, DBMS software commonly ships with an Open Database Connectivity (ODBC) driver that enables such integration.
A DBMS includes four main parts: a modeling language, data structures, a database query language, and transaction mechanisms.
Modeling Language: A data modeling language defines the schema (the overall structure of the database) of each database hosted in the DBMS, according to the DBMS's database model. The schema specifies data, data relationships, data semantics, and consistency constraints on the data. The four most common types of database models are the hierarchical, network, relational, and object models.
The optimal structure depends on the natural organization of the application's data, and on the application's requirements that include transaction rate (speed), reliability, maintainability, scalability, and cost.
Data Structures: Data structures, which include fields, records, files, and objects, are optimized to deal with very large amounts of data stored on permanent storage devices such as hard disks, CDs, DVDs, and tape.
Database Query Language: Using the database query language, users can formulate requests and generate reports. It also controls the security of the database. The query language and the report writer allow users to interactively interrogate the database, analyze its data, and update it according to the user's privileges. Access to personal records typically requires a password, so that only authorized users can retrieve individual records from among the rest, for example the individual records of each employee in a factory.
Transaction Mechanisms: The transaction mechanism ensures data integrity despite concurrent user access and faults. It maintains the integrity of the data in the database by not allowing more than one user to update the same record at the same time. Unique index constraints prevent duplicate records; for example, no two customers with the same customer number (key field) can be entered into the database.
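A small sketch of both ideas, again with Python's sqlite3 (the table and values are made up): a UNIQUE constraint rejects a duplicate key field, and because both inserts run inside one transaction, the DBMS rolls the whole transaction back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE customer (customer_no INTEGER UNIQUE, name TEXT)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO customer VALUES (?, ?)", (1001, "First Customer"))
        # This insert reuses the key field, so the DBMS rejects it
        # and the whole transaction is rolled back.
        conn.execute("INSERT INTO customer VALUES (?, ?)", (1001, "Duplicate"))
except sqlite3.IntegrityError as err:
    print("DBMS preserved integrity:", err)

print(conn.execute("SELECT COUNT(*) FROM customer").fetchone())  # (0,): rollback undid both
```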
The Latest Trend
Among several types of DBMS, Relational Database Management System (RDBMS) and Object-oriented Database Management System (OODBMS) are the most commonly used DBMS software.
The RDBMS is a database management system based on the relational model, in which data is stored in the form of tables and the relationships among the data are also stored in the form of tables. The relational model was introduced by E. F. Codd, and it underlies the most popular commercial and open-source databases today, including Oracle, MySQL, PostgreSQL, and SQL Server.
OODBMS: An Object-Oriented Database Management System (OODBMS), or Object Database Management System (ODBMS) for short, is a database management system that supports the modeling and creation of data as objects. It includes some kind of support for classes of objects and the inheritance of class properties and methods by subclasses and their objects. An ODBMS must satisfy two conditions: it should be a DBMS, and it should be an object-oriented system consistent with object-oriented programming languages.
OODBMS extends the object programming language with transparently persistent data, concurrency control, data recovery, associative queries, and other database capabilities.
At present it is still in its development stage and is used mainly with Java and other object-oriented programming languages.
Earlier, the OODBMS was introduced to replace the RDBMS on the strength of its better performance and scalability, but the inclusion of object-oriented features in RDBMSs and the rise of object-relational mappers (ORMs) made the RDBMS powerful enough to defend its position. The high cost of switching also played a vital role in the continued dominance of the RDBMS, so the OODBMS is now used as a complement to, not a replacement for, relational databases.
Today OODBMSs are used in embedded persistence solutions in devices, on clients, in packaged software, in real-time control systems, and to power websites. The open source community has created a new wave of enthusiasm that is fueling the rapid growth of ODBMS installations.
ESCOT Problem of the Week:
Archive of Problems, Submissions, & Commentary
Please keep in mind that this is a research project, and there may sometimes be glitches with the interactive software. Please let us know of any problems you encounter, and include the computer operating system, the browser and version you're using, and what kind of connection you have (dial-up modem, T1, cable).
In Mosaic, you can use different colors and lengths of blocks to fill a canvas. Your first job will be to try to fill in the map of the USA and approximate its area.
To add a block, first select the row to which you want to add your block by clicking on the circle next to the corresponding row. Then enter the value of the block by changing the values of the numerator and the denominator of the fraction (for example, to change the denominator, click on the number in the denominator; this will highlight the number and you can replace it with the one you want). Then select the color of your block by clicking on one of the color swatches. You will see a preview of your block above the swatches. When you are ready, click the "Add Block" button.
Each row has an area of 1 square unit. This means that if you add a fraction block of 1/3, you are filling 1/3 square units of space.
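To make the bookkeeping concrete, here is a small sketch in Python using the standard fractions module (the block values below are made up, not from any particular submission) that totals the filled area of a mosaic row by row:

```python
from fractions import Fraction

# Hypothetical mosaic: each inner list holds the fraction blocks in one row,
# and every row has a total area of 1 square unit.
rows = [
    [Fraction(1, 3), Fraction(1, 2)],                   # row 1: 5/6 filled
    [Fraction(1, 4), Fraction(1, 4), Fraction(3, 8)],   # row 2: 7/8 filled
]

total = sum(sum(row, Fraction(0)) for row in rows)
print(total)          # 41/24
print(float(total))   # about 1.71 square units
```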
- What is the approximate area of the map of the USA?
- How could you get a better estimate of the area of the map of the USA?
- Design your own image, and use Mosaic to create it. Describe it here, briefly explaining how you made it.
Teacher Support Page
We didn't get many submissions this week, and we're not sure why. One of the few students who did submit a solution described her answers clearly, and is highlighted below.
It seems some students didn't understand how to count up the blocks to estimate the area of the map in the mosaic. The blocks were fractions of a row, so the total area couldn't be more than the number of rows.
Others didn't understand how to make a mosaic to find the area of the underlying map. Maybe this part was a little unclear on our part. The reason different colors were allowed was so you could use one or more colors for the U.S. area and another color for the area not in the U.S.
In response to question 2, students didn't think of the best ways (that is, the ways we thought of :^]) to improve the estimate of the map area. The idea of changing block size and shape -- for example, the height -- either didn't occur to them, or they thought they should have changed the way they did the problem, for example, by using more blocks or by adding only some of the area.
The third question, design your own mosaic, was actually the next toughest question after question 2. I guess they just didn't believe they were allowed to do anything they wanted.
QUESTIONS

1. What is the approximate area of the USA map?
~ 4 5/18 sq. units

2. How could you get a better estimate of the area of the USA map?
~ I could probably get a better estimate of the area if I could just leave the smaller bits of the USA map blank, and just fill in with blocks for the bigger areas of the map. For example, if the top edge of the country would only fill up one cm of space, I would just leave that piece and continue to fill in the other larger areas of the map.

3. Design your own image and use Mosaic to create it. Describe it here, briefly explaining how you made it.
~ I designed a butterfly and I used the different rows and colors to show the wings and the butterfly's body. For the first 1/3 of the row I used a light blue color and then for the little head of the butterfly I used the yellow, then I switched colors to a pink. So the top left wing is blue and the body will stay yellow, and the top right wing is pink. For the rest of the butterfly, I just changed the colors around so that the bottom left wing is pink and the body will stay yellow and the bottom right wing is light blue.
Nina H., age 14 - Taipei American School, Taipei, Taiwan
Researchers have developed a tiny lung-on-a-chip that will be used to conduct drug research. The chip, dubbed the ersatz lung, has an artificial alveolus along with channels that the researchers can use to create and then release a vacuum, forcing the alveolus to contract and expand as real lung tissue would.
An alveolus is the tiny air sac inside the lung where the majority of the body's oxygen exchange happens. The new chip is designed to allow researchers to test drugs on simulated healthy and diseased lung tissue. One side of the chip has features that mimic healthy lung tissue and capillary walls.
The other end of the chip has features that mimic capillary walls and lung-cancer cells. Researchers on the project hope that within a few years they will be able to develop a chip that can mimic the lungs' actual process of exchanging oxygen for carbon dioxide.
Posted: Jun 20, 2016
Tailored DNA shifts electrons into the 'fast lane'
(Nanowerk News) DNA molecules don't just code our genetic instructions. They can also conduct electricity and self-assemble into well-defined shapes, making them potential candidates for building low-cost nanoelectronic devices.
A team of researchers from Duke University and Arizona State University has shown how specific DNA sequences can turn these spiral-shaped molecules into electron "highways," allowing electricity to more easily flow through the strand.
The results may provide a framework for engineering more stable, efficient and tunable DNA nanoscale devices, and for understanding how DNA conductivity might be used to identify gene damage. The study appears online June 20 in Nature Chemistry ("Engineering nanometer-scale coherence in soft matter").
Each ribboning strand of DNA in our bodies is built from stacks of four molecular bases, shown here as blocks of yellow, green, blue and orange, whose sequence encodes detailed operating instructions for the cell. New research shows that tinkering with the order of these bases can also be used to tune the electrical conductivity of nanowires made from DNA. (Image: Maggie Bartlett, NHGRI)
Scientists have long disagreed over exactly how electrons travel along strands of DNA, says David N. Beratan, professor of chemistry at Duke University and leader of the Duke team. Over longer distances, they believe electrons travel along DNA strands like particles, "hopping" from one molecular base or "unit" to the next. Over shorter distances, the electrons use their wave character, being shared or "smeared out" over multiple bases at once.
But recent experiments led by Nongjian Tao, professor of electrical engineering at Arizona State University and co-author on the study, provided hints that this wave-like behavior could be extended to longer distances.
This result was intriguing, says Duke graduate student and study lead author Chaoren Liu, because electrons that travel in waves are essentially entering the "fast lane," moving with more efficiency than those that hop.
"In our studies, we first wanted to confirm that this wave-like behavior actually existed over these lengths," Liu said. "And second, we wanted to understand the mechanism so that we could make this wave-like behavior stronger or extend it to even longer distances."
DNA strands are built like chains, with each link comprising one of four molecular bases whose sequence codes the genetic instructions for our cells. Using computer simulations, Beratan's team found that manipulating these same sequences could tune the degree of electron sharing between bases, leading to wave-like behavior over longer or shorter distances. In particular, they found that alternating blocks of five guanine (G) bases on opposite DNA strands created the best construct for long-range wave-like electronic motions.
The team theorizes that creating these blocks of G bases causes them to all "lock" together so the wave-like behavior of the electrons is less likely to be disrupted by random wiggling in the DNA strand.
"We can think of the bases being effectively linked together so they all move as one," Liu said. "This helps the electron be shared within the blocks."
The Tao group confirmed these theoretical predictions using break junction experiments, tethering short DNA strands built from alternating blocks of three to eight guanine bases between two gold electrodes and measuring the amount of electrical charge flowing through the molecules.
The results shed light on a long-standing controversy over the exact nature of the electron transport in DNA, Beratan says. They might also provide insight into the design of tunable DNA nanoelectronics, and into the role of DNA electron transport in biological systems.
“This theoretical framework shows us that the exact sequence of the DNA helps dictate whether electrons might travel like particles, and when they might travel like waves,” Beratan said. “You could say we are engineering the wave-like personality of the electron.”
Surface energy, or interface energy, quantifies the disruption of intermolecular bonds that occur when a surface is created. In the physics of solids, surfaces must be intrinsically less energetically favorable than the bulk of a material (the molecules on the surface have more energy compared with the molecules in the bulk of the material), otherwise there would be a driving force for surfaces to be created, removing the bulk of the material (see sublimation). The surface energy may therefore be defined as the excess energy at the surface of a material compared to the bulk, or it is the work required to build an area of a particular surface. Another way to view the surface energy is to relate it to the work required to cut a bulk sample, creating two surfaces.
Cutting a solid body into pieces disrupts its bonds, and therefore consumes energy. If the cutting is done reversibly (see reversible), then conservation of energy means that the energy consumed by the cutting process will be equal to the energy inherent in the two new surfaces created. The unit surface energy of a material would therefore be half of its energy of cohesion, all other things being equal; in practice, this is true only for a surface freshly prepared in vacuum. Surfaces often change their form away from the simple "cleaved bond" model just implied above. They are found to be highly dynamic regions, which readily rearrange or react, so that energy is often reduced by such processes as passivation or adsorption.
Determination of surface energy
Measuring the surface energy of a solid
The surface energy of a liquid may be measured by stretching a liquid membrane (which increases the surface area and hence the surface energy). In that case, in order to increase the surface area of a mass of liquid by an amount, δA, a quantity of work, γδA, is needed (where γ is the surface energy density of the liquid). However, such a method cannot be used to measure the surface energy of a solid because stretching of a solid membrane induces elastic energy in the bulk in addition to increasing the surface energy.
The surface energy of a solid is usually measured at high temperatures. At such temperatures the solid creeps, and even though the surface area changes, the volume remains approximately constant. If γ is the surface energy density of a cylindrical rod of radius r and length l at high temperature under a constant uniaxial tension P, then at equilibrium the variation of the total Helmholtz free energy vanishes, and we have

δF = −P δl + γ δA = 0

where F is the Helmholtz free energy and A is the lateral surface area of the rod:

A = 2πrl, so δA = 2πr δl + 2πl δr

Also, since the volume (V) of the rod remains constant, the variation (δV) of the volume is zero, i.e.,

V = πr²l = constant, so δV = 2πrl δr + πr² δl = 0, giving δr = −(r/2l) δl

Substituting into δA gives δA = πr δl, and therefore the surface energy density can be expressed as

γ = P/(πr)

The surface energy density of the solid can thus be computed by measuring P and r at equilibrium.
This method is valid only if the solid is isotropic, meaning the surface energy is the same for all crystallographic orientations. While this is only strictly true for amorphous solids (glass) and liquids, isotropy is a good approximation for many other materials. In particular, if the sample is polygranular (most metals) or made by powder sintering (most ceramics) this is a good approximation.
In the case of single-crystal materials, such as natural gemstones, anisotropy in the surface energy leads to faceting. The shape of the crystal (assuming equilibrium growth conditions) is related to the surface energy by the Wulff construction. The surface energy of the facets can thus be found to within a scaling constant by measuring the relative sizes of the facets.
Calculating the surface energy of a deformed solid
In the deformation of solids, surface energy can be treated as the "energy required to create one unit of surface area", and is a function of the difference between the total energies of the system before and after the deformation:

γ = (1/A)(E₁ − E₀)

where A is the surface area created by the deformation, and E₀ and E₁ are the total energies of the system before and after the deformation, respectively.
Calculation of surface energy from first principles (for example, density functional theory) is an alternative approach to measurement. Surface energy is estimated from the following variables: width of the d-band, the number of valence d-electrons, and the coordination number of atoms at the surface and in the bulk of the solid.
Calculating the surface formation energy of a crystalline solid
In density functional theory, surface energy can be calculated from the following expression:

γ = (E_slab − N E_bulk) / (2A)

where E_slab is the total energy of the surface slab obtained using density functional theory, N is the number of atoms in the surface slab, E_bulk is the bulk energy per atom, and A is the surface area. For a slab we have two surfaces, and they are of the same type, which is reflected by the number 2 in the denominator. To guarantee this, the slab must be created carefully to make sure that the upper and lower surfaces are of the same type.
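A minimal sketch of this bookkeeping in Python; the energy values below are invented placeholders, not real DFT output:

```python
def slab_surface_energy(e_slab, n_atoms, e_bulk_per_atom, area):
    """Surface energy gamma = (E_slab - N * E_bulk) / (2 * A).

    The factor of 2 accounts for the slab's two identical surfaces.
    """
    return (e_slab - n_atoms * e_bulk_per_atom) / (2.0 * area)

# Illustrative placeholder numbers only (eV and angstrom^2):
gamma = slab_surface_energy(e_slab=-120.4, n_atoms=24, e_bulk_per_atom=-5.05, area=25.0)
print(f"{gamma:.3f} eV/angstrom^2")  # 0.016
```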
Estimating surface energy from the heat of sublimation
To estimate the surface energy of a pure, uniform material, an individual molecular component of the material can be modeled as a cube. In order to move a cube from the bulk of a material to the surface, energy is required. This energy cost is incorporated into the surface energy of the material, which is quantified by:

γ = (z_σ − z_β) W_AA / (2 a₀)

where z_σ and z_β are coordination numbers corresponding to the surface and the bulk regions of the material, and are equal to 5 and 6, respectively; a₀ is the surface area of an individual molecule, and W_AA is the pairwise intermolecular energy.

Surface area can be determined by squaring the cube root of the volume of the molecule:

a₀ = V_molecule^(2/3) = (M / (ρ N_A))^(2/3)

Here, M corresponds to the molar mass of the molecule, ρ corresponds to the density, and N_A is Avogadro's number.

In order to determine the pairwise intermolecular energy, all intermolecular forces in the material must be broken. This allows thorough investigation of the interactions that occur for single molecules. During sublimation of a substance, intermolecular forces between molecules are broken, resulting in a change in the material from solid to gas. For this reason, considering the enthalpy of sublimation can be useful in determining the pairwise intermolecular energy. Enthalpy of sublimation can be calculated by the following equation:

ΔH_sub = −(1/2) z_β W_AA N_A

Using empirically tabulated values for enthalpy of sublimation, it is possible to determine the pairwise intermolecular energy. Incorporating this value into the surface energy equation allows the surface energy to be estimated.

Combining the equations above (with z_σ = 5 and z_β = 6), the following can be used as a reasonable estimate for surface energy:

γ = (z_β − z_σ) ΔH_sub / (z_β N_A a₀) = ΔH_sub / (6 N_A a₀)
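As a rough numerical check, here is a sketch in Python applying these formulas to aluminium; the inputs are textbook-order approximations, so the result is only an order-of-magnitude estimate:

```python
N_A = 6.022e23   # Avogadro's number, 1/mol

# Approximate inputs for aluminium (illustrative values):
M = 0.027        # molar mass, kg/mol
rho = 2700.0     # density, kg/m^3
dH_sub = 3.3e5   # enthalpy of sublimation, J/mol

a0 = (M / (rho * N_A)) ** (2.0 / 3.0)  # molecular surface area, m^2
gamma = dH_sub / (6.0 * N_A * a0)      # J/m^2, using z_sigma = 5 and z_beta = 6

print(f"a0 = {a0:.2e} m^2, gamma = {gamma:.2f} J/m^2")  # roughly 1.4 J/m^2
```

The measured surface energy of aluminium is on the order of 1 J/m², so the cube model gives the right order of magnitude.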
The presence of an interface generally influences all thermodynamic parameters of a system. Two models are commonly used to describe interfacial phenomena: the Gibbs ideal interface model and the Guggenheim model. To describe the thermodynamics of an interfacial system using the Gibbs model, the system can be divided into three parts: two immiscible liquids with volumes Va and Vb, and an infinitesimally thin boundary layer known as the Gibbs dividing plane (σ) separating these two volumes.
The total volume of the system is:

V = Va + Vb

All extensive quantities of the system can be written as a sum of three components: bulk phase a, bulk phase b, and the interface σ. Some examples include the internal energy (U), the number of molecules of the ith substance (Ni), and the entropy (S):

U = Ua + Ub + Uσ
Ni = Nia + Nib + Niσ
S = Sa + Sb + Sσ

While these quantities can vary between each component, the sum within the system remains constant. At the interface, these values may deviate from those present within the bulk phases. The concentration of molecules present at the interface can be defined as:

Niσ = Ni − ca Va − cb Vb

where ca and cb represent the concentration of substance i in bulk phase a and b, respectively. It is beneficial to define a new term, the interfacial excess Γi, which allows us to describe the number of molecules per unit area:

Γi = Niσ / A
Spreading Parameter: Surface energy comes into play in wetting phenomena. To examine this, consider a drop of liquid on a solid substrate. If the surface energy of the substrate changes upon the addition of the drop, the substrate is said to be wetting. The spreading parameter can be used to determine this mathematically:

S = γs − γl − γs-l

where S is the spreading parameter, γs the surface energy of the substrate, γl the surface energy of the liquid, and γs-l the interfacial energy between the substrate and the liquid.

- If S < 0, the liquid partially wets the substrate.
- If S > 0, the liquid completely wets the substrate.
Contact angle: A way to experimentally determine wetting is to look at the contact angle (θ), the angle between the solid-liquid interface and the liquid-gas interface at the edge of the drop.

- If θ = 0°, the liquid completely wets the substrate.
- If 0° < θ < 90°, high wetting occurs.
- If 90° < θ < 180°, low wetting occurs.
- If θ = 180°, the liquid does not wet the substrate at all.
The Young equation relates the contact angle to the interfacial energies:

γSG = γSL + γLG cos θ

where γSG is the interfacial energy between the solid and gas phases, γSL the interfacial energy between the substrate and the liquid, γLG the interfacial energy between the liquid and gas phases, and θ the contact angle between the solid-liquid and the liquid-gas interface.
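For instance, a quick sketch in Python that inverts the Young equation to get the contact angle; the interfacial energies are made-up values in mJ/m²:

```python
import math

# Hypothetical interfacial energies, mJ/m^2:
gamma_sg = 60.0   # solid-gas
gamma_sl = 25.0   # solid-liquid
gamma_lg = 72.0   # liquid-gas (about that of water)

cos_theta = (gamma_sg - gamma_sl) / gamma_lg
theta = math.degrees(math.acos(cos_theta))
print(f"contact angle ~ {theta:.0f} degrees")  # ~61 degrees: moderate wetting
```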
Wetting of high and low energy substrates: The energy of the bulk component of a solid substrate is determined by the types of interactions that hold the substrate together. High-energy substrates are held together by chemical bonds, while low-energy substrates are held together by weaker intermolecular forces. Covalent, ionic, and metallic bonds are much stronger than forces such as van der Waals interactions and hydrogen bonding. High-energy substrates are more easily wet than low-energy substrates. In addition, more complete wetting will occur if the substrate has a much higher surface energy than the liquid.
Surface energy modification techniques
The most commonly used surface modification protocols are plasma treatment, wet chemical treatment, including grafting, and thin-film coating. Surface energy mimicking is a technique that enables merging the device manufacturing and surface modifications, including patterning, into a single processing step using a single device material.
Many techniques can be used to enhance wetting. Surface treatments (such as Corona treatment and acid etching) can be used to increase the surface energy of the substrate. Additives can also be added to the liquid to decrease its surface energy. This technique is employed often in paint formulations to ensure that they will be evenly spread on a surface.
The Kelvin equation
As a result of the surface tension inherent to liquids, curved surfaces are formed in order to minimize the area. This phenomenon arises from the energetic cost of forming a surface; the Gibbs free energy of the system is minimized when the surface is curved.
The Kelvin equation is based on thermodynamic principles and is used to describe changes in vapor pressure caused by liquids with curved surfaces. The cause of this change in vapor pressure is the Laplace pressure. The vapor pressure of a drop is higher than that of a planar surface because the increased Laplace pressure causes the molecules to evaporate more easily. Conversely, in liquids surrounding a bubble, the pressure with respect to the inner part of the bubble is reduced, thus making it more difficult for molecules to evaporate. The Kelvin equation can be stated as:

ln(pc / p0) = (γ Vm / R T)(1/R1 + 1/R2)

where pc is the vapor pressure of the curved surface, p0 is the vapor pressure of the flat surface, γ is the surface tension, Vm is the molar volume of the liquid, R is the universal gas constant, T is temperature (K), and R1 and R2 are the principal radii of curvature of the surface.
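A short numerical sketch in Python for a small spherical water droplet (R1 = R2 = r), using standard room-temperature property values:

```python
import math

# Illustrative values for a small water droplet at room temperature:
gamma = 0.072   # surface tension of water, N/m
V_m = 1.8e-5    # molar volume of water, m^3/mol
R = 8.314       # gas constant, J/(mol K)
T = 298.0       # temperature, K
r = 10e-9       # droplet radius, m (sphere: 1/R1 + 1/R2 = 2/r)

ratio = math.exp((gamma * V_m / (R * T)) * (2.0 / r))
print(f"p_curved / p_flat = {ratio:.2f}")  # ~1.11 for a 10 nm droplet
```

The vapor pressure enhancement only becomes significant for very small radii, which is why the Kelvin effect matters for nanoscale droplets and pores.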
Surface modified pigments for coatings
Pigments offer great potential in modifying the application properties of a coating. Because of their fine particle size and inherently high surface energy, they often require a surface treatment in order to enhance their ease of dispersion in a liquid medium. A wide variety of surface treatments have been used, including the adsorption of molecules containing polar groups onto the pigment surface, monolayers of polymers, and layers of inorganic oxides on the surface of organic pigments.
New surfaces are constantly being created as larger pigment particles get broken down into smaller subparticles. These newly formed surfaces consequently contribute to larger surface energies, whereby the resulting particles often become cemented together into aggregates. Because particles dispersed in liquid media are in constant thermal or Brownian motion, they exhibit a strong affinity for other pigment particles nearby as they move through the medium and collide. This natural attraction is largely attributed to the powerful short-range Van der Waals forces, as an effect of their surface energies.
The chief purpose of pigment dispersion is to break down aggregates and form stable dispersions of optimally sized pigment particles. This process generally involves three distinct stages: wetting, deaggregation, and stabilization. A surface that is easy to wet is desirable when formulating a coating that requires good adhesion and appearance. This also minimizes the risk of surface-tension-related defects such as crawling, cratering, and orange peel. Effective wetting is an essential requirement for pigment dispersions: the surface tension of the vehicle must be lower than the surface free energy of the pigment. This allows the vehicle to penetrate the interstices of the pigment aggregates, ensuring complete wetting. Finally, the particles are subjected to a repulsive force in order to keep them separated from one another, which lowers the likelihood of flocculation.
Dispersions may become stable through two different phenomena: charge repulsion and steric (or entropic) repulsion. In charge repulsion, particles that carry like electrostatic charges repel each other. Alternatively, steric or entropic repulsion describes the repelling effect that arises when adsorbed layers of material (e.g., polymer molecules swollen with solvent) are present on the surface of the pigment particles in dispersion. Only certain portions (anchors) of the polymer molecules are adsorbed, with their corresponding loops and tails extending out into the solution. As the particles approach each other their adsorbed layers become crowded; this provides an effective steric barrier that prevents flocculation. This crowding effect is accompanied by a decrease in entropy, whereby the number of conformations possible for the polymer molecules in the adsorbed layer is reduced. As a result, the free energy is increased, giving rise to repulsive forces that help keep the particles separated from each other.
Table of common surface energy values
Material | Orientation | Surface Energy (mJ/m²)
Magnesium oxide | (100) plane | 1200
Calcium fluoride | (111) plane | 450
Lithium fluoride | (100) plane | 340
Calcium carbonate | (1010) plane | 230
Sodium chloride | (100) plane | 300
Sodium chloride | (110) plane | 400
Potassium chloride | (100) plane | 110
Barium fluoride | (111) plane | 280
In this land reinforcement worksheet, students complete 7 matching questions, 2 short answer questions, and 1 essay question about key points from their chapter on land conservation.
See similar resources:
Mangrove Loss Faster than Land-Based Forests
Learners explore the reasons Mangrove forests are in jeopardy. In this instructional activity, students read an article that discusses specific facts on Mangrove forests, then complete numerous activities that reinforce the information,...
6th - 8th Social Studies & History
Sea Urchins Pull Themselves Inside Out To Be Reborn
Sea Urchins live for centuries if they can make it to adulthood. The video highlights the challenges of sea urchins making the journey through the open sea for years. When they finally find a place to land, an amazing transformation occurs.
3 mins 6th - 12th Science CCSS: Adaptable
Attack of the Cosmic Space Junk!
Even lands and planets far, far away feel the impact of humans! A video explains how space exploration leads to space litter. The lesson considers different events over time that led to space debris dangerous to satellites and even...
6 mins 6th - 12th Science CCSS: Adaptable
Roly Polies Came From the Sea to Conquer the Earth
Roly polies or pill bugs? No matter what you call them, these organisms are unique. Biology scholars discover a true evolutionary success story in a video about tiny, land-dwelling crustaceans. The narrator describes their journey from...
4 mins 6th - 12th Science CCSS: Adaptable
Lesson Plan: Humans and the Land
Art acts as inspiration for a conversation about human impact on the environment and creative writing. The class examines three pieces, looking for evidence of human impact on the landscape. They then write a first-person narrative, from...
6th - 12th Visual & Performing Arts
Seashells in the Mountains
Leonardo da Vinci went on a hike in the Italian mountains, where he discovered huge banks of seashells. He concluded that the Bible must be wrong. How do modern scientists explain this?
There are great continental tectonic plates that move very slowly, floating on the slowly flowing rock of the Earth's mantle. When two plates smash into each other, they push up and form a mountain range. The process is very slow, but with satellite laser surveying it can be measured. Areas that were once the bottoms of seas are sometimes thrust up to become mountains.
On the bottom of the sea, each year, creatures living that year fall to the bottom and a tiny fraction of them fossilize. The process may continue over a million years or so. During this time the creatures change, according to evolutionary pressure. That is why you find the layers finely sorted in evolutionary order.
How do Christians explain this? They say Noah’s flood washed the shells to the top of every mountain. There are three problems with this.
~ Roedy (1948-02-04 age:69)
- You don’t find shells at the top of every mountain.
- It does not explain why the shells at the top of mountains are layered precisely in evolutionary order.
- It does not explain why the Burgess Shale contains millions of trilobites, but no seashells.
- List at least five complex carbohydrates and five simple carbohydrates. During a crew meeting (or another activity approved by your Advisor and/or coach), discuss with your crew why complex carbohydrates are nutritionally dense and what that means to a sportsperson. Tell why fiber is considered a complex carbohydrate and list some examples of fiber-rich foods. Serve snacks that represent each carbohydrate. You could even make this a game where people guess which snack went with each group.
- Interview a registered dietician and talk about your favorite sport. Have the dietician help you evaluate and develop a nutritional program that fits you (and/or your team as a whole) and your sport.
- Make a presentation on “Good Fats” and “Bad Fats.” Explain how they affect a teenager’s diet. Include in your presentation information on saturated fats, unsaturated fats, hydrogenated fats, and cholesterol. Use posters, overhead transparencies, computer slide shows, charts, and relevant information from your school health textbook. Working with your crew, calculate fat needs for yourself and the other members of your crew.
- Keep a three-day food record of everything you eat and drink. If you put it in your mouth, write it down. With the help of a health-care practitioner, determine if you are eating enough protein, vegetables, fat, carbohydrates, and fiber. Also determine the amount of sugar, sodium, and hydrogenated fat consumed. Resources for determining these amounts are available at your local library.
- People who do not eat meat are called vegetarians. Vegetarians can be categorized into three different groups. In a discussion with your Advisor and/or coach, name those three groups and explain their differences and similarities. In an interview with a registered dietician or nutritionist, ask questions about the complete protein requirements of a vegetarian and how they make sure they are achieving these daily requirements. Using this information, put on a presentation, tabletop display, or other such activity approved by your Advisor and/or coach for a Boy Scout troop or Cub Scout pack.
The official source for the information shown in this article or section is:
Quest Handbook, 2015 Edition (BSA Supply SKU #620714)
Michael Brown and Chadwick Trujillo of the California Institute of Technology first discovered Quaoar in June while they were surveying the Kuiper Belt, the field of comet-like bodies stretching seven billion miles beyond Neptune's orbit, using a 1.2-meter telescope. It appeared as a point of light creeping across the constellation Ophiuchus. The researchers then used the Hubble Space Telescope to measure the object's 1,300-kilometer diameter. The icy rock reflects just 10 percent of the light that hits it and moves around the sun in a circular path once every 288 years. Brown and Trujillo chose the name Quaoar from creation mythology of the Native American Tongva tribe, early inhabitants of the Los Angeles area, but the object has not yet been officially christened. Until the International Astronomical Union (IAU) votes on the moniker, the body's designation is the somewhat less flashy 2002 LM60.
Discovering Quaoar, the scientists say, fuels hope that more large-scale bodies will be found in the Kuiper Belt--perhaps even some larger than Pluto. As it stands, several hundred so-called Kuiper Belt Objects (KBO) have been identified since 1992. According to Brown, Pluto is also a KBO. "Quaoar definitely hurts the case for Pluto being a planet," he says. "If Pluto were discovered today, no one would even consider calling it a planet because it's clearly a Kuiper Belt Object."
In mathematics, the Veblen functions are a hierarchy of normal functions (continuous strictly increasing functions from ordinals to ordinals), introduced by Oswald Veblen in Veblen (1908). If φ0 is any normal function, then for any non-zero ordinal α, φα is the function enumerating the common fixed points of φβ for β<α. These functions are all normal.
The Veblen hierarchy
In the special case when φ0(α) = ω^α, this family of functions is known as the Veblen hierarchy. The function φ1 is the same as the ε function: φ1(α) = εα. If α < β, then φα(φβ(γ)) = φβ(γ). From this and the fact that φβ is strictly increasing we get the ordering: φα(β) < φγ(δ) if and only if either (α = γ and β < δ) or (α < γ and β < φγ(δ)) or (α > γ and φα(β) < δ).
Fundamental sequences for the Veblen hierarchy
The fundamental sequence for an ordinal with cofinality ω is a distinguished strictly increasing ω-sequence which has the ordinal as its limit. If one has fundamental sequences for α and all smaller limit ordinals, then one can create an explicit constructive bijection between ω and α, (i.e. one not using the axiom of choice). Here we will describe fundamental sequences for the Veblen hierarchy of ordinals. The image of n under the fundamental sequence for α will be indicated by α[n].
A variation of Cantor normal form used in connection with the Veblen hierarchy is: every nonzero ordinal number α can be uniquely written as α = φβ1(γ1) + φβ2(γ2) + ⋯ + φβk(γk), where k > 0 is a natural number, each term after the first is less than or equal to the previous term, and each γm < φβm(γm). If a fundamental sequence can be provided for the last term, then that term can be replaced by such a sequence to get α[n] = φβ1(γ1) + ⋯ + φβk(γk)[n].
For any β, if γ is a limit with γ < φβ(γ), then let

φβ(γ)[n] = φβ(γ[n]).

No such sequence can be provided for φ0(0) = ω^0 = 1 because it does not have cofinality ω.

For φ0(γ+1) = ω^(γ+1) = ω^γ · ω, we choose

φ0(γ+1)[n] = φ0(γ) · n = ω^γ · n.

For φβ+1(0), we use φβ+1(0)[0] = 0 and φβ+1(0)[n+1] = φβ(φβ+1(0)[n]), i.e. the sequence 0, φβ(0), φβ(φβ(0)), etc. (see the worked example after this list of cases).

For φβ+1(γ+1), we use φβ+1(γ+1)[0] = φβ+1(γ) + 1 and φβ+1(γ+1)[n+1] = φβ(φβ+1(γ+1)[n]).
Now suppose that β is a limit:
If β < φβ(0), then let φβ(0)[n] = φδ(0) where δ = β[n].

For φβ(γ+1), use φβ(γ+1)[n] = φδ(φβ(γ) + 1) where δ = β[n].

Otherwise, the ordinal cannot be described in terms of smaller ordinals using φ, and this scheme does not apply to it.
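As a worked illustration of the successor rule above (a standard fact, included here as an example): taking β = 0 in the rule for φβ+1(0) gives a fundamental sequence for φ1(0) = ε0:

ε0[0] = 0, ε0[1] = φ0(0) = 1, ε0[2] = φ0(1) = ω, ε0[3] = φ0(ω) = ω^ω, ε0[4] = ω^(ω^ω), ...

so ε0 is the limit of 0, 1, ω, ω^ω, ω^(ω^ω), ..., i.e. the first fixed point of α ↦ ω^α.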
The Γ function
The function Γ enumerates the ordinals α such that φα(0) = α. Γ0 is the Feferman–Schütte ordinal, i.e. it is the smallest α such that φα(0) = α.
For Γ0, a fundamental sequence could be chosen to be Γ0[0] = 0 and Γ0[n+1] = φδ(0) where δ = Γ0[n].

For Γβ+1, let Γβ+1[0] = Γβ + 1 and Γβ+1[n+1] = φδ(0) where δ = Γβ+1[n].

For Γβ where β is a limit with β < Γβ, let Γβ[n] = Γδ where δ = β[n].
Finitely many variables
In this section it is more convenient to think of φα(β) as a function φ(α,β) of two variables. Veblen showed how to generalize the definition to produce a function φ(αn,αn−1,…,α0) of several variables, namely: let
- φ(α)=ωα for a single variable,
- φ(0,αn−1,…,α0)=φ(αn−1,…,α0), and
- γ↦φ(αn,…,αi+1,α,0,…,0,γ) be the function enumerating the common fixed points of the functions ξ↦φ(αn,…,αi+1,β,ξ,0,…,0) for all β<α.
For example, φ(1,0,γ) is the γ-th fixed point of the functions ξ↦φ(ξ,0), namely Γγ; then φ(1,1,γ) enumerates the fixed points of that function, i.e., of the ξ↦Γξ function; and φ(2,0,γ) enumerates the fixed points of all the ξ↦φ(1,ξ,0). Each instance of the generalized Veblen functions is continuous in the last nonzero variable (i.e., if one variable is made to vary and all later variables are kept constantly equal to zero).
Transfinitely many variables
More generally, Veblen showed that φ can be defined even for a transfinite sequence of ordinals αβ, provided that all but a finite number of them are zero. Notice that if such a sequence of ordinals is chosen from those less than an uncountable regular cardinal κ, then the sequence may be encoded as a single ordinal less than κκ. So one is defining a function φ from κκ into κ.
The definition can be given as follows: let α be a transfinite sequence of ordinals (i.e., an ordinal function with finite support) which ends in zero (i.e., such that α₀=0), and let α[0↦γ] denote the same function where the final 0 has been replaced by γ. Then γ↦φ(α[0↦γ]) is defined as the function enumerating the common fixed points of all functions ξ↦φ(β) where β ranges over all sequences which are obtained by decreasing the smallest-indexed nonzero value of α and replacing some smaller-indexed value with the indeterminate ξ (i.e., β=α[ι₀↦ζ,ι↦ξ] meaning that for the smallest index ι₀ such that αι₀ is nonzero the latter has been replaced by some value ζ<αι₀ and that for some smaller index ι<ι₀, the value αι=0 has been replaced with ξ).
For example, if α=(ω↦1) denotes the transfinite sequence with value 1 at ω and 0 everywhere else, then φ(ω↦1) is the smallest fixed point of all the functions ξ↦φ(ξ,0,…,0) with finitely many final zeroes (it is also the limit of the φ(1,0,…,0) with finitely many zeroes, the small Veblen ordinal).
The smallest ordinal α such that α is greater than φ applied to any function with support in α (i.e., which cannot be reached “from below” using the Veblen function of transfinitely many variables) is sometimes known as the “large” Veblen ordinal.
- Hilbert Levitz, Transfinite Ordinals and Their Notations: For The Uninitiated, expository article (8 pages, in PostScript)
- Pohlers, Wolfram (1989), Proof theory, Lecture Notes in Mathematics 1407, Berlin: Springer-Verlag, ISBN 3-540-51842-8, MR 1026933
- Schütte, Kurt (1977), Proof theory, Grundlehren der Mathematischen Wissenschaften 225, Berlin-New York: Springer-Verlag, pp. xii+299, ISBN 3-540-07911-4, MR 0505313
- Takeuti, Gaisi (1987), Proof theory, Studies in Logic and the Foundations of Mathematics 81 (Second ed.), Amsterdam: North-Holland Publishing Co., ISBN 0-444-87943-9, MR 0882549
- Smorynski, C. (1982), "The varieties of arboreal experience", Math. Intelligencer 4 (4): 182–189, doi:10.1007/BF03023553 contains an informal description of the Veblen hierarchy.
- Veblen, Oswald (1908), "Continuous Increasing Functions of Finite and Transfinite Ordinals", Transactions of the American Mathematical Society 9 (3): 280–292, doi:10.2307/1988605, JSTOR 1988605
- Miller, Larry W. (1976), "Normal Functions and Constructive Ordinal Notations", The Journal of Symbolic Logic 41 (2): 439–459, doi:10.2307/2272243, JSTOR 2272243
The frontal lobe is an area in the brain of vertebrates. Located at the front of each cerebral hemisphere, frontal lobes are positioned in front of (anterior to) the parietal lobes. The temporal lobes are located beneath and behind the frontal lobes.
In the human brain, the central sulcus separates the frontal lobe from the parietal lobe along the top of each cerebral cortex. The lateral sulcus separates the inferior frontal gyrus of lower frontal lobes from the temporal lobes.
- Lateral part: Precentral gyrus, lateral part of the superior frontal gyrus, middle frontal gyrus, inferior frontal gyrus.
- Polar part: Transverse frontopolar gyri, frontomarginal gyrus.
- Orbital part: Lateral orbital gyrus, anterior orbital gyrus, posterior orbital gyrus, medial orbital gyrus, gyrus rectus.
- Medial part: Medial part of the superior frontal gyrus, cingulate gyrus.
The gyri are separated by sulci. E.g., the precentral gyrus is in front of the central sulcus, and behind the precentral sulcus. The superior and middle frontal gyri are divided by the superior frontal sulcus. The middle and inferior frontal gyri are divided by the inferior frontal sulcus.
In the human brain, the precentral gyrus and the related cortical tissue that folds into the central sulcus comprise the primary motor cortex, which controls voluntary movements of specific body parts associated with areas of the gyrus.
Frontal lobes have been found to play a part in impulse control, judgment, language, memory, motor function, problem solving, sexual behavior, socialization, and spontaneity. Frontal lobes assist in planning, coordinating, controlling, and executing behavior. People who have damaged frontal lobes may experience problems with these aspects of cognitive function: being at times impulsive; impaired in their ability to plan and execute complex sequences of actions; or perhaps persisting with one course of action or pattern of behavior when a change would be appropriate (perseveration).
Cognitive maturity associated with adulthood is marked by the maturation of cerebral fibers in the frontal lobes between the late teenage years and early adulthood. Research by Dr. Arthur Toga at UCLA found increased myelin in the frontal lobe gray matter of young adults compared to that of teens, whereas gray matter in the parietal and temporal lobes was more fully matured by the teen years. The typical onset of schizophrenia in early adulthood correlates with poorly myelinated, and thus inefficient, connections between cells in the forebrain.
A report from the National Institute of Mental Health says a gene variant that reduces dopamine activity in the prefrontal cortex is related to poorer performance and inefficient functioning of that brain region during working memory tasks, and to slightly increased risk for schizophrenia.
Dopamine-sensitive neurons in the cerebral cortex are found primarily in the frontal lobes. The dopamine system is associated with pleasure, long-term memory, planning and drive. Dopamine tends to limit and select sensory information arriving from the thalamus to the forebrain. Poor regulation of dopamine pathways has been associated with schizophrenia.
The so-called executive functions of the frontal lobes involve the ability to recognize future consequences resulting from current actions, to choose between good and bad actions (or better and best), override and suppress unacceptable social responses, and determine similarities and differences between things or events.
The frontal lobes also play an important part in retaining longer term memories which are not task-based. These are often memories with associated emotions, derived from input from the brain's limbic system, and modified by the higher frontal lobe centers to generally fit socially acceptable norms (see executive functions above). The frontal lobes have rich neuronal input from both the alert centers in the brainstem, and from the limbic regions.
In the early 20th century, a medical treatment for mental illness, first developed by the Portuguese neurologist Egas Moniz, involved damaging the pathways connecting the frontal lobe to the limbic system. Frontal lobotomy (sometimes called frontal leucotomy) successfully reduced distress, but at the cost of often blunting the subject's emotions, volition and personality. The indiscriminate use of this psychosurgical procedure, combined with its severe side effects and the dangerous nature of the operation, earned it a bad reputation, and the frontal lobotomy has largely died out as a psychiatric treatment.
More precise psychosurgical procedures are still used occasionally, although they are now very rare. They include procedures such as the anterior capsulotomy (bilateral thermal lesions of the anterior limbs of the internal capsule) and the bilateral cingulotomy (bilateral thermal lesions of the anterior cingulate gyri), and might be used to treat otherwise untreatable obsessional disorders or clinical depression.
Telencephalon (cerebrum, cerebral cortex, cerebral hemispheres)
frontal lobe: precentral gyrus (primary motor cortex, 4), precentral sulcus, superior frontal gyrus (6, 8), middle frontal gyrus (46), inferior frontal gyrus (Broca's area, 44-pars opercularis, 45-pars triangularis), prefrontal cortex (orbitofrontal cortex, 9, 10, 11, 12, 47)
temporal lobe: transverse temporal gyrus (41-42-primary auditory cortex), superior temporal gyrus (38, 22-Wernicke's area), middle temporal gyrus (21), inferior temporal gyrus (20), fusiform gyrus (36, 37)
limbic lobe/fornicate gyrus: cingulate cortex/cingulate gyrus, anterior cingulate (24, 32, 33), posterior cingulate (23, 31)
Some categorizations are approximations, and some Brodmann areas span gyri.
In multicellular animals, it makes a certain degree of sense for cells to commit suicide. As long as there are more cells to take their place, the loss of a few won't hurt the organism as a whole. Plus, if the cells involved are damaged, infected, or superfluous, they can actually improve the overall health of the organism by dying in an organized fashion. The same logic, however, doesn't obviously hold for single-celled organisms, which occupy environments where it's generally thought to be every cell for itself. That view has been receding under the weight of results suggesting that bacteria can act as a multicellular collective, organizing biofilms and identifying the number of fellow species-members through quorum-sensing signaling molecules. A paper in Science takes this sort of behavior to the next level, suggesting that bacteria can coordinate cellular suicide.
The system they use for doing this has previously been described in E. coli. In short, the bacteria constantly express a stable enzyme that, when active, will chew up the RNA in the cell, killing it. At the same time, they also express a less-stable inhibitor. When some sort of stress causes the bacteria to stop producing proteins, the inhibitor protein degrades first, setting its lethal target loose on the cell. The new data showed that this lethal combination is also held in check by a signaling system that bacteria use to sense their population density.
The authors found that they could trigger enough stress to cause E. coli to engage in cell suicide by hitting them with a brief dose of antibiotic, but only if the cells were growing in a dense culture; dilute cultures grew unperturbed. They next showed that the bacteria sensed culture density through a soluble molecule: they removed the bacteria from a dense culture, then used the remaining liquid to enable stress-based cell suicide in a dilute culture. Careful fractionation allowed them to isolate the signaling molecule involved, which is a peptide five amino acids long.
So, not only do bacteria engage in cell suicide, but they do so on the basis of signals from their fellow bacteria—it all sounds suspiciously like a multicellular organism, as the authors themselves note. The logic behind it also seems very similar to that which explains altruistic behavior in multicellular animals. In a dense, rapidly growing culture, most of the bacteria would be expected to be genetically related. When a source of stress, such as a virus or antibiotic, starts harming cells, the most efficient way to preserve their shared inheritance may be for some of the cells to sacrifice themselves.
Science, 2007. DOI: 10.1126/science.1147248 |
1. Bacteriophage M13 infects E. coli differently from the way bacteriophage T2 does. The M13 coat is removed in the inner membrane of the bacterial cell, where it is sequestered during phage replication. It is subsequently used to package the newly replicated phage DNA to create progeny phage. Why would this make M13 less suitable than T2 for the Hershey and Chase experiment?
2. Adenovirus has a double-stranded linear DNA genome that measures 12.2 μm in length. How many base pairs make up the adenovirus genome? (A worked sketch follows question 3 below.)
3. A DNA sample is found to exhibit a melting point of 75°C in 0.1 M NaCl. What effect would each of the following have on the melting point of the DNA?
a. Increased NaCl
b. Decreased DNA concentration
c. Adding 1% formamide
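A quick worked sketch for question 2, assuming B-form DNA with the standard rise of about 0.34 nm per base pair:

\[
\frac{12.2\ \mu\text{m}}{0.34\ \text{nm/bp}} \;=\; \frac{12{,}200\ \text{nm}}{0.34\ \text{nm/bp}} \;\approx\; 35{,}900\ \text{bp} \;\approx\; 36\ \text{kb},
\]

which matches the commonly quoted ~36 kb size of the adenovirus genome.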
I will answer your molecular biology questions with a little bit of background so that you can understand the important concepts.
#1. To understand this question it is important to understand the Hershey-Chase experiment. This experiment set out to determine what the unit of heredity in a cell was: nucleic acid or protein? Of course we now know that the answer is nucleic acid (specifically DNA), but this experiment came at a time when this was still being determined.
In 1952, A. D. Hershey and Martha Chase performed the following historic experiment:
Using bacteriophage (a virus that infects bacteria) T2 (which contains ONLY DNA and protein), they set out to determine whether the genes reside in the DNA or the protein. They ... |
Just as the title says, I heard about this term but am not sure how it works.
Normally, temperature broadly decreases with altitude, and convection is effective: locally warmer air will rise, and cooler air will fall. A temperature inversion is where the air temperature rises with altitude. This means that convection is less effective because the air above is already warmer, and so there is less mixing of air between altitudes.
Since pollution is generally produced at ground level, temperature inversions can trap the pollution (e.g. smog) at ground level.
As others have noted, a temperature inversion is a layer where temperature increases with height. This is called an inversion because the normal temperature profile decreases with height.
A temperature inversion can trap pollution. Factors that influence this are the environmental temperature profile, the height of the chimneys or smokestacks that expel pollution and the temperature of the pollutants as they leave the smokestacks.
If the air coming from the smokestacks is warmer than its surroundings and the lofted pollutants aren't too heavy, the air will be buoyant and rise. Even in the case of heavy pollutants, those heavy particles tend to fall to the ground near the smokestacks, polluting the ground environment while the rest of what comes out of the stacks rises. This is particularly evident around a coal stack in the winter, as the snow steadily turns orange near the stacks. The lofted pollutants will rise with the air they are carried by until they are no longer buoyant, and then they will stabilize at the height where they find equilibrium.
To demonstrate, consider the following two scenarios.
This scenario has an environmental temperature profile based on a surface temperature of 20 C, a dewpoint around 8 C, no capping inversion and a well mixed boundary layer. The black line represents the temperature a parcel lifted from the surface would have. The pollutants are being emitted from a smokestack with a height around 300 m with a temperature around 22 C. I've also assumed no water vapor coming out of the smokestacks. In this case the air from the smokestack (red) is always warmer than the environment and it escapes the boundary layer and rises well above the surface. This pollution is not trapped.
This scenario has a capping inversion between 900 and 850 mb but is otherwise the same as the previous example (same surface temperature and dewpoint). The pollutants have the same properties as in the first scenario. In this case, however, due to the temperature inversion there is a height where the environment becomes warmer than the pollutants. If the pollutant tries to rise any higher it will be negatively buoyant and will oscillate around the height of neutral buoyancy (see Brunt–Väisälä frequency). In this case the pollution will be trapped around 860 mb, which is around 1250 m. This isn't too close to the ground, but in a well-mixed boundary layer this pollution will eventually be mixed throughout the boundary layer. This pollution is trapped.
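To make the mechanism concrete, here is a minimal numerical sketch of the trapping calculation. It is not from the original answer: the profile below is an illustrative assumption (20 C at the surface, cooling 8 C/km, an inversion between 1000 m and 1500 m), and the plume is assumed to cool at the dry-adiabatic rate as it rises.

```python
# Where does a warm smokestack plume stop rising? (illustrative numbers only)

def environment_temp(z):
    """Made-up environmental profile, degrees C at height z in meters."""
    if z <= 1000:
        return 20.0 - 0.008 * z              # cooling 8 C/km near the surface
    if z <= 1500:
        return 12.0 + 0.010 * (z - 1000)     # inversion: warming 10 C/km
    return 17.0 - 0.008 * (z - 1500)         # cooling again above the inversion

def plume_temp(z, stack_height=300.0, exit_temp=22.0):
    """Rising unsaturated air cools at roughly the dry-adiabatic 9.8 C/km."""
    return exit_temp - 0.0098 * (z - stack_height)

z = 300
while z < 5000 and plume_temp(z) > environment_temp(z):
    z += 10  # climb in 10 m steps while the plume is still warmer (buoyant)

print(f"plume reaches neutral buoyancy near {z} m")  # ~1160 m, inside the inversion
```

Delete the inversion branch and rerun: with this profile the same plume would keep rising to roughly 2.7 km before finding equilibrium, which is essentially the difference between the two scenarios described above.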
Both of these scenarios are on warm well-mixed days. Bigger problems tend to happen at night or early morning and in the cold season. In these cases deep inversions that start at the surface tend to form and if smokestacks are not very tall this can cause pollution to be trapped very close to the ground. This kind of temperature structure will cause the pollutant to spread out at low heights. In places where these type of inversions are frequent it is important that smokestacks are built tall enough to emit pollution above the inversion height. The picture below demonstrates how this kind of pollution trapping can look.
Image by JohanTheGhost, wikimedia commons. https://commons.wikimedia.org/wiki/File:SmokeCeilingInLochcarron.jpg
This is straight from Wikipedia:
In meteorology, an inversion is a deviation from the normal change of an atmospheric property with altitude. It almost always refers to a "temperature inversion", i.e. an increase in temperature with height, or to the layer ("inversion layer") within which such an increase occurs.
An inversion can lead to pollution such as smog being trapped close to the ground, with possible adverse effects on health. An inversion can also suppress convection by acting as a "cap". If this cap is broken for any of several reasons, convection of any moisture present can then erupt into violent thunderstorms. Temperature inversion can notoriously result in freezing rain in cold climates. |
Antarctica wasn’t always the frozen barren land that it is today — millions of years ago, it was covered in lush forests. The trees had some bizarre traits to let them survive the extreme light conditions at the poles. They were extremely hardy so researchers are curious about what made them go extinct.
The British explorer Robert Falcon Scott was the first to find plant fossils in Antarctica, in 1912; unfortunately, he did not survive the expedition. Now, UW-Milwaukee geologists have climbed the frozen Transantarctic Mountains looking for more fossils to give insight into why the trees went extinct. They found fossil fragments of 13 trees that are over 260 million years old, dating from before the first dinosaurs, at the end of the Permian period. The Permian ended 251 million years ago with a mass extinction that caused 90% of species to die out. These forests went extinct at the same time.
The hypothesis is that volcanic eruptions in Siberia released a tremendous amount of greenhouse gases, including carbon dioxide and methane, over 200,000 years. This input of gases over a short time frame likely caused the mass extinction.
“This forest is a glimpse of life before the extinction, which can help us understand what caused the event,” said Gulbranson, a paleoecologist and visiting assistant professor in UWM’s Department of Geosciences.
Antarctica was warmer and more humid at this point in time. It was part of the Gondwana supercontinent in the southern hemisphere. Plants included mosses, ferns, and trees. These forests were not very diverse, but they stretched across all of Gondwana (which included South America, Africa, India, Australia and the Arabian Peninsula).
“This plant group must have been capable of surviving and thriving in a variety of environments,” Gulbranson said. “It’s extremely rare, even today, for a group to appear across nearly an entire hemisphere of the globe.”
At the south pole, night rules the winter, with months of pitch-darkness, while the summer months are constantly lit. From the preserved tree-fossil rings, the researchers found that the trees switched from summer mode to winter mode very quickly, in just one month. In the summer the trees were active and grew, while in the winter they were dormant. Modern plants make this switch over several months, and conserve water by making food during the day and resting at night. Another study has shown that the forests were likely composed of a mix of deciduous and evergreen trees.
“There isn’t anything like that today,” Gulbranson said. “These trees could turn their growing cycles on and off like a light switch. We know the winter shutoff happened right away, but we don’t know how active they were during the summertime and if they could force themselves into dormancy while it was still light out.”
Although these trees seem to have been very hardy and able to withstand extreme conditions, they weren’t able to survive the high carbon dioxide levels that led to the mass extinction. The research team will go back this winter (Antarctic summer) for more clues on how these trees went extinct and how the greenhouse gases affected them. |
Kindergarten Activities From "Tools of the Mind"
"Tools of the Mind" is a curriculum that focuses on children learning through self-regulatory activities. This means that unlike traditional teaching, where students are led by the teacher's instructions and guidance, teachers using the "Tools of the Mind" curriculum encourage students to figure out things on their own and help each other. This is said to improve social, cognitive and emotional skills, as well as foster independence. "Tools of the Mind" activities for kindergartners include those focused on self-monitoring, reading, writing and counting.
1 Learning Plans
At the beginning of each class, have your students plan out and monitor their classroom activities by letting them create their own learning plan. Divide a sheet of paper into sections. The number of sections on the sheet will depend on how many stations or activities are in the classroom. Have your students fill in the chart with words or drawings, representing each activity and whether they have done that activity yet or not. For example, if the classroom has a water table, a student may write the word "water" in the box or draw waves or a water drop. Learning plans enable the students to keep track of their learning and to set goals, based on what they want to do next.
2 Buddy Reading
This activity enables children to develop their reading skills on their own, as the "listener" or the "reader." Pair up the students and have one read a book while the other listens. Not only does this develop reading skills, it also develops cooperation and listening skills. The student will have to self-regulate to stay in his role as the listener and to remain quiet and attentive to his partner. The student who is playing the reader will have to read aloud and show his partner the pictures in the book.
3 Scaffolded Writing
Scaffolded writing is a way for children to practice writing and spelling and to sound out words independently. First, draw a picture that represents what your students will write, as this enables your students to plan what they will write. This can be a made-up story or a response to a book that you have read in class. Then, the students should plan the written text by using lines to represent each word. After this, they will fill in each word. Since they are kindergartners, at first, their spelling will be phonetic, but over time their spelling will improve.
4 The Numeral Game
The numeral game is a partner activity which allows students to practice and monitor counting skills. Pair up the students and have them take turns as the "doer" and the "checker." The doer must draw a number card and put that number of objects -- such as counting bears -- in a cup. The checker must then take a sheet of paper that has the corresponding number -- as well as the number of dots on it -- and place each counting bear on a dot to check if the numbers match up. This activity not only introduces students to counting -- which will help with math in future grades -- it will also familiarize the students with role-shifting and cooperation. |
Developing fine motor skills is one of the most important skills a preschooler can work on. Today we have a preschool painting activity that is fun for young kids but also helps to build up those important hand muscles that they need to use in learning how to write.
Fine Motor Skills
Strengthening hand muscles is important for young children to be able to dress themselves, pick up their toys, and use utensils properly. As children get into preschool and are approaching kindergarten, the muscles can continue to be strengthened in order to improve pencil grip and control of a pencil as basic steps towards getting ready to write their letters.
This simple preschool activity uses paint, cotton balls, and tongs. Simple household items that can really make painting fun and unique for a child.
My daughter used the tongs to pick up a cotton ball and dip it in the paint. She loved using the tongs and feeling the cotton ball squish up and down in the paint. Then she continued to hold the tongs as she rubbed the cotton ball around on the paper.
Let your child use a separate cotton ball for each color of paint. My daughter enjoyed just smearing paint around as she got used to controlling it with the tongs.
Once she got used to the feel of the tongs in her hand and the cotton ball on the paper, she began to get more creative with her art work and painted a flower.
It was a fun activity for her and she didn’t even know she was giving her little hand muscles a work out too.
Encourage your child to paint with tongs.
By gripping them in their hands, children are building those pincer grip muscles which will help develop the fine motor skills for good handwriting in school in the next few years. |
Personal, Social and Emotional Development (PSED)
Children’s personal, social and emotional development (PSED) is crucial for children to lead healthy and happy lives, and is fundamental to their cognitive development. Underpinning their personal development are the important attachments that shape their social world. Strong, warm and supportive relationships with adults enable children to learn how to understand their own feelings and those of others. Children should be supported to manage emotions, develop a positive sense of self, set themselves simple goals, have confidence in their own abilities, to persist and wait for what they want and direct attention as necessary. Through adult modelling and guidance, they will learn how to look after their bodies, including healthy eating, and manage personal needs independently. Through supported interaction with other children they learn how to make good friendships, co-operate and resolve conflicts peaceably. These attributes will provide a secure platform from which children can achieve at school and in later life.
Activities at Home
- Play games to encourage sharing and turn taking
- Talk about how things make both you and your child feel
- Encourage your child to wash their hands after going to the toilet
- Have a go at encouraging your child to dress themselves
- When your child does something they shouldn't have, encourage your child to think about what they did and why it was wrong |
Acid reflux, also known as gastroesophageal reflux disease (GERD), can cause scarring over time as the tissue in the esophagus tries to heal itself. Scar tissue is thicker than the normal lining of the esophagus, which causes the esophagus to narrow in places where the scar tissue forms, making it difficult to swallow. This narrowing in the esophagus is called a stricture. Strictures act as a barrier to food being swallowed and can eventually prevent food and even liquids from making their way down the esophagus and into the stomach. Eighty percent of esophageal strictures are related to GERD.
Symptoms of esophageal strictures
- Difficulty swallowing (dysphagia)
- Regurgitation of food
- Weight loss
- Chest discomfort/pain
With strictures, you may find yourself chewing longer, needing to wash food down with water or other liquids and even taking smaller bites of food to help it pass through the esophagus. Some people with strictures begin to eat less because of pain when swallowing. This can lead to weight loss. When food gets stuck in your esophagus from a severe stricture and is vomited back up, you may need immediate treatment.
Doctors can diagnose strictures with a barium esophagram, which outlines the size and location of the stricture or strictures in your esophagus. Your doctor may also perform an endoscopy to evaluate your esophagus visually.
Treatment for Strictures
- Dilation: stretching the wall of the esophagus to enlarge the opening and allow food to pass into the stomach
Doctors may also prescribe PPIs, also known as proton pump inhibitors. This acid-suppression medication may help reduce the need for additional dilations, thereby lowering the possibility for esophageal perforations, bleeding and other complications. |
NCERT Solutions for Class 6 Science Chapter 7 – Getting to Know Plants
NCERT Solutions for Class 6 Science Chapter 7 help students solve and revise the whole syllabus very effectively. After covering all the stepwise solutions prepared by our subject-expert teachers, students will be able to score good marks.
Class 6 plays an extremely important role in every student's career, as it is the period when a student comes across some advanced syllabus content. Our NCERT Solutions for Class 6 Science Chapter 7 is a complete package providing all topics, contents, problems, and, most importantly, self-explanatory solutions. NCERT Solutions for Class 6 Science Chapter 7 will help the students to solve all types of problems related to all the topics. Toppr.com or the Toppr App provides you quality answers to all your questions related to the basic concepts of plants.
Our specialized team of expert teachers has created these NCERT Solutions considering the curriculum, the pattern of board exams and need of students. We will help you to get complete NCERT Solutions for Class 6 Science and other subjects. We are providing you the free PDF download links of the class 6 Science Chapter 7.
Toppr provides free study materials, previous 10 years of question papers, 1000+ hours of video lectures.
CBSE Class 6 Science Chapter 7 NCERT Solutions
This chapter brings to light the beautiful concept of plants and their many varieties. The chapter starts with an introduction to plants and their characteristics in nature and the environment, and makes students aware of the various parts of plants along with their respective functions. Starting with varieties of plants like herbs, shrubs, and trees, the chapter develops itself wonderfully, describing each of the elements of plants like roots, stems, leaves, etc. Along with that, the chapter includes various practical activity tables asking students to develop their minds and apply practical knowledge along with the theoretical one.
Sub-topics Covered under NCERT Solutions for Class 6 Science Chapter 7
- Ex. 1.1: Herbs, Shrubs, and Trees
- Ex. 1.2: Stem, Leaf, and Root
- Ex. 1.3: Flower
Let us discuss the sub-topics in detail –
In this chapter, the students will come to know about the basics of different types of plants. Along with discussing various kinds of plants, the chapter sheds light on the various parts of a plant, like stems and roots.
1.1: Herbs, Shrubs, and Trees
All the plants around you are not the same; each has its own characteristics. Herbs are the ones with shorter stems and tender leaves. Shrubs are plants whose stems branch out near the base. Trees, finally, are the tall plants with a thick brown trunk.
1.2: Stem, Leaf, and Root
Stems are one of the most important parts of any plant, since the stem acts as the messenger of nutrients and water to all the other parts of the plant. Similarly, leaves are important too, because they make and store food for the growth of the plant. Students will also read about the various functions of the root.
A flower is one of the most important parts of a plant. This topic has a high weightage in class 6 from the exam point of view. Students will be introduced to the various parts of a flower and their respective functions with the help of a diagram in the book itself.
You can download NCERT Solutions for Class 6 Science Chapter 7 by clicking on the download button below
Download Toppr – Best Learning App for Class 5 to 12
Toppr covers all the important questions and solutions from the examination point of view. It will help students to develop knowledge of various mathematical concepts to better prepare for competitive exams also. We provide you free pdf downloads, free online classes, free video lectures and free doubt solving sessions. Our faculties are highly adaptive and are very amiable. We are just a click away. Download Toppr for Android and iOS or signup for free. |
The classic Atlas of the Historical Geography of the United States shows exactly how travel times across the United States have evolved over time. Back in the early 1800s, without easily navigable roads or railroads, even a journey from New York to Washington, DC, was a multi-day affair.
Over time, that slowly improved. Construction on the National Road, which stretched from Cumberland, Maryland, across the United States, began in 1811 and continued through the 1830s. The advent of the steamboat also made it easier to use rivers.
The big advance, however, came through trains. By 1857, railroads had improved travel times significantly — culminating with the development of the Transcontinental Railroad in 1869. Even in 1857, travel was easier, thanks to the railroad system.
By 1930, railroads had successfully compressed travel times to a couple of days versus the many weeks it took in the 1800s.
These maps don't just show the rapid pace of technological progress, however. They also show how that progress advanced unevenly, in fits and starts. Railroads didn't reduce travel times right away — they still required significant infrastructure investments, ranging from laying down tracks to building tunnels. That took decades.
The same thing happened to airline travel. This map of air travel times in the 1930s shows it was a huge advance on railroads. But it was still significantly slower than air travel is today.
Travel times may get shorter still. But a faster plane or train isn't enough to change it — the infrastructure has to be able to handle whatever invention comes along next. |
Have you ever wondered what your child is thinking when he says or does something that seems utterly meaningless to you? For example, he or she might want you to meet an imaginary friend, or might be talking to one, and very often our response is to put a stop to it.
But have you thought of taking a different approach? Rather than dismissing what he is saying or doing, why not ask some engaging questions that will give you a better idea of what they are thinking? By the answers they give, you are more likely to find out what and how they are thinking and what they are learning about the world around them.
Engaging questions are those that get a child to go beyond the mere yes and no answers which are so easy. Instead, the kind of questions I have in mind are those that require them to pause and think more carefully and deeply before they answer them. For example, instead of asking them how they liked the movie, you could ask a much more thought-provoking question such as ‘Why do you like the hero in the movie?’ This encourages them to think and process what they saw before giving an answer.
One of the reasons for asking these thought-provoking questions is that their answers will help us to know how children are thinking and making sense of the world they live in. Hearing their ideas provides the perfect opportunity for teachable moments, when we can guide their thinking, clarify what they do not yet understand, and reaffirm what they already know. All this can be done in a conversational and positive manner that affirms their self-esteem and makes learning easy.
Another reason for asking this type of question is that children must think to find the answers, and in the process they also learn. This is a bit more complex than the first reason: in thinking and responding they must put their ideas into words. Expressing their ideas aloud might also change what they are thinking, as sometimes the very act of expressing themselves causes them to rearrange their ideas to sound better and make more sense as they understand it.
So here are five questions you can ask your child to engage their thinking:
- What happens when you do something? This kind of question makes the child think about how one thing leads to another, a kind of cause and effect.
- What do you think will happen when…? You might ask this about what happens to birds when the sun goes down. This is likely to trigger the child to predict an action based on their prior knowledge. This ability to predict events helps a child to understand a story better and to make connections between their various experiences.
- How does one thing remind you of something else? Here the child is being encouraged to use his memory to make connections between current and past experiences and for interpreting the former.
- What do you notice about something? This will help to sharpen the child’s power of observation by making them look more closely at the things around them and use their five senses more. For example, they might notice that the sky gets dark before it rains or that the lightning usually flashes before the thunder sounds or that owls tend to hoot in the nights.
- Why? This question requires an explanation for what the child thinks about something. For example, asking them why they are afraid of the dark or why they think there is a monster in their room should yield some interesting information about their thought process.
When you ask your child any of these or other kinds of engaging questions, you should give them enough time to explain their ideas and then repeat what they say so they will know you are listening and trying to understand. So next time you talk to your child, remember to ask them engaging questions that will help to stimulate their thinking. |
- Glacier: a large mass of moving ice (frozen water).
- Salinity: a measure of the amount of dissolved salts in a given amount of liquid.
- Groundwater is the water located beneath the earth's surface, in soil pore spaces and in the fractures of rock formations. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water.
- An aquifer is a body of rock or sediment that stores groundwater and allows the flow of groundwater.
- Surface water is water on the surface of the planet, such as in a stream, river, lake, wetland, or ocean. It can be contrasted with groundwater and atmospheric water.
- Surface runoff is the water flow that occurs when the soil is infiltrated to full capacity and excess water from rain, snowmelt, or other sources flows over the land. This is a major component of the water cycle, and the primary agent in water erosion.
- A drainage basin or watershed is an extent or area of land where surface water from rain and melting snow or ice converges to a single point at a lower elevation, usually the exit of the basin, where the waters join another water body, such as a river, lake, reservoir, estuary, wetland, sea, or ocean.
- Irrigation is the artificial application of water to the land or soil. It is used to assist in the growing of agricultural crops, maintenance of landscapes, and re-vegetation of disturbed soils in dry areas and during periods of inadequate rainfall.
- The water cycle, also known as the hydrologic cycle or the H2O cycle, describes the continuous movement of water on, above and below the surface of the Earth: from the ocean to the atmosphere to the land and back to the ocean.
- Evaporation is a type of vaporization of a liquid that occurs from the surface of a liquid into a gaseous phase that is not saturated with the evaporating substance. Evaporation is an essential part of the water cycle: the sun (solar energy) drives evaporation of water from oceans, lakes, moisture in the soil, and other sources of water. Evaporation occurs when the surface of the liquid is exposed, allowing molecules to escape and form water vapor; this vapor can then rise up and form clouds.
- Water vapor or aqueous vapor is the gas phase of water, one state of water within the hydrosphere. Water vapor can be produced from the evaporation or boiling of liquid water.
- Transpiration is the process by which moisture is carried through plants from roots to small pores on the underside of leaves, where it changes to vapor and is released to the atmosphere. Transpiration is essentially evaporation of water from plant leaves.
- Condensation is the change of the physical state of matter from gas phase into liquid phase, and is the reverse of vaporization; for example, water collects as droplets on a cold surface when humid air is in contact with it. Water condenses into a cloud.
- Precipitation: any form of water that falls to the Earth's surface from the clouds (in the sky). |
Because imaginative thinking hones creativity and improves students’ social and emotional skills, it’s something that teachers and schools should fold into their planning. Ostroff identified several strategies teachers can adopt to encourage older students to activate their dormant imaginations.
Give students more control. Loosening the classroom structure and allowing students more power over their work can activate their curiosity. Ostroff encourages teachers to “flip the system,” so that students understand that the learning is for them, and not the teachers. As a practical matter, this might mean assigning essays and allowing the students to determine their length, or telling kids to turn the papers in when they’re done rather than on a particular day, or simply offering a free-write period, where students write what they please for their eyes only. Teachers also can invite students to decide for themselves how a paper or assignment is assessed, and to encourage kids to reflect on and evaluate their own work. “They start to crack open when they feel like they’re in charge,” Ostroff said.
Have students track their Google searches. Internet search engines can seem to provide all the answers, blocking students from thinking expansively. For Ostroff, “Google is the beginning of the learning, not the end.” She recommends the following assignment: Ask students to Google something that they find intensely interesting. Then, suggest that they click the hyperlink that’s most appealing, and then the one after that. They should keep track of what interested them in each link, so they develop an awareness of their own process. A student might start by searching “Mayans,” then move to “jewelry they wore,” then “precious metals,” then to “mining.” The point is to understand that learning is not simply finding an answer; it’s going deeper to figure out the next question. The first Google search should be the start of a larger inquiry. “Learning is about letting yourself get carried away,” Ostroff said.
Tell collaborative stories. Reading and telling stories is an effective way to learn. To spark imagination, the teacher might start by writing the first few lines of a story or poem on a piece of paper. She then passes the paper to a student, who adds more to the story. Every student receives the paper in turn, but reads only the written contribution of the student before her. (The paper should be folded to conceal all but the most recent addition.) This kind of impromptu storytelling, with its unpredictable outcome, keeps students engaged and thinking creatively.
Try improv. Once the domain of jazz musicians and comedians, improvisation has found its way into businesses and schools. Improv is the practice of telling stories, or playing music, without scripts. One person begins the story with a few lines, and turns to the person next to her to continue it, and so on, until everyone in the group has contributed. The inviolable rule of improv is “yes, and”—meaning every contribution is accepted, regardless of its randomness, and woven into the story. Improv sparks creativity and spontaneity, and its nonjudgmental tone frees up the introverted or fearful. Because improv tends toward playfulness, it also allows some lightness into the classroom, and to learning.
Introduce real-life experiences whenever possible. What might seem bloodless or irrelevant in the classroom can come alive if students see the subject play out before them. To bring energy to science and math, for example, a teacher might take her class to a Maker Faire, where kids (and sometimes adults) use their imaginations and minds to create new things. Ostroff suggests something as simple as taking a walk in pursuit of objects that can be used to build sculptures; or, if a manufacturer is nearby, asking for their remnants to build machines. Another interesting project for teenagers is building a “box city,” in which students construct their own buildings and work to combine them into a model city. Done right, the box city will take into account economics, geography, history and culture, and give children hands-on experience with design and urban planning.
Encourage doodling. Drawing pictures or coloring while listening is both common and useful: it enables the doodler to stay focused and heightens intellectual arousal. Teachers can capitalize on that benefit by including doodling in class work. For example, students can be given notebooks to doodle in when listening, and asked to do a “doodle content analysis” of their scribbles. Teachers might also ask students to select one or more drawings to modify for an art project, or to combine several doodles into a mural. The point is to be mindful of the value of doodling—how it enhances imagination and improves focus—and to invite students to continue the practice.
Imagine a classroom “creative council.” The council is an imaginary body of visionaries and experts that the students could “create” and then look to for answers to problems. A teacher might ask students to recommend people from the past or present who could “sit” on this council and serve as sources of wisdom. Ostroff writes, “We can tap into their knowledge virtually, by imagining and researching their potential responses and actions.” If students selected Marie Curie, for example, they would speculate about how she would respond to a particular issue. How would she approach the problem? What would she say we’re forgetting? This kind of made-up collective compels students to better understand how another thinks and even provides a kind of “imaginary mentorship.” |
INTRODUCTION

Every day we must rely upon our memory. We must remember to brush our teeth in the morning. We must remember where we parked our car at the supermarket. We must remember to set our alarm clock before we go to bed at night.

It is often said that males and females think in different ways. A study conducted by Jausovec and Jausovec (2005) investigated gender differences in resting EEG related to the level of general and emotional intelligence. It was found that males' brain activity decreased with the level of general intelligence, whereas an opposite pattern of brain activity was observed in females. Therefore, it appears that males and females have different resting EEG correlates of IQ. In a study conducted by Lawton and Hatcher (2005), gender differences in the manipulation of information in visuospatial short-term memory were investigated, specifically the mental integration of two images that had been briefly presented at separate locations or at separate times. Men were more accurate than women in recognizing the combined abstract shape that would result if two individual shapes were overlapped and matched by a dot common to both. This study also discovered that men responded faster than women in this type of situation (Lawton and Hatcher, 2005). Larrabee and Crook (1993) found that women perform better than men in tasks such as verbal learning-and-remembering tasks, name-face associations, and first-last-name association learning (Larrabee and Crook, 1993, as cited in Halpern, 2000). All three of these studies show that women and men have very different ways of thinking; therefore, they will most likely differ in memory and recall as well.

In a study done by Cherney (1999), three- to six-year-old children and adults were exposed to various gendered objects which they were later asked to recognize or recall. The findings of this study revealed gender-schematic processing for all age groups: males tended to recall more male-stereotyped objects than female-stereotyped objects, and females were more likely to recall female-stereotyped objects than male-stereotyped ones. The purpose of this study is to discover whether there is a significant effect of the sex of a participant on the types of gender-associated images recalled.
Data were collected from 28 undergraduate students at a mid-sized university in northwest Missouri. All students were enrolled in a Cognitive Psychology class at the same university.
Each participant was given a piece of paper with 20 lines for recall and one line for their gender. A PowerPoint show of 20 slides with people or objects on them, plus one slide with the word “start” and one with the word “stop,” was shown to the participants. The people or objects were chosen based upon lists provided by male and female students enrolled in a research methods lab class.
Each participant received a paper with 20 lines for recall and one line for their gender. They were instructed to pay attention to the PowerPoint show because they would later be asked to recall as many slides as possible. The PowerPoint slide show was then shown to the participants; slides may be viewed in the appendix. Each slide was shown consecutively for three seconds apiece. After the 20 slides were shown, a slide instructed participants to begin recalling. They were given one minute to recall as many slides as possible. After the minute was up, another slide instructed them to stop.
RESULTS

A 2 (gender of participant) x 2 (gender associated with image) mixed-design factorial ANOVA was calculated comparing the number of gender-associated images recalled by male and female participants. The main effect of the gender of the images was not significant (F(1, 26) = .009, p = .924). The main effect of the gender of the participant showed a non-significant trend (F(1, 26) = 3.976, p = .057). Finally, the interaction of memory for the different kinds of images depending on gender was also not significant (F(1, 26) = 2.269, p = .144). Thus, it appears that neither the gender of the participant nor the gender associated with the image has any significant effect on recall. Males recalled an average of 6.3 male-associated images and an average of only 5.7 female-associated images. Females recalled an average of 5.4 female-associated images and only 4.7 male-associated images.
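For readers who want to run this kind of 2 x 2 mixed-design analysis themselves, here is a minimal sketch using the pingouin library. The scores below are placeholders for illustration, not the study's data.

```python
# Sketch of a 2 (participant gender, between) x 2 (image gender, within)
# mixed-design ANOVA. Placeholder scores only -- not the study's data.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "subject":         [1, 1, 2, 2, 3, 3, 4, 4],
    "participant_sex": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "image_sex":       ["male", "female"] * 4,    # measured twice per subject
    "recalled":        [6, 5, 7, 6, 4, 6, 5, 5],  # images recalled per category
})

aov = pg.mixed_anova(data=data, dv="recalled", within="image_sex",
                     subject="subject", between="participant_sex")
print(aov[["Source", "F", "p-unc"]])  # both main effects and the interaction
```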
DISCUSSION

Merriam-Webster defines memory as the power or process of reproducing or recalling what has been learned and retained, especially through associative mechanisms. The purpose of this study was to see if a person's gender would significantly affect the recall of gender-associated images. It was thought that males would remember male-associated images better than females would, and that females would remember female-associated images better than males would. It was found that although males did remember more male-associated images and females remembered more female-associated images, the effect of the gender of the participant on the recall of the images was not significant. A much greater effect had been expected. The results found in this study are somewhat similar to the study done by Isabelle Cherney, in which participants were shown pictures of female-associated, male-associated, or neutral toys and were then asked to name the toys that they could recall. It was found that males did recall more male-associated toys, although females recalled more male-associated toys as well. In Cherney's study, it was stated that children ages three to six years old have better recall of toys, objects and activities labeled or stereotyped for their own sex relative to the opposite sex. Research has also shown that girls tend to like male-stereotyped toys more than boys like female-stereotyped toys. The finding that males recalled more male-associated toys was similar to the finding in this study that males recalled more male-associated images. Studies based on the Allport-Vernon-Lindzey Scale of Values assessment instrument have shown that there are strong and consistent differences in the interests, values, and attitudes of females and males (Halpern, 2000). It is assumed that these strong differences in interests would affect the images recalled by either gender. As shown in this study, males recalled an Xbox 360 more often than females, and females recalled a purse more often than males; these objects are stereotypically associated with the corresponding genders. Some of the limitations of this study were the number of participants, the number of slides shown, and the order in which the slides were presented. In reading the participants' recall sheets, the researcher noticed that many participants recalled the first three slides in the exact order in which they were presented.
REFERENCES

- Cherney, I. D. Children's and adults' recall of sex-stereotyped toy pictures: Effects of presentation and memory task. Infant and Child Development, 14, 11-27.
- Jausovec, N., & Jausovec, K. Sex differences in brain activity related to general and emotional intelligence. Brain and Cognition, 59(3), 277-86.
- Halpern, D. F. (2000). Sex Differences in Cognitive Abilities. Mahwah, New Jersey: Lawrence Erlbaum Associates Publishers.
- Hatcher, D. W., & Lawton, C. A. Gender differences in integration of images in visuospatial memory. Sex Roles, 53, 717-24.
- Merriam-Webster dictionary. Retrieved November 30, 2006, from Merriam-Webster Online: http://www.m-w.com. |
Encryption dates back through the ages. The ancient Hebrews used a basic cryptographic system called ATBASH, which worked by replacing each letter with the letter the same distance from the end of the alphabet as the original is from the beginning: A was sent as Z, and B was sent as Y.
The Spartans also had their own form of encryption, called the scytale. This system worked by wrapping a strip of papyrus around a rod of fixed diameter, on which the message was then written. The recipient wrapped the paper around a rod of the same diameter to read the message. If anyone intercepted the paper, it appeared as a meaningless string of letters.
The Caesar cipher also used the alphabet, but swapped each letter for the letter three places further along. In this system, Caesar wrote D instead of A and E instead of B.
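Both ciphers are easy to sketch in a few lines of Python. The snippet below is an illustration added here, not code from the original text; it handles only the uppercase letters A-Z.

```python
import string

ALPHA = string.ascii_uppercase

def atbash(text):
    """Mirror the alphabet: A<->Z, B<->Y, and so on."""
    return text.upper().translate(str.maketrans(ALPHA, ALPHA[::-1]))

def caesar(text, shift=3):
    """Shift each letter forward by `shift` places (Caesar used 3)."""
    return text.upper().translate(str.maketrans(ALPHA, ALPHA[shift:] + ALPHA[:shift]))

print(atbash("ATTACK"))  # ZGGZXP
print(caesar("ATTACK"))  # DWWDFN
```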
More complicated substitution ciphers were developed through the Middle Ages as individuals became better at breaking simple encryption systems. In the ninth century, Abu al-Kindi published what is considered the first paper discussing how to break cryptographic systems, titled "A Manuscript on Deciphering Cryptographic Messages." It deals with using frequency analysis to break cryptographic codes. Frequency analysis is the study of how frequently letters or groups of letters appear in ciphertext; the patterns uncovered can help an analyst break the cipher.
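A toy version of frequency analysis against the Caesar cipher might look like the following sketch (again an illustration, not from the original text); the ciphertext is "DEFEND THE EAST WALL OF THE CASTLE" shifted by three.

```python
from collections import Counter

ciphertext = "GHIHQG WKH HDVW ZDOO RI WKH FDVWOH"  # Caesar cipher, shift 3
counts = Counter(c for c in ciphertext if c.isalpha())

# E is by far the most common letter in English, so guess that the most
# frequent ciphertext letter stands for E and recover the shift from it.
top_letter, n = counts.most_common(1)[0]
shift = (ord(top_letter) - ord("E")) % 26
print(f"most frequent letter: {top_letter} ({n} times); guessed shift: {shift}")
# -> most frequent letter: H (6 times); guessed shift: 3
```

On very short messages the most frequent letter may not stand for E, which is why real attacks use long ciphertexts and compare whole frequency tables.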
All three encryption techniques discussed are considered substitution ciphers, which operate by replacing bits, bytes, or characters with alternate bits, bytes, or characters. Substitution ciphers are vulnerable to frequency analysis and are no longer used.
In the first half of the twentieth century, mechanical devices such as the German Enigma machine, which used a series of internal rotors to perform the encryption, and the Japanese Purple machine were developed to counter the weaknesses of substitution ciphers. Today, in the United States, the National Security Agency (NSA) is responsible for coding and code breaking. It helped lead the implementation of the Data Encryption Standard (DES).
Modern cryptographic systems no longer rely on simple substitution and transposition alone. Today, block ciphers and stream ciphers are used. A block cipher, such as DES, operates on fixed-length groups of bits; a stream cipher encrypts one digit at a time.
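As a toy illustration of the stream idea (not a secure cipher, and not from the original text): a keystream of bytes is XORed into the plaintext one byte at a time, whereas a block cipher would instead transform fixed 8- or 16-byte groups as a unit.

```python
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the next keystream byte (key repeated)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

msg = b"ATTACK AT DAWN"
ct = xor_stream(msg, b"KEY")
print(ct)
print(xor_stream(ct, b"KEY"))  # XORing twice with the same keystream restores the message
```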
Think about our science topic last term, where we looked at recycling. What recyclable materials can you use to make a rocket?
Y1: Can you have a go at designing your own rocket? Remember the different parts that a rocket needs e.g. fuel tanks and a place for the astronauts! Can you then label the different parts of your rocket?
Y2: same as Y1, but also describe the simple physical properties of a variety of everyday materials and explain why each would be a good material to make that part out of. For example, the windows could be made out of cling film because it is transparent. |
Student Age and Learning Needs
Same age does not mean same learning needs
Students who are the same age do not always have the same learning needs. For example, as shown in the figure below, students in the typical fourth grade classroom can range from first- to twelfth-grade reading level.
As you can see in the figure, most of the fourth grade students perform near the fourth grade reading level. However, a great many have reading achievement well above (or well below) the fourth-grade reading level. This means that their learning needs are quite different from the standard fourth-grade curriculum, as well as from their fourth-grade peers. Some argue that academically talented students don’t “need” services and can be challenged within the regular classroom. However, there is no evidence that most teachers can effectively teach twelve reading levels simultaneously without substantial support and training. This is not a criticism of teachers—juggling twelve sets of learning needs at one time is quite a task! Even common techniques, like forming small groups, put students performing three or more grade levels apart in the same group.
One effective way of matching learning environment with learning needs is academic acceleration. Acceleration comes in many forms, such as: subject-specific acceleration (e.g., having a fourth-grade student go to the sixth-grade class for reading), curriculum compacting (e.g., going through an entire year’s worth of curriculum in one semester), and even grade skipping. The appropriate form of acceleration depends on an individual student’s needs and situation. |
Several factors can affect the learning process. Some of these factors lie within ourselves, while others are external. According to Hutabarat, there are seven factors that influence the learning process.
Factors That Influence Learning
1. Intelligence Factor
Intelligence here means a person's ability to carry out complex and abstract thinking. The level of intelligence is not the same for everyone: some people's is high, some medium, and some low. People with high intelligence can process abstract, complicated and difficult ideas quickly and without much difficulty, compared to people who are less intelligent.
Intelligent people can think and do more, faster, and with relatively little effort. Intelligence is an ability present from birth; education cannot increase it, only develop it. But high intelligence alone is no guarantee that a person will complete their education well, because success in learning is determined not only by intelligence but also by other factors.
2. Learning Factors
The learning factor covers all aspects of the learning activity itself: for example, being unable to focus attention on the subject at hand, or failing to master the relevant rules and therefore not reading all the material that should be read. It also includes poor mastery of effective and efficient ways to learn.
3. Attitude Factor
Attitude strongly influences students' learning activities and success. Attitude can determine whether someone learns smoothly or not, whether the learning lasts or not, whether they enjoy the lessons they face or not, and much more. The attitudes meant here include interest, open-mindedness, prejudice and loyalty. A positive attitude towards learning speeds learning up.
4. Activity Factor
Activity factors are those related to a person's health, physical fitness and physical condition. As is well known, an unhealthy body disturbs the concentration of the mind and so interferes with learning.
5. Emotional and Social Factors
Emotional factors such as displeasure and liking, and social factors such as competition and cooperation, have a profound effect on the learning process. Some of these factors encourage learning, but others are obstacles to effective learning.
6. Environmental Factors
Environmental factors are the circumstances and atmosphere in which a person learns. The atmosphere and condition of the place of study help determine the success or failure of learning activities. Noise, foul odors, mosquitoes that interfere with study time, and chaotic conditions at the learning place all have a large effect on learning success. Relationships that are less than harmonious with friends can also disrupt concentration in learning.
7. Teacher Factors
The teacher's personality, the teacher's relationship with students, the teacher's ability to teach and the teacher's attention to students' abilities also influence learning success. Teachers who are less able to teach well and who …
by Drew VandeCreek
The French, British and Spanish each established colonial footholds on the vast American continent in the seventeenth century and vied with one another for supremacy. Between the 1670s and 1763, the French controlled the sparsely populated Illinois country.
Europeans first visited Illinois in the 1670s, when Father Jacques Marquette, a French Jesuit priest, led a small party of explorers west from Lake Michigan. Marquette's band examined lands the French crown had claimed sight-unseen during this period of empire-building.
French voyageurs, trappers and missionaries built a hybrid society many historians have come to call a "middle ground." Although priests converted a significant number of Native Americans to Christianity, French settlers in North America generally respected Indian society and culture. Frenchmen and Indians mingled freely, exchanged ideas and cultural practices, and frequently intermarried. Native American tribes had for centuries practiced hunting, gathering, fishing and subsistence agriculture. The French found that their preference for hunting and trapping, and distaste for intensive agriculture, meshed easily with Native Americans' approaches to living in nature.1
In the middle of the eighteenth century the French and British empires clashed in a conflict that has come to be known as the French and Indian War. French and Indian forces, sensing the threat to their mutually beneficial arrangement in the American backcountry, pulled together to oppose British forces. But the powerful British Empire briefly strengthened its hand in North America with victory in this struggle. The French slowly pulled out of North America. The British, for their part, determined that the territory west of the Allegheny Mountains and north of the Ohio River should serve as a large Indian preserve closed to general settlement by English colonists.
By 1783 the American Revolution had produced an amazing reversal of fortune in North America. The United States of America stood as a new nation, and British redcoats retreated to a few forts in the northwest (today's Michigan and Ohio).
Americans eyed the lands that the British had sought to set aside for Indian tribes intently. As Americans pushed westward into the new frontier, they replaced the French and Indian "middle ground" social arrangements with a legal system emphasizing private property, economic goals emphasizing intensive agricultural and eventually industrial development, and cultural ideals that posited the rise of Christian civilization, marked Indians as savages, and mandated a firm distinction in gender roles.
Eastern officials struggled to administer lands west of the Alleghenies. In 1778 Virginia had announced that it considered huge tracts of western lands its own, and formed the County of Illinois, an entity that included all of present-day Illinois. By 1783 the Virginians had ceded these claims under pressure from the other new states. The Illinois country became part of the national lands, and eventually fell under the jurisdiction of Thomas Jefferson's Northwest Ordinance.
The Northwest Ordinance organized lands comprising the modern-day states of Ohio, Michigan, Indiana, Illinois and Wisconsin as the Northwest Territory. The Ordinance provided for a systematic surveying of the territories, laying out townships on a simple grid. Such an arrangement helped ensure that arriving settlers secured good title to their lands, and solved the persistent land disputes that had so disrupted the settlement of Kentucky.
The Northwest Ordinance also barred slavery in the new territory, but American settlers brought the peculiar institution west anyway, ensuring a future of political contention and Civil War.
Illinois remained sparsely populated until the conclusion of the War of 1812. The federal government set aside large tracts of land between the Illinois and Mississippi Rivers for war veterans. While many veterans never made it to Illinois to stake their claims, the introduction of these federal lands to the open market sparked the settlement of central and western Illinois.
Before these developments American settlers had concentrated along southern Illinois' waterways, mirroring the French pattern of settlement. Most early settlers sought to maintain close contact with available water and timber, and feared pushing into the unknown prairies. Many southerners followed the Ohio and Tennessee Rivers west and north to the new Illinois country, and forged lasting ties with their southern homeland.
In 1818 Illinois officials succeeded in producing a population count sufficient to support a petition for statehood. Nathaniel Pope, Illinois' quick-thinking Territorial Delegate to the United States Congress, moved the new state's northern boundary north from a position near the southernmost tip of Lake Michigan to its present position forty-one miles to the north. In doing so, Pope secured Lake Michigan coastline and the Chicago-Illinois River portage, each of which proved very important in American economic development.
In 1820 55,211 souls made Illinois their home (a figure considerably below the 60,000 supposedly required for admission to statehood, suggesting Illinois officials' chicanery in 1818). By 1830 population had increased to 157,445. The 1830s proved to be a decade of enormous growth, and by 1840 476,183 Illinoisans lived on the prairie. By 1850 Illinois had grown to include over 850,000 inhabitants; on the eve of the Civil War, in 1860, over 1.7 million occupied the state.
The large growth of the period after 1830 stemmed largely from improvements in the American transportation system. The 1825 completion of New York State’s Erie Canal, linking the Hudson River with Lake Erie, greatly facilitated westward migration from New England, and opened northern Illinois to a generation of Yankee immigrants.
While many southerners in Illinois reveled in their individual liberty and casual living arrangements, Yankee settlers brought a firm set of social and cultural norms west with them.
Foremost among these was the idea that women should refrain from working outside the home and focus their energies upon their families, while men’s earnings supported the entire household. A scarcity of labor had pushed many frontier men and women into roles in which they shared the farm and house work alike. Yankee ideals of feminine domesticity and civilization pushed this notion of household economy aside, and set the tone for social life well into the twentieth century.
Improved transportation networks also helped new settlers to push beyond Illinois, making the state, and especially Chicago, an important supplier of tools and other goods that settlers could not fashion with their own hands. By 1850 railroads had begun to link Chicago with the new west, filling the city with large warehouses storing both products harvested in western fields and forests and eastern manufactured goods for sale to western settlers.
By 1860 Illinois was no longer the frontier. Farmers had turned the soil on a majority of the Prairie State's acreage. Small towns and cities dotted the landscape, and Chicago had grown into a large city. In the years after the Civil War Americans would use improved transportation networks to continue to populate new, more westerly lands, subdue their Indian inhabitants, and fashion their distinctive system of legal, political, and cultural institutions.
1. Richard White, The Middle Ground: Indians, Empires, and Republics in the Great Lakes Region, 1650-1815 (Cambridge: Cambridge University Press, 1991).
Mobile-Based Quiz App: Review of Related Literature
The prevalence of mobile technologies is in itself a motivator to exploit them for learning. Mobile technologies are already widespread among children (NOP 2001). It makes sense, then, for an educational system with limited information and communication technology (ICT) resources to make the most of what children bring to the classroom.
Mobile technologies provide an opportunity for a fundamental change in education, away from occasional use of a computer in a lab towards more embedded use in the classroom and beyond (Hennessy 1999).
Sharples (2003) suggests that rather than seeing them as disruptive devices, educators should seek to exploit the potential of the technologies children bring with them and find ways to put them to good use for the benefit of learning practice.
Soloway et al (2001) have further argued that to make any difference in the classroom at all, computers must be mobile and within ‘arm’s reach’. The nature of learning is closely linked to the concept of mobility.
Vavoula and Sharples (2002) suggest that there are three ways in which learning can be considered mobile: “learning is mobile in terms of space, ie it happens at the workplace, at home, and at places of leisure; it is mobile between different areas of life, ie it may relate to work demands, self-improvement, or leisure; and it is mobile with respect to time, ie it happens at different times during the day, on working days or on weekends”.
The supposed value of mobiles also arises from the manner in which they facilitate lifelong learning. Mobiles can support the great amount of learning that occurs during the many activities of everyday life, learning that occurs spontaneously in impromptu settings outside of the classroom and outside of the usual environment of home and office. They enable learning that occurs across time and place as learners apply what they learn in one environment to developments in another (Sharples et al., 2007).
Mobile phones theoretically make learner-centered learning possible by enabling students to customize the transfer of and access to information in order to build on their skills and knowledge and to meet their own educational goals.
Given that social interaction is central to effective learning, as indicated by theories of new learning (Nyiri, 2002; Sharples et al., 2007), mobile phones should also impact educational outcomes by facilitating communication. Mobiles permit collaborative learning and continued conversation regardless of physical location and thus advance the process of coming to know, which occurs through conversations across contexts and among various people. Via mobile technology, learners engage in conversation whereby they resolve differences, understand the experiences of others, and create common interpretations and shared understanding of the world.
In promoting educational modalities that accord with the theories of new learning, mLearning also offers an appeal that impacts educational outcomes (Geddes, 2004). MLearning can be particularly appealing for those who have not succeeded in traditional learning environments; it can attract those not enamored of traditional learning approaches, which are generalized and decontextualized in nature. MLearning is also beneficial in that it can provide immediate feedback and thus continued motivation for those who are not motivated by traditional educational settings. Moreover, mLearning presents an appeal simply because the use of mobile technology in and of itself presents something new and exciting for a great array of learners.
Games can also be adapted based on students' needs (O'Neil, Wainess, & Baker, 2005). Appropriate scaffolding can be provided in games through the use of levels. Supports are embedded into games such that easier levels are typically played first, advancing to more complex levels as the player achieves mastery. For example, scaffolding is built into the science mystery game Crystal Island by allowing students to keep records of the information they have gathered and the hypotheses they have drawn (Ash, 2011). Other scaffolding can be achieved through the use of graphics, such as navigation maps, which can lower a player's cognitive load while playing the game.
Games also are built with clear goals and provide immediate feedback (Dickey, 2005). This allows players to change their game play in order to improve their performance and reach their goals. The idea of immediate feedback is also prominent in good formative assessment processes: students will improve their work when given constructive feedback (Black & Wiliam, 1998).
Games are frequently cited as important mechanisms in teaching because they can accommodate a wide variety of learning styles within a complex decision-making context (Gee, 2003; Spires, 2008; Squire, 2006). The skills and contexts of many games take advantage of technology that is familiar to students and use relevant situations.
Game design needs to adapt to different target groups, contexts, and so on (Adams, 2010; cf. Klopfer et al., 2011). This applies in particular to educational games: there is a vital need to tailor learning offers (i.e., educational games) to learners' needs and capabilities and to the learning targets. Intelligent adaptive game mechanisms generally reflect this need, and to a certain degree so do the patterns Level and Score. In this way the pattern approach reflects varying target groups and contexts. A more specific analysis, e.g. of the extent to which individual patterns reflect learners' needs or capabilities, is still required. Future research also needs to verify the effectiveness of mobile learning games and to corroborate their educational value in order to motivate students to use such tools; otherwise, the educational system may run the risk of disengaging future learners.
A number of mobile game-based learning projects have already tested and evaluated the effects of mobile games on students' learning. Only a few trace their findings back to individual game mechanisms or patterns in order to better understand why a game is successful. Instead, reports often attribute effects to the use of the game itself, e.g. "students found the use of Lecture Quiz engaging; they perceived they learn more using such games" (Wang et al., 2008).
Mobile technologies offer learning experiences which can effectively engage and educate contemporary learners and which are often markedly different from those afforded by conventional desktop computers. These devices are used dynamically, in many different settings, giving access to a broad range of uses and situated learning activities. The personal nature of these technologies means that they are well suited to engaging learners in individualized learning experiences, and to giving them increased ownership (and hence responsibility) over their own work. Most previous reviews of mobile technologies for learning categorize examples of use according to curriculum area. We believe that the benefits of mobile technologies for learning encompass more than just what an individual can do with a device, and that there is thus a need for a wider review of new and emerging practices and how these relate to theories and paradigms previously established for the use of computers in education.
The close relation of learning to the context and the situation in which the learning need arises has been widely discussed in the literature (Brown et al. 1989; Lave and Wenger 1991), and the benefits of just-in-time, situated learning have been explored (Goodyear 2000). Nyiri (2002) notes that knowledge is information in context, and since mobile devices enable the delivery of context-specific information they are well placed to enable learning and the construction of knowledge.
However, learning does not just end with the game. Debriefing is critical to using games in education as it provides the connection between learning in the game and applying those skills to other contexts (Lederman & Fumitoshi, 1995). Teachers can facilitate the transfer of skills by leading pre- and post-game discussions which connect the game with other things students are learning in class (Ash, 2011). Ke (2009) concluded that instructional support features are necessary in order for the lessons learned in computer games to transfer to other contexts. Video games can be used to create deeper learning experiences for students, but they do not provide the entire experience. Steinkuehler & Chmiel (2006) suggest that games will not replace teachers and classrooms, but they might replace some textbooks and laboratories.
The concept of mobile learning (mLearning) – understood for the purposes of this article as learning facilitated by mobile devices – is gaining traction in the developing world. The number of projects exploring the potential of mobile phone-facilitated mLearning in the developing world is steadily growing, spurred in part by the use of mobile technology in the educational sector in the developed world which has expanded from short-term trials on a small scale to large-scale integration. However, there remains a lack of analysis that brings together the findings of the rising number of mLearning projects in the developing world.
With the increasing attention now being given to the role of mobiles in the educational sector in developing countries, there is a need at this juncture to take stock of the available evidence of the educational benefits that mobile phones provide in the developing world. Consequently, this article explores the results of six mLearning projects that took place in several developing countries in Asia – the Philippines, Mongolia, Thailand, India, and Bangladesh – both because most developing-country mLearning interventions are being undertaken in Asia and because developments in Asia seem to indicate that the region could become the global leader in educational uses of mobiles (Motlik, 2008). In exploring how mobile phone-facilitated mLearning contributes to improved educational outcomes, this article examines two specific issues: 1) the role of mobiles in improving access to education, and 2) the role of mobiles in promoting new learning, those new learning processes and new instructional methods currently stressed in educational theory. Of note, the projects reviewed deal with both formal and non-formal education as defined by Dighe, Hakeem, and Shaeffer (2009, p. 60).
Review of Related Studies
Despite improvements in educational indicators, such as enrolment, significant challenges remain with regard to the delivery of quality education in developing countries, particularly in rural and remote regions. In the attempt to find viable solutions to these challenges, much hope has been placed in new information and communication technologies (ICTs), mobile phones being one example. This article reviews the evidence of the role of mobile phone-facilitated mLearning in contributing to improved educational outcomes in the developing countries of Asia by exploring the results of six mLearning pilot projects that took place in the Philippines, Mongolia, Thailand, India, and Bangladesh. In particular, this article examines the extent to which the use of mobile phones helped to improve educational outcomes in two specific ways: 1) in improving access to education, and 2) in promoting new learning. Analysis of the projects indicates that while there is important evidence of mobile phones facilitating increased access, much less evidence exists as to how mobiles promote new learning.
Skills Arena (Lee et al., 2004) is a mathematics video game, implemented on the Nintendo Game Boy Advance system, that supplements traditional curricula and teaching methods.
MOBIlearn, a major European research project, is focusing on the context-aware delivery of content and services to learners with mobile devices (Lonsdale et al., 2003; 2004).
Insofar as mLearning exerts an impact on educational outcomes by increasing access, it represents a continuation and improvement of distance learning through increased utility and applicability (Keegan, 2002). MLearning, the literature suggests, broadens the availability of quality education materials through decreased cost and increased flexibility, while also enhancing the efficiency and effectiveness of education administration and policy.
How does a solar cell work? Is it possible to create one using simple lab apparatus? Asked by: Megha
Answer: Solar cells (photovoltaics) use the energy from light photons to create an electrical potential between two layers of silicon crystal. The atomic nature of silicon, with some added impurities, is what makes it all possible. The outer electron shell of a silicon atom contains four electrons. Since it takes eight electrons to fill the shell, a silicon atom is continually looking for four electrons to bond with. It finds them by bonding covalently with other silicon atoms, forming a characteristic crystalline structure. Silicon atoms thus joined are very stable and are not electrically conductive, but this is where the impurities come in. By 'doping' the silicon with substances such as phosphorus and boron, entirely different electrical properties are introduced, creating semi-conductive material.
For instance, when phosphorus joins with silicon, it creates an N-type semi-conductive material, because phosphorus has five electrons in its outer shell. The silicon bonds with four of them, leaving one electron hanging out by its lonesome and giving the material a negative charge. If boron joins with silicon, it creates P-type semi-conductive material (positive charge), as boron has three electrons in its outer shell. Even though silicon bonds with it, an electron 'hole' is left where the material is positively charged and still seeking an electron.
If layers of phosphorus-doped silicon and boron-doped silicon are joined together with metal leads or conduits, an electrical potential can be created with some help from light. When light photons strike the phosphorus layer containing the extra electrons, those electrons can be sheared off and freed. When they are, they immediately recognize the potential in the boron layer and head that way. If a load (some work that you want to have done with electricity) happens to be connected between these two layers where the potential has been created, then the migrating electrons form useful electrical current.
Solar cells are a wonderful alternative energy source but have definite limitations. Since not all visible light is useful for this process, most of the sunlight's energy cannot be used to free electrons in the solar cells. Much of it is reflected, or passes through without hitting the desired electron target. In addition, the electrical potential is very small, and even with the most efficient solar cells they must be chained together in large arrays to produce enough electricity to be useful. Because of the way they produce their electricity, solar cells do experience a slight drop in effectiveness over time, but they essentially never wear out. Then of course there is the most obvious problem: what do you do when the sun is not shining?
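One way to see why much of the sunlight goes unused: a photon can free an electron only if its energy E = hc/λ is at least the band gap of silicon, commonly quoted as about 1.1 eV. The sketch below is purely illustrative and is not from the original answer; the band-gap figure and the sample wavelengths are assumed typical values.

```python
# Illustrative sketch: which photon wavelengths can free electrons in silicon?
# Assumes the commonly quoted band gap of crystalline silicon, ~1.1 eV.
PLANCK_EV_NM = 1239.84        # h*c in eV*nm, so E[eV] = 1239.84 / wavelength[nm]
SILICON_BAND_GAP_EV = 1.1

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron-volts for a given wavelength in nanometres."""
    return PLANCK_EV_NM / wavelength_nm

for wavelength in (400, 700, 1100, 1500):  # violet, red, near-IR, IR
    energy = photon_energy_ev(wavelength)
    useful = energy >= SILICON_BAND_GAP_EV
    print(f"{wavelength:5d} nm -> {energy:4.2f} eV, frees an electron: {useful}")
```

Infrared photons beyond roughly 1130 nm fall below the band gap, so that part of the spectrum simply passes through or heats the cell.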
The nature of the work required to fabricate semi-conductive materials is probably beyond the realm of simple lab equipment. But many solar cell companies will give away broken cell fragments for the asking if you are looking for something to play with.
Also, if you are an educator, contact the Florida Solar Energy Center.
- How Solar Cells Work by NoOutage.com
- Bond and Molecular Polarity by Brenda Wojciechowski and Paul Cerpovicz (Georgia Southern University)
Answered by: Stephen Portz, Technology Teacher, Space Coast Middle School, FL
The market offers a broad choice of childcare providers and preschools with different approaches to early childhood education. In most cases, the objective is to prepare children for Kindergarten and life beyond, although the ways to achieve the goal may vary between schools.
For many years, play-based education has gained momentum in the educational community and among experts in early childhood education. It only takes a quick online search, and the abundance of content about play-based learning it turns up, to realize that this is an area of varying opinions and continuous research.
To simplify the discussion, the concept of play-based learning can be summarized as learning while at play.
Why Play-based Learning?
Children are naturally inclined to play. Using this natural disposition in a learning context, a play-based activity is a developmental opportunity that fosters children's excitement and motivation to learn.
Play-based learning is a hands-on learning approach in which children are encouraged to go through trial and error to solve problems.
Dr Kathy Hirsh-Pasek, Director of the Infant Language Laboratory at Temple University, says that “playful learning engages and motivates children in ways that support better developmental outcomes and strategies for life-long learning. If we hope to groom intelligent, socially skilled, creative thinkers for the global workplace of tomorrow, we must return play to its rightful position in children’s lives today.”
The Benefits of Play-based Learning
Children are stimulated through play. They become active learners that are exposed to challenging problems that they have to solve through exploration.
It is important to note that a good balance between child-initiated free play and teacher-guided play with intentional thematic teaching is essential to enable and maximize the benefits of this approach.
Play-based learning supports positivity in the acquisition process and builds upon the natural curiosity of children about the nature of things and their environment. It boosts children’s enthusiasm and rewards persistence, imagination, and creativity to overcome problems. Ultimately, this impacts children’s self-esteem.
Play-based Learning vs Directed Instruction
Compared to directed instruction, in play-based learning failures are not a source of stress leading to demotivation. Failures are an opportunity for more experimentation, and they carry no negative impact on self-esteem, which could otherwise have disastrous long-term consequences on a child's life.
This does not mean that directed instruction has no place in education. It means that young children have more involvement and input into their knowledge and skill acquisition in a system where play is the vehicle for learning. The NAEYC (National Association for the Education of Young Children) offers interesting examples of connecting play to learning.
Play-based activities offer a great framework for teachers to introduce STEM or STEAM while at play. STEAM is the acronym for Science, Technology, Engineering, Art, and Mathematics. STEM or STEAM activities are perfectly complementary to play-based learning.
Are You Interested in Learning More about Play-based Education?
If you are interested in learning more about play-based learning and how we implement it at Willowdale Children's Academy, do not hesitate to contact us.
Converts electric current from one unit to another, e.g. from milliamperes (mA) to amperes (A) or vice versa.
- Electric resistivity units
More than 170 resistivity units. Here you will find common units such as ohm times metre (Ω × m) or ohm times inch (Ω × inch), but also less known ones, for example statohm times inch (statohm × inch).
- Electromagnetic waves
Table shows common classification of electromagnetic waves based on frequency (wavelength). Also, example methods of producing/generating and applications for given wavelengths are presented.
More than 25 frequency units. Common SI units such as hertz (Hz), kilohertz (kHz) and gigahertz (GHz), but also some less known ones like radians per second, RPM (rotations per minute) or degrees per hour.
Generates sound with specified frequency and waveform type (sine, triangle etc.).
Table shows resistivity values of common materials (substances).
Table shows common Morse codes for letters, digits and selected special codes such as international SOS signal.
Calculator encodes a given text message into Morse code or vice versa. You can also hear (as beeps) and see (as a sequence of light pulses) your Morse-encoded message. For better readability, the currently played Morse character is highlighted (marked).
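A text-to-Morse encoder of the kind described reduces to a lookup table plus a joining rule. The sketch below is a minimal illustration, not the site's actual implementation, and includes only a small subset of the code table.

```python
# Minimal sketch of a text-to-Morse encoder (not the calculator's actual code).
# Only a small subset of the table is shown; a real encoder covers all letters,
# digits and special signals such as SOS prosigns.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "O": "---", "S": "...", "T": "-",
    "1": ".----", "2": "..---", "3": "...--",
}

def encode(text: str) -> str:
    """Encode text as Morse; letters separated by spaces, words by ' / '."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE) for word in words
    )

print(encode("SOS"))  # ... --- ...
```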
Current, voltage, resistance: calculations related to Ohm's law. Enter the known values (e.g. voltage and resistance of a conductor) and we'll show you step-by-step how to transform the basic formula and find the missing value (e.g. current).
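The rearrangements such a tool walks through all come from the single relation U = I·R; given any two of voltage, current and resistance, the third follows. Below is a minimal sketch of such a solver, an illustration rather than the calculator's actual code.

```python
# Minimal sketch of an Ohm's-law solver: supply any two of U (volts),
# I (amperes) and R (ohms) and the missing one is computed from U = I * R.
def ohms_law(U=None, I=None, R=None):
    if U is None:
        return ("U", I * R)
    if I is None:
        return ("I", U / R)
    if R is None:
        return ("R", U / I)
    raise ValueError("leave exactly one quantity unknown")

print(ohms_law(U=12.0, R=470.0))   # ('I', 0.0255...) -> about 25.5 mA
```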
Table shows properties of common periodic signals (sine, square, triangle etc.) such as absolute mean value, effective value or shape factor.
Converts electrical resistance from one unit to another, e.g. from ohms (Ω) to megaohms (MΩ) or vice versa.
Calculator decodes the parameters of a resistor (resistance value, tolerance, temperature coefficient) painted as colored bands on the resistor, and vice versa.
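For the common four-band code, the first two bands give significant digits, the third a power-of-ten multiplier, and the fourth the tolerance. The sketch below decodes under those assumptions; it is illustrative only and the color table is abbreviated.

```python
# Minimal sketch of a four-band resistor colour-code decoder.
# First two bands: digits; third band: multiplier; fourth band: tolerance.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
TOLERANCE = {"gold": 5.0, "silver": 10.0, "brown": 1.0, "red": 2.0}

def decode(band1, band2, multiplier, tolerance):
    """Return (resistance in ohms, tolerance in percent)."""
    value = (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** DIGITS[multiplier]
    return value, TOLERANCE[tolerance]

print(decode("yellow", "violet", "red", "gold"))  # (4700, 5.0) -> 4.7 kΩ ±5 %
```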
Converts voltage from one unit to another, e.g. from millivolts (mV) to volts (V) or vice versa.
- Resistance of a wire of a given length
Calculator shows the relation between wire dimensions (length, cross-sectional area), kind of material (resistivity) and the resistance of the final conductor.
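The relation behind this tool is R = ρ·L/A: resistance grows with length and resistivity and shrinks with cross-sectional area. The sketch below is illustrative; the copper resistivity used is a typical handbook value assumed here, not one taken from the page.

```python
# Minimal sketch of the wire-resistance relation R = rho * L / A.
# The copper resistivity below is a typical handbook value (assumed here).
RHO_COPPER = 1.68e-8  # ohm * metre

def wire_resistance(length_m: float, cross_section_mm2: float,
                    rho: float = RHO_COPPER) -> float:
    """Resistance in ohms of a uniform wire; cross-section given in mm^2."""
    area_m2 = cross_section_mm2 * 1e-6
    return rho * length_m / area_m2

print(wire_resistance(100.0, 1.5))  # ~1.12 ohms for 100 m of 1.5 mm^2 copper
```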
Neurology is a medical specialty focused on the brain, spinal cord and the body’s network of nerves. Together these comprise the “nervous system,” which essentially controls everything that happens in the body and everything the body does — from digestion and breathing to moving, thinking and learning.
The brain is the hub of the nervous system: Like a “mission control center” it interprets and responds to the messages the peripheral nerves send it from inside and outside the body. The spinal cord is often a kind of first stop and relay station for these signals, sending them up to the brain from there, but also able to respond to some on its own (we call these “reflexes”). Let’s say you put your hand over a flame; peripheral nerves in the finger send the message “HOT!” over larger nerves in the arm and eventually to even larger nerves in the neck where the signal enters the spinal cord. The spinal cord forwards the “HOT!” message up to the brain at the same time it sends a message to the arm muscles to pull the hand away. By the time the brain knows there was something “HOT!” going on, the hand is already out of the fire, and it all happens in a fraction of a second.
Hundreds of diseases and disorders may affect the nervous system and cause a wide range of symptoms, from weakness, numbness or pain to problems moving, speaking, breathing, swallowing, seeing, hearing or remembering. The specific problems that may develop and the symptoms and severity vary depending on what part or parts of the system are affected and the underlying cause.
Causes of neurologic problems include genetic defects (e.g., involving nerve or muscle function), brain or spinal cord injuries or tumors, infections (e.g., meningitis, encephalitis), diseases or disorders of blood vessels (e.g., stroke, brain hemorrhage, migraine), loss of the lining around nerves (multiple sclerosis), and diseases that damage or destroy the brain’s nerve cells (e.g., Alzheimer’s disease, Parkinson’s disease). The root causes of some of these issues remain unclear, although with stroke we do know that some of the responsible factors are the same ones that increase the risk for heart attacks (i.e., family history, obesity, elevated cholesterol, high blood pressure, diabetes and smoking).
Many neurologic disorders and their associated symptoms have the potential to cause significant impairment, and many are challenging to treat. As with all health problems, but particularly for potentially-treatable conditions, an accurate diagnosis is the critical first step in the care of patients with signs or symptoms of a neurologic disorder. Although primary care physicians are trained to recognize possible neurologic problems and can treat or manage some disorders of the nervous system, a neurology specialist is typically consulted at some point in the care process.
Physicians who specialize in the diagnosis and nonsurgical management of disorders of the nervous system are called neurologists. Physicians in other disciplines will often consult a neurologist as the principal specialist involved in evaluating a patient with a suspected neurologic disorder. Neurologists are experienced in carrying out and interpreting results of specialized neurologic tests, such as brain or spinal cord imaging, studies of the electrical activity of the brain (electroencephalography) or peripheral muscles and nerves (electromyography, nerve conduction studies), evaluations to identify problems during sleep (e.g., sleep-related movement or breathing disorders), and spinal fluid tests (spinal tap).
Once a diagnosis is reached, the neurologist’s role in patient care varies depending on the severity and complexity of the diagnosis. For example, a neurologist may initially treat a patient who has suffered a head injury or stroke or who has newly diagnosed migraine and then may serve as a consultant on that patient’s ongoing treatment and management. A neurologist may be the principal care provider for patients with uncommon or difficult-to-treat conditions or with disorders that require frequent care (e.g., epilepsy, multiple sclerosis, Parkinson’s disease).
Neurologists do not perform surgery but may recommend surgical treatment. In this case, a patient would be referred to a neurosurgeon — a physician who specializes in surgical treatment of disorders of the brain, spinal cord, and other nervous system components.
Internal Resistance And Matching In Voltage Sources.
Internal Resistance and Matching in Voltage Sources - Both the terminal voltage of a voltage source and the current depend on the load, i.e. on the external resistance. The terminal voltage is measured as a function of the current; from it the internal resistance and no-load voltage of the voltage source are determined, and the power graph is plotted.

Tasks:
1. To measure the terminal voltage Ut of a number of voltage sources as a function of the current, varying the external resistance Re, and to calculate the no-load voltage U0 and the internal resistance Ri, for: 1.1 a slimline battery; 1.2 a power supply (1.2.1 alternating voltage output; 1.2.2 direct voltage output).
2. To measure directly the no-load voltage of the slimline battery (with no external resistance) and its internal resistance (by power matching, Ri = Re).
3. To determine the power diagram from the relationship between terminal voltage and current, as illustrated by the slimline battery.

What you can learn about: voltage sources; electromotive force (e.m.f.); terminal voltage; no-load operation; short circuit; Ohm's law; Kirchhoff's laws; power matching.
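The evaluation rests on the linear relation Ut = U0 - I·Ri: plotting terminal voltage against current gives U0 as the intercept and Ri as the negative of the slope, and the delivered power P = Ut·I peaks at the matching point Re = Ri, where Ut has fallen to U0/2. The sketch below illustrates this evaluation with invented measurement values; it is not data or code from the experiment itself.

```python
# Sketch of the evaluation: fit Ut = U0 - I*Ri to (current, terminal-voltage)
# pairs by least squares, then locate the maximum delivered power. The data
# below are invented for illustration, not measurements from the experiment.
import numpy as np

I  = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # current in A
Ut = np.array([1.48, 1.45, 1.40, 1.30, 1.10])   # terminal voltage in V

slope, intercept = np.polyfit(I, Ut, 1)
U0, Ri = intercept, -slope
print(f"no-load voltage U0 = {U0:.2f} V, internal resistance Ri = {Ri:.3f} ohm")

# Power matching: P = Ut * I is greatest when the external resistance Re
# equals Ri, i.e. where the terminal voltage has dropped to U0 / 2.
P = Ut * I
print(f"largest measured power: {P.max():.3f} W at I = {I[P.argmax()]:.2f} A")
```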
The Gradual Decline of Coral Reefs in the UAE: Efforts for Preservation
By: MacKenzie DiLeo/Arab America Contributing Writer
The United Arab Emirates is home to some of the largest and most significant coastal and marine habitats, most notably coral reefs. Coral reefs are among the most productive and biologically diverse ecosystems on the planet, and they support about 25% of known marine species. In addition to their biological benefits, coral reefs also provide serious economic value: they deliver social, economic and environmental benefits to millions of people through services like livelihoods and food security via fisheries, revenue from tourism, and even shoreline erosion prevention and protection from natural disasters. However, due to local and global pressures, coral reefs are facing a rapid decline.
Coral Reefs in the UAE
Coral reefs in the UAE exhibit some of the highest thresholds for coral bleaching and mortality in the world. The UAE's coastlines rank 38th in the world in terms of reef size. A majority of these exceptionally large reefs are located in Abu Dhabi, the UAE's largest emirate, and comprise roughly 34 different coral species. Nonetheless, these same reefs are threatened by urban and industrial encroachment and climate change. Given their high thresholds, however, studies have shown the potential of UAE coral reefs to thrive at molecular, physiological, and ecological levels even in unfavorable circumstances. Because of their adaptability and high tolerance of extreme conditions, the coral reefs of the UAE (specifically Abu Dhabi) are important to researchers and marine scientists focused on the sustainability of reefs across the globe.
As in many other locations across the Gulf and along other coastlines, coral reefs are crucial to the UAE for its thriving marine life. The UAE also depends on them for protection against damaging waves, not to mention the substantial tourist revenue the country receives from reefs that attract visitors from all over the world. Without them, the UAE would suffer a huge loss of revenue and of a prosperous habitat for its large marine life population.
Preserving these Coral Reefs on an International Level
With coral reefs serving as an important biological and economic necessity, preservation through efficient and effective management should be a top priority. International efforts to address the decline of the reefs are in place, including environmental agreements and programs and international partnerships and networks. Among them are the International Coral Reef Action Network (ICRAN), the United Nations Environment Program (UNEP), and the UNEP World Conservation Monitoring Center. ICRAN was launched in 2000 to address the world's decline in reefs on a broad scale, and the UNEP and its monitoring center work together to take action to reverse the decline of the reefs. Despite these international efforts, however, greater effort is needed to directly improve the conservation of reefs, specifically in the UAE.
Coral Reef Preservation in the UAE
The UAE is currently playing a leading role in efforts to preserve coral reefs across its region and the Arabian Gulf. Its main efforts are concentrated on harvesting coral and creating artificial reefs. In April 2019, the UAE officially announced its intention to begin a major project to create the world's largest artificial reef off the coast of Fujairah. Conservationists in the Florida Keys have committed to helping the UAE with this extensive project. The nation hopes to finish the project within the decade, but an official completion date has yet to be announced. The project itself will help restore coral coverage to original levels by introducing new colonies; the plan proposes placing 300,000 mature adult colonies at the site of the artificial reef.
With Florida's help, the UAE will also implement a new technique that makes coral grow more quickly. This technique is known as micro-fragmenting, in which scientists place one or two polyps onto ceramic disks. The coral then invests all of its energy into growing laterally, covering the surface of the disk in just six to eight months. Because the coral grows outward instead of upward, it grows much faster than it otherwise would. While creating these reefs may take a long time, other countries that also depend on their coral reefs should consider such a project to preserve their reefs' well-being.
The Asian tsunami of 2004 was caused by tectonic activity beneath the Indian Ocean. A fault twenty miles below the ocean surface ruptured, forcing one of the plates to be thrust upwards by as much as 40 feet. The ocean above was forced upwards and the displaced water moved out as a series of giant ripples. From the land, the first sign of a tsunami is the water being dragged out to sea. The vertical wall of the tsunami destroyed everything in its path.
Students should create annotated diagrams showing the tectonic movement leading to a tsunami. They can also model the behaviour of a tsunami or other wave as it enters shallower water, explaining why this brings about the cresting and then breaking of the wave. Considering the energy transfers taking place may help explain the destructive nature of a tsunami and could link to wave and energy topics in physics.
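One relation students can use when modelling the wave is the standard shallow-water (long-wave) speed c = √(g·h): as the depth h decreases toward the shore, the wave slows, and its energy piles up into a taller, steeper wave. The sketch below evaluates this relation for a few illustrative depths; the depth values are assumptions, not figures from the text.

```python
# Shallow-water wave speed c = sqrt(g * h): a standard long-wave relation,
# shown here for a few illustrative ocean depths (values assumed).
import math

G = 9.81  # gravitational acceleration, m/s^2

def wave_speed(depth_m: float) -> float:
    """Long-wave (tsunami) speed in m/s for a given water depth in metres."""
    return math.sqrt(G * depth_m)

for depth in (4000, 200, 10):  # open ocean, continental shelf, near shore
    c = wave_speed(depth)
    print(f"depth {depth:5d} m -> {c:6.1f} m/s ({c * 3.6:6.0f} km/h)")
```

The slowdown from roughly 700 km/h in the open ocean to tens of km/h near shore is what forces the wave to steepen and crest.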
Lectins are proteins that recognize and bind specific carbohydrates found on the surfaces of cells. They play a role in interactions and communication between cells, typically for recognition: carbohydrates on the surface of one cell bind to the binding sites of lectins on the surface of another cell. Binding results from numerous weak interactions that together form a strong attraction. A lectin usually contains two or more binding sites for carbohydrate units, and the carbohydrate-binding specificity of a given lectin is determined by the amino acid residues that bind the carbohydrate. Key points about these carbohydrate-binding proteins:
- The enormous diversity of carbohydrates has biological significance: different monosaccharides can be joined to one another through any of several -OH groups, and extensive branching is possible. Many more different oligosaccharides can be formed from four sugars than oligopeptides from four amino acids.
- Lectins promote interactions between cells: a lectin's role is to facilitate cell-cell contact. Lectin and carbohydrate are linked by a number of weak non-covalent interactions. In C-type (calcium-requiring) lectins, a calcium ion on the protein acts as a bridge between protein and sugar through direct interactions with sugar -OH groups.
- The influenza virus binds to sialic acid residues: it recognizes sialic acid residues linked to galactose residues on cell-surface glycoproteins. These carbohydrates are bound by hemagglutinin, a viral protein, and the virus is engulfed by the cell and starts to replicate. Neuraminidase, a viral enzyme that cleaves the glycosidic bonds to the sialic acid residues, then frees the virus to infect new cells, spreading the infection.
Lectins are capable of binding to many different types of carbohydrates. Because of this capability, the way that a lectin binds to carbohydrates, the materials necessary for binding, and the strength of the bond all vary. Some of the various forms of binding are discussed below.
- Monosaccharides and disaccharides have shallow grooves to which lectins bind, making the affinity of the bond low. Because of the difficulty that lectins face when binding to these carbohydrates, subsite multivalency (a spatial extension of the grooves) is necessary to achieve binding. This extension embeds the contact site on the carbohydrate within a more complex contact region. This type of binding works most efficiently with small lectins, as evidenced by the lectin hevein, which is only 43 amino acids long. Rapid binding kinetics also facilitate the binding of lectins to carbohydrates. An example of this is the binding of sialyl Lewis x (a tetrasaccharide) to P-selectin: rapid binding kinetics allow spatial complementarity to be reached between a low-energy conformation of the carbohydrate and the prearranged binding site of the lectin.
- The shape of the binding sites in carbohydrates plays a role in their bonding to lectins. An example is the binding of galectin-1 to ganglioside GM1 (a pentasaccharide). Nuclear magnetic resonance and other molecular modeling techniques were used to analyze the bond between these two molecules. The images showed that two branches of the carbohydrate are bonded to the lectin. The α2,3-sialylgalactose linkage is able to adopt three different low-energy conformers, one of which is energetically favorable for the binding of galectin-1 to ganglioside GM1. This is evidence that lectins prefer certain conformations (shapes) of a carbohydrate when binding, and it shows that oligosaccharides have limited flexibility. This limited flexibility makes oligosaccharides very favorable ligands, since they avoid entropic penalties.
- Core substitutions have been found to occur in N-glycans. These substitutions are added at specific positions on the carbohydrate during its assembly. They have been found to markedly affect the properties of glycans, so much so that their effects can be noticed even in the absence of lectins. These substitutions, by altering specific parts of the carbohydrate, act as molecular switches governing the shape of glycans.
- Branching also introduces molecular switches. This property is best exemplified in the glycoside cluster effect: enhancing the numerical valency of a molecule results in an increase in affinity, and the type of branching appears to have a significant effect on this increase.
Importance of Carbohydrates in Cell Communication
Carbohydrates contain abundant information as a result of the various composition and structures that are possible. These diverse compounds result from the many OH groups available for linkage, which further allow for extensive branching. Additionally, the substituent attached to the anomeric carbon can assume either an alpha or beta configuration. The presence of these various carbohydrates on cell surfaces allows for effective cell-to-cell communication.
Functions of Lectins
Lectins are known to be very widespread in nature. They can bind to soluble carbohydrates or to carbohydrate groups that are part of a glycoprotein or glycolipid. Lectins typically bind these carbohydrates on certain animal cells, and binding sometimes results in glycoconjugate precipitation.
In animals, lectins regulate the cell adhesion to glycoprotein synthesis, control protein levels in blood, and bind soluble extracellular and intracellular glycoproteins. Also, in the immune system, lectins recognize carbohydrates found specifically on pathogens, or those that are not recognizable on host cells. Clinically, purified lectins can be used to identify glycolipids and glycoproteins on an individual's red blood cells for blood typing.
C-Type lectins are those that require a calcium ion. The calcium ion helps bind the protein and carbohydrate by interacting with the OH groups found on the carbohydrate. Calcium can also form a linkage between the carbohydrate and glutamates in the lectin. Binding is further strengthened through hydrogen bonds that form between the lectin side chains and the OH groups of the carbohydrate. Carbohydrate recognition and binding is made possible by a homologous domain consisting of 120 amino acids. These amino acids determine the specificity of carbohydrate binding.
C-type lectins carry out a wide range of functions, such as cell-to-cell adhesion, immune response to foreign bodies and self-cell destruction. They are categorized into various subgroups according to their different protein functional domains. These lectins are calcium-ion dependent and share linear structural homology in their carbohydrate-recognition domains. This wide range of protein families, including endocytic receptors, collectins, and selectins, is found most abundantly among eukaryotes and the animal kingdom. Members of the family differ in the kinds of carbohydrate complexes that they recognize with high affinity. C-type lectins are involved in immune defense mechanisms and help protect an organism against tumorous cells.
P-type lectins are defined by their recognition of a phosphorylated carbohydrate, mannose 6-phosphate. CD-MPR and CI-MPR, the cation-dependent and cation-independent mannose 6-phosphate receptors, are the only two members of the P-lectin family. The main function of P-type lectins in eukaryotic cells is to deliver newly synthesized soluble acid hydrolases to the lysosome. They do this by binding to mannose 6-phosphate residues found on the N-linked oligosaccharides of the hydrolases.
MPRs (mannose 6-phosphate receptors) were discovered during studies of mucolipidosis II (ML II), a lysosomal storage disorder. Hickman and Neufeld found that fibroblasts from ML II patients were able to absorb lysosomal enzymes excreted by normal cells, whereas fibroblasts from normal patients were not able to absorb the lysosomal enzymes excreted by ML II cells. Hickman and Neufeld hypothesized that lysosomal enzymes carry a recognition tag that allows for receptor-mediated uptake and transport to lysosomes. The receptors that recognize these tags later became known as MPRs.
CI-MPR is about 300 kDa and exists as a dimer. The overall folding of CI-MPR is similar to that of CD-MPR, but unlike CD-MPR, CI-MPR is cation-independent. In addition, CI-MPR binds to proteins that carry the MPR tag, to IGF-II (a peptide hormone), and to other non-lysosomal hydrolases. The N-terminal three domains of CI-MPR exist as a monomer and form a tri-lobed disk whose lobes make significant contact with one another; this feature of the tri-lobed disk is vital in maintaining the structure of its sugar-binding site. Phosphorylated glycan microarray studies demonstrate that CI-MPR shows little disparity in binding between glycans having one or two phosphomonoesters, unlike CD-MPR, which has been shown to prefer glycans with two phosphomonoesters. In addition, CI-MPR binds ligands at the cell surface, unlike CD-MPR. Overall, all of the ligand-binding sites of CI-MPR are located on the odd-numbered domains. Four signature residues in CD-MPR and in domain 3 of CI-MPR are conserved and have been found to interact with Man-6-P in the same manner, suggesting that the Man-6-P binding pockets are similar. One difference is that the pocket in CD-MPR contains Mn2+, whereas the binding pocket in CI-MPR does not; this could be the reason why CI-MPR is cation-independent.
CD-MPR is a 46 kDa cation-dependent homodimer. Three disulfide linkages formed by six cysteine residues in the extracellular region of CD-MPR are key to the folding of the homodimer. Because the 15 contiguous domains of the extracytoplasmic region of CI-MPR are similar in size and amino acid sequence to one another, it is understood that CD-MPR and CI-MPR have similar tertiary structures. In fact, CD-MPR and domains 1, 2, 3, 11, 12, 13 and 14 of CI-MPR have the same fold in the extracytoplasmic domain. The overall fold of the CD-MPR monomer consists of a flattened beta barrel formed by two antiparallel beta sheets, one with four strands and the other with five. The CD-MPR dimer thus contains two five-stranded antiparallel beta sheets. Mutagenesis studies of CD-MPR have found E133, Y143, Q66, and R111 to be essential for Man-6-P binding. CD-MPR's binding and unbinding mechanism is similar to the oxy-to-deoxy transition of hemoglobin: the overall movement has been described as a "scissoring and twisting" motion between the two subunits of the dimer interface. These two subunits are connected via a salt bridge; its absence results in weaker binding of lysosomal enzymes, signaling the importance of ionic interactions between the two subunits in binding.
Selectins are C-type lectins that play a role in the immune system. They come in L, E, and P forms, which bind to carbohydrates found on lymph-node vessels, the endothelium, and activated blood platelets, respectively. Like other C-type lectins, they have a high affinity for calcium binding and are responsible for immune responses. Selectins are sugar-binding proteins that adhere to other cells, which makes them highly effective in targeting an inflammatory response to a localized region. Selectins target only specific kinds of binding sites, which allows them to work in conjunction with the leukocyte adhesion cascade to target an infected region with minimal collateral damage.
Examples of Lectins
Embryos are attached to the endometrium of the uterus through L-Selectin. This activates a signal to allow for implantation.
E. coli are able to reside in the gastrointestinal tract by lectins that recognize carbohydrates in the intestines.
The influenza virus contains hemagglutinin which recognizes sialic acid residues on the glycoproteins located on the surface of the host cell. This allows the virus to attach and gain entry into the host cell.
Gabius, Hans-Joachim, Sabine Andre, Jesus Jimenez-Barbero, Antonio Romero, and Dolores Solis. "From Lectin Structure to Functional Glycomics: Principles of the Sugar Code." Trends in Biochemical Sciences 36.6 (2011): 298-313. Print.
Language Development in the Early Years
ERIC Identifier: ED446336
Publication Date: 2000-10-00
Author: Lu, Mei-Yu
Source: ERIC Clearinghouse on Reading English and Communication Bloomington IN.
THE SOCIAL ROOT OF LANGUAGE DEVELOPMENT
This digest, written from a social interaction perspective, provides readers an overview of children's language development in the first five years of their life. The primary function of language, according to Vygotsky (1962), "in both adults and children is communication, social contact" (p.19). Through daily interaction with other language users, children learn how to use language to convey messages, to express feelings, and to achieve intentions which enable them to function in a society. Muspratt, Luke, and Freebody (1997) argue that the language that members of a specific community use reflects the values and beliefs that are embedded in their culture and ideologies; in the same way, the culture and dominant ideologies within learning contexts also have a strong impact on the learners' perceptions of the language learning process. In other words, language is a cultural tool which provides the means for members of a group to retain their shared identity and to relate with each other. Through the process of language learning, parents socialize their children into socially and culturally appropriate ways of behaving, speaking, and thinking.
The process of language acquisition for young children is built upon a variety of experiences. From birth, parents and caregivers involve infants in communicative exchanges. These exchanges accompany activities shared by adults and infants, such as bathing, feeding, and dressing. During these activities, parents and caregivers comment on the infants' actions and often repeat and exaggerate their vocalizations (Fernald & Mazzie, 1991). Such communicative exchanges between adults and infants function as a form of social interaction. This social interaction helps build intimacy between adults and infants, enhances infants' interests in their environment, and provides them with stimulation for later language development (Burkato & Daehler, 1995).
THE FIRST YEAR
Crying is the earliest form of infant vocalization. But after only a few weeks of experience with language, infants begin to vocalize in addition to crying: they coo. Infants generally begin to coo at about one month of age (Shaffer, 1999). Cooing is repeating vowel-like sounds such as "oooooh" or "aaaaah." Infants coo when their parents or caregiver interact with them. At around 3 or 4 months, infants start to add consonant sounds to their cooing, and they begin to babble at between 4 and 6 months of age. Babbling consists of consonant and vowel sounds. Infants are able to combine these consonant and vowel sounds into syllable-like sequences, such as mamama, kaka, and dadadada (Berk, 2000; Shaffer, 1999). Through interacting with parents or caregivers by such cooing and babbling, infants develop a sense of the role of language in communication by the end of the first year. The linkage between communication and sound-making signals the onset of true language (Glover & Bruning, 1987).
THE SECOND AND THIRD YEAR
In the beginning of the second year, children's first words emerge. The first words are also called "holophrases" because children's productive vocabulary usually contains only one or two very simple words at a time, and they seem to utter single words to represent the whole meaning of an entire sentence (Shaffer, 1999). Children's first words are usually very different from adults' speech in terms of the pronunciation, and these first words are most frequently nominals--labels for objects, people, or events (Bukatko & Daehler, 1995). In addition, children's first words are quite contextual. They may use a single word to identify something or somebody under different conditions (such as saying "ma" when seeing mother entering the room), to label objects linked to someone (saying "ma" when seeing mother's lipstick), or to express needs (saying "ma" and extending arms for wanting a hug from the mother). In the initial stage of the first-word utterance, children produce words slowly. However, once they have achieved a productive vocabulary of ten words, children begin to add new words at a faster rate, called "vocabulary spurt" (Barrett, 1985).
By their second birthday, children begin to combine words and to generate simple sentences (Bukatko & Daehler, 1995). Initially, the first sentences are often two-word sentences, gradually evolving into longer ones. Children's first sentences have been called "telegraphic speech" because these sentences resemble the abbreviated language of a telegram. Like the telegram, children's first sentences contain mainly the essential content words, such as verbs and nouns, but omit the function words, such as articles, prepositions, pronouns, and auxiliary verbs (Berk, 2000).
Although children's first sentences seem ungrammatical by adult standards, they are far more than random strings of words. Instead, they have a structure of their own. A characteristic of this structure is that some words, called "pivot words," are used in a mostly fixed position and are combined with other, less frequently used words referred to as "open words," which can be easily replaced by other words (Braine, 1976). For example, a child may use "more" as a pivot word and create sentences such as "more cookie," "more car," and "more doggie."
Creativity also plays an important role in this first sentence stage. Research has revealed that many of children's early sentences, such as "allgone cookie" and "more read," are creative statements which do not appear in adult speech (Shaffer, 1999). As with first words, context plays an important role in understanding children's first sentences. As children's use of simple sentences increases, the amount of single-word use declines, and their sentences become increasingly elaborate and sophisticated (Glover & Bruning, 1987).
THE PRESCHOOL YEARS
By the time children are 3 1/2 to 4 years of age, they have already acquired many important skills in language learning. They have a fairly large working vocabulary and an understanding of the function of words in referring to things and actions. They also have a command of basic conversational skills, such as talking about a variety of topics with different audiences. Nevertheless, language development, especially vocabulary growth and conversational skills, continues (Glover & Bruning, 1987). It is generally agreed that vocabulary learning is not accomplished through formal instruction. Instead, the meaning of new words is usually acquired when children interact with other more skilled language users during such natural situations as riding, eating, and playing (Beals & Tabors, 1995). From these activities, children are able to construct hypotheses when hearing unfamiliar verbal strings. They then test these hypotheses by further observation or by making up new sentences themselves. Finally, through feedback and further exposure, children revise and confirm their hypotheses (Bukatko & Daehler, 1995).
The development of conversational skills also requires children's active interaction with other people. To communicate with others effectively, children need to learn how to negotiate, take turns, and make relevant as well as intelligible contributions (Schickedanz, Schickedanz, Forsyth, & Forsyth, 1998). Through interacting with other more experienced language users, children modify and elaborate their sentences in response to requests for more information (Peterson & McCabe, 1992). As children interact with their playmates, their conversations usually include a series of turn-taking dialogues (Glover & Bruning, 1987). In addition, young children learn to adjust their messages to their listeners' level of understanding (Shatz & Gelman, 1973).
By the time children enter elementary school, their oral language is very similar to that of adults (Shaffer, 1999). They have acquired the basic syntactic, semantic, and pragmatic elements of their native language. Language development will continue, however, from early childhood through adolescence and into adulthood.
In summary, language learning is both a social and a developmental process. To acquire a language, children must interact with other more competent language users as well as explore various aspects of the linguistic system. During the early years of language learning, children also create, test, and revise their hypotheses regarding the use of language. Parents and early childhood educators should provide these young learners with developmentally appropriate language activities, offer opportunities for them to experiment with different aspects of language learning, and honor their creativity.
This project has been funded at least in part with Federal funds from the U.S. Department of Education under contract number ED-99-CO-0028. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. ERIC Digests are in the public domain and may be freely reproduced.
Barrett, M. D. (1985). Issues in the study of children's single word speech. In M. D. Barrett (Ed.), "Children's single-word speech". Chichester, England: Wiley.
Beals, D. E., & Tabors, P. O. (1995). Arboretum, bureaucratic and carbohydrate: Preschoolers' exposure to rare vocabulary at home. "First Language", 15, 57-76.
Berk, L. E. (2000). "Child development" (5th ed.). Boston: Allyn & Bacon.
Braine, M. D. S. (1976). Children's first word combinations. "Monographs of the Society for Research in Child Development", 41 (Serial No. 164).
Bukatko, D., & Daehler, M. W. (1995). "Child development: A thematic approach". Boston: Houghton Mifflin Company.
Fernald, A., & Mazzie, C. (1991). Prosody and focus in speech to infants and adults. "Developmental Psychology", 27, 209-221.
Glover, J. A. & Bruning, R. H. (1987). "Educational Psychology". Boston, MA: Little, Brown & Company.
Muspratt, S., Luke, A., & Freebody, P. (1997). "Constructing critical literacies". Cresskill, NJ: Hampton Press.
Peterson, C. & McCabe, A. (1992). Parental styles of narrative elicitation: Effect on children's narrative structure and content. "First Language", 12, 299-321.
Schickedanz, J. A., Schickedanz, D. I., Forsyth, P. D., & Forsyth, G. A. (1998). "Understanding children and adolescents" (3rd ed.). Boston: Allyn and Bacon.
Shaffer, D. R. (1999). "Developmental psychology: Childhood & adolescence" (5th ed.). Pacific Grove, CA: Brooks/Cole Publishing Company.
Shatz, M., & Gelman, R. (1973). The development of communication skills: Modifications in the speech of young children as a function of listener. "Monographs of the Society for Research in Child Development", 38 (Serial No. 152).
Vygotsky, L. S. (1962). "Thought and language". (E. Hanfmann & G. Vakar, Eds. & Trans.). Cambridge, MA: MIT Press.
One of the most interesting characteristics of gases is that regardless of their individual chemical properties, all gases basically follow the same set of gas laws. These laws describe the relationships between pressure, volume, temperature and the amount of a gas. According to these laws, gases behave in a predictable way when one or more of these factors change. In order to understand how a decrease in both pressure and temperature will affect a fixed amount of a gas, we must first understand the laws that govern the behavior of gases.
Boyle's Law explains how the pressure and volume of a gas are related. This law states that when the temperature and mass of a gas sample are held constant, the pressure of the gas and its volume have an inverse relationship. So as the pressure increases, the volume of the gas decreases. As the pressure decreases, the volume increases.
Charles' Law deals with the relationship between temperature and volume of a gas. This law says that when the pressure and mass of a gas are constant, the volume of a gas is directly proportional to the temperature as measured in kelvins. As the temperature increases, the volume increases. As the temperature decreases, the volume does too.
Amontons' Law explains how pressure and temperature are related. When a gas sample is held at a constant volume, the pressure and temperature have a direct proportional relationship. As the temperature of the gas increases, the pressure increases, and as the temperature decreases, the pressure decreases as well.
Avogadro's hypothesis says that equal volumes of gases at the same temperature and pressure will have the same number of gas molecules, regardless of the type of gas. The relationship between the number of gas molecules and the volume of the gas is directly proportional. As the number of molecules increases, the volume of the gas increases.
Ideal Gas Law
All of these gas laws are combined to make the Ideal Gas Law. This law gives the relationship between pressure, volume, temperature and amount of a gas. The Ideal Gas Law is represented as PV=nRT, where P is the pressure, V is the volume, n is the number of moles of the gas, R is a constant known as the universal gas constant and T is the temperature in kelvins.
Pressure and Temperature Decreases
While a decrease in pressure will cause an increase in the volume of a gas, a decrease in temperature will cause a decrease in volume. The net effect on volume can be determined using the ideal gas law. The formula PV=nRT can be solved for volume. The resulting formula is V=nRT/P. The effect of pressure and temperature decreases is found by inserting these values, along with the number of moles of the gas, into this equation. This will give the final volume of the gas.
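To see how this plays out numerically, here is a short C sketch that solves PV=nRT for volume before and after a simultaneous drop in pressure and temperature. The amounts, pressures and temperatures are hypothetical values chosen only for illustration:

    #include <stdio.h>

    int main(void) {
        const double R = 8.314;                /* universal gas constant, J/(mol*K) */
        double n = 1.0;                        /* amount of gas in moles (hypothetical) */
        double T1 = 300.0, P1 = 101325.0;      /* initial temperature (K) and pressure (Pa) */
        double T2 = 270.0, P2 = 50000.0;       /* both temperature and pressure decreased */

        /* Solve PV = nRT for volume: V = nRT / P. */
        double V1 = n * R * T1 / P1;
        double V2 = n * R * T2 / P2;

        printf("Initial volume: %.4f m^3\n", V1);   /* about 0.0246 m^3 */
        printf("Final volume:   %.4f m^3\n", V2);   /* about 0.0449 m^3 */
        /* With these numbers the pressure drop outweighs the temperature drop,
           so the net effect is an increase in volume. */
        return 0;
    }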
Brightfield Digital Image Gallery
Zygnema Green Algae
In the world of filamentous freshwater algae (division Chlorophyta), the genus Zygnema, with its two stellate chloroplasts per cell, is a standout. Found often alongside Spirogyra, another still-water green algal genus, Zygnema species are classified as conjugate algae (phylum Gamophyta) because of their means of sexual reproduction by conjugation.
Forming green or yellow-brown mats of macroscopic threads or filaments, Zygnema can reproduce asexually, sexually, or vegetatively. Akinetes, spore-like bodies with very thick cell walls, are produced when the environment becomes unsuitable, allowing these green algae to withstand droughts and harsh winters via this means of asexual reproduction. Alternatively, in deteriorating conditions, sexual reproduction can commence when two side-by-side filaments grow conjugation tubes toward each other. DNA from each plant moves toward the center of the tube, fuses, and forms a zygospore. The zygospore sinks into the sediments of the still waters, awaiting more favorable habitat conditions before emerging as new filamentous algal strands. Both scalariform and lateral conjugation occur in the genus, each culminating in zygospore formation. There are many species in the genus, and individual types of Zygnema are distinguishable by the shape, size, and other characteristics of the zygospores they produce.
Fragmentation of Zygnema filaments accounts for vegetative reproduction, particularly when nitrates and phosphates ("nutrients") are available in sufficient quantities. As with Spirogyra, thick mats and green clouds of Zygnema indicate over-fertilization or "enrichment" of water bodies, often by contaminated stormwater runoff. Large blooms can greatly influence the chemistry of a water body, particularly in microhabitats within the pond, ditch, spring, or vernal pool.
During the day, the hair-like strands of green algae produce relatively large volumes of dissolved oxygen, even raising the dissolved oxygen concentration above the saturation level (supersaturation). At night, as with other green plants, the metabolic processes reverse and the masses of green algae consume oxygen and produce carbon dioxide gas as a cellular respiratory waste product. On cloudy days or at night, the rapid creation of carbon dioxide in the water column can change the pH of the water too rapidly for obligate aquatic organisms such as fish, resulting in stress and abnormal behaviors that make them much more susceptible to piscivores. During winter months in temperate regions, particularly under the cover of ice and snow, the algal mats die and decompose, creating a significant biological oxygen demand due to respiring bacteria feeding on the decaying organic plant matter. Severe competition for limited dissolved oxygen resources, particularly in shallow ponds, between the bacteria and the larger aquatic organisms, such as fishes and tadpoles, often results in a phenomenon referred to as "winterkill", a die-off of fish populations that remains largely undiscovered until the following spring.
Cynthia D. Kelly, Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
SSL is a security protocol used for securing communications on the Web. A protocol is a set of rules or procedures. SSL technology takes a message and runs it through a set of steps that "scrambles" the message so that it cannot be read while it is being transferred. This "scrambling" is called encryption.
When the message is received by the intended recipient, SSL unscrambles the message, checks that it came from the correct sender (Server Authentication) and then verifies that it has not been tampered with (Message Integrity).
SSL makes use of Digital Certificates to authenticate one or both parties of an Internet transaction. A digital certificate is a means of binding the details about an individual or organization to a public key and it serves two purposes:
- First, it provides a cryptographic key that allows another party to encrypt information for the certificate's owner.
- Second, it provides a measure of proof that the holder of the certificate is who they claim to be - because otherwise, they will not be able to decrypt any information that was encrypted using the key in the certificate.
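In practice, programs rarely implement these steps by hand; they call a TLS library. The following C sketch, written against the OpenSSL API (version 1.1 or later), shows the broad shape of a client connection. The host name is a placeholder and error handling is minimal, so treat it as an outline of the steps described above (connect, authenticate the server via its certificate, then exchange encrypted data) rather than production code:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>

    int main(void) {
        const char *host = "example.com";       /* placeholder host */
        const char *port = "443";

        /* TLS runs on top of an ordinary TCP connection, made first. */
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
            return 1;
        freeaddrinfo(res);

        /* Create a TLS context that verifies the server's certificate chain
           against the system's trusted certificate authorities. */
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
        SSL_CTX_set_default_verify_paths(ctx);

        SSL *ssl = SSL_new(ctx);
        SSL_set1_host(ssl, host);               /* certificate must name this host */
        SSL_set_fd(ssl, fd);
        if (SSL_connect(ssl) != 1) {            /* handshake: server authentication */
            ERR_print_errors_fp(stderr);
            return 1;
        }

        /* Anything written here is encrypted in transit, and tampering is
           detected on receipt (message integrity). */
        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        SSL_write(ssl, req, (int)strlen(req));

        char buf[256];
        int n = SSL_read(ssl, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }

        SSL_shutdown(ssl);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        close(fd);
        return 0;
    }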
Researchers in Switzerland have applied the principles behind cellular communication to mammalian cells. By reprogramming the cells with a specialized series of genes and proteins that allow for two-way communication, researchers have crafted cells that can talk to one another, sending messages via chemical signals rather than electronic transmission. The hope is that this two-way communication system can be harnessed to fight cancer, overriding orders sent by tumors with preprogrammed messages sent from other cells.
The research, done at ETH-Zurich, builds on similar concepts already tested in yeast cells, but this is the first time this sort of reprogramming has been performed in much more complicated mammalian cells. By activating genes that produce certain proteins under a given condition, scientists more or less turn the cells in question into simple logic devices that understand “If you receive a high concentration of this molecule, take this action.” They then linked the cells together so they could send these chemical signals directly to one another, much the way cell phones transmit information electronically between two designated points. One cell sends the information, the other receives it, acts on it, and responds, closing the circuit and ending the cellular “call.”
Researchers are hopeful that the work will one day result in a treatment for cancer, where cells in communication with one another can hijack orders sent by tumor cells. By instead sending messages that prevent cells from creating new blood vessels, communicating cells could effectively starve out tumors, interfering with their development or even destroying them altogether.
“Communication is extremely important in controlling blood vessels, and we hope to be able to use synthetic ‘cell phones’ to correct or even cure disease-related cell communication systems precisely in the future with a ‘therapeutic call’,” says study lead author Martin Fussenegger.
Radium, discovered in 1898 by Marie and Pierre Curie, is luminescent, giving rise to its first commercial application as a luminous paint on clock faces, watches, aircraft switches and instrument dials so that they would glow in the dark. At the time of its discovery, scientists were unaware of the danger Radium posed and carried vials of it in their pockets and handled it freely without precaution. Treated as calcium by the body, Radium is deposited in the bones where the radioactivity degrades marrow and mutates bone cells. Marie Curie’s death from aplastic anemia in 1934 is blamed on improper handling of radium and lack of ventilation to prevent the accumulation of radon, its decay product gas, also radioactive. Prior to the discovery of these serious adverse health effects Radium was used as an additive to products like toothpaste and hair creams as well as in foods and health “cures.”
My interpretation of Radium was inspired by the story of the "Radium Girls" who, in the early 1920s, shaped the points on their brushes with their lips when painting clock and watch faces. Within two years, they had all died of bone cancer. An editorial cartoon from a Hearst newspaper of the period depicts young girls dipping their brushes in dishes of Radium offered by skeletons. The red rhinestone lips, radiation symbol and numerous clock faces stitched with "glow in the dark" thread inform the sad story.
By Jonathan Fildes
Science and technology reporter, BBC News
Fragile particles rarely seen in our Universe have been merged with ordinary electrons to make a new form of matter.
Di-positronium, as the new molecule is known, was predicted to exist in 1946 but has remained elusive to science.
Now, a US team has created thousands of the molecules by merging electrons with their antimatter equivalent: positrons.
The discovery, reported in the journal Nature, is a key step in the creation of ultra-powerful lasers known as gamma-ray annihilation lasers.
"The difference in the power available from a gamma-ray laser compared to a normal laser is the same as the difference between a nuclear explosion and a chemical explosion," said Dr David Cassidy of the University of California, Riverside, and one of the authors of the paper.
"It would have an incredibly high power density."
As a result, there is a huge interest in the technology from the military as well as energy researchers who believe the lasers could be used to kick-start nuclear fusion in a reactor.
Di-positronium was first predicted to exist by theoretical physicist John Wheeler and its component "atoms" - positronium - were first isolated in 1951.
These short-lived, hydrogen-like atoms consist of an electron and a positron, a positively charged antiparticle.
Antiparticles are the mirror image of ordinary particles.
There is an antiparticle for each type of particle in the Universe. For example, a positively charged proton has a corresponding negatively charged antiproton.
Conventional thinking states that both antimatter and matter should have been created in equal quantities at the birth of the Universe.
The dominance of matter in our world is one of science's most enduring mysteries.
Antimatter only makes fleeting appearances in our Universe when high-energy particle collisions take place, such as when cosmic rays impact the Earth's atmosphere. Antiparticles are also made in the lab in particle accelerators such as Europe's nuclear research facility, Cern.
These appearances are always short lived because antiparticles are destroyed when they collide with normal matter. The meeting leaves a trace, often as high energy x-rays or gamma-rays.
These emissions are used today in PET (positron emission tomography) scanners to study activity in the brain.
The transient nature of antiparticles has made creating and studying di-positronium problematic.
"We've known about this molecule; we're not surprised that it exists but it's taken us more than 50 years to create it in the lab," said Dr Cassidy.
To make the molecule, Dr Cassidy and his team used a specially designed trap to store millions of positrons.
A burst of 20 million were then focused and blasted at a porous silica "sponge".
"It's like having a trickle of water filling up a bath and then you empty it out and you get a big flush," said Dr Cassidy.
As the positrons rushed into the voids they were able to capture electrons to form atoms. Where atoms met, they formed molecules.
"All we are really doing is implanting lot of positrons into the smallest spot we can, in the shortest time, and hoping that some of them can see each other," said Dr Cassidy.
By measuring the gamma-rays that signalled their annihilation, the team estimated that up to 100,000 of the molecules formed, albeit for just a quarter of a nanosecond (billionth of a second).
Dr Cassidy believes that increasing the density of the positronium in the silica would create an exotic state of matter known as a Bose-Einstein condensate (BEC).
BECs are usually produced by supercooling atoms so that they merge and begin to behave like one giant atom.
They have been used in many experiments such as the 2003 Harvard study in which scientists were able to trap light.
"At even higher densities, one might expect the material to become a regular, crystalline solid," wrote Professor Clifford Surko, of the University of Californian, San Diego, in an accompanying article.
Taking it one step further, scientists could use the spontaneous annihilation of the BEC, and the subsequent outburst of gamma-rays, to make a powerful laser.
"A gamma-ray laser is the kind of thing that if it existed people would find new uses for it everyday," said Dr Cassidy.
He highlighted an experiment at the National Ignition Facility (NIF) in the US where scientists envisage using 192 lasers to heat a fuel target to try to kick-start nuclear fusion.
"Imagine doing that but you no longer need hundreds of lasers," he said. |
Humans actually have four nostrils — two exterior nostrils, or nares, and two internal nostrils, called choanae, which are part of the posterior nasal aperture. Choanae connect the nose to the throat and help with breathing. Rarely, humans or animals can be born with choanal atresia, or blocked choanae, and require surgery to be able to breathe properly.
More facts about nostrils and choanae:
- Choanae each contain about 1,000 nasal hairs.
- Fish don't have choanae, but they do have two pairs of exterior nostrils — one set near the jaw, and one set near the eyes.
- Several species of aquatic birds, such as pelicans, have very narrow, slit-like nostrils. This keeps them from getting water up their noses when they dive.
More Info: www.medicinenet.com
In an evolutionary novelty, a flightless prehistoric bird found only in Jamaica used its weighty wing bones to clobber rivals during territorial disputes. Researchers examined several partial skeletons of Xenicibis xympithecus, an extinct wading bird about the size of a large chicken that lived some 10,000 years ago. The bone at the tip of the birds' wings—the "hand" bone—was so thick and curved, it appeared deformed. Xenicibis used its hefty hand bones for battle, swinging them like clubs, the researchers posit. Indeed, two of the fossils—a hand bone and upper arm bone—showed wear and tear consistent with fighting. Other birds use their wings as weapons, too, but none wield their hand bones like clubs. The most likely targets of these powerful swings, the researchers report online today in the Proceedings of the Royal Society B, were other Xenicibis, although the bird may have also used its clublike wings to protect its eggs and young from predators.
Discuss equality and diversity in early years practice
The purpose of this essay is to give the reader an insight into equality and diversity in early years practice. There will be some information on the history of discrimination; Acts and legal documents will be examined, including the EYFS, Sure Start, Green Papers, Ofsted and Government websites. The essay will also consider the outcomes for different groups that have been discriminated against, how effective the policies have been, and how they have affected practice.
Groups of people who might be discriminated against include, but are not limited to, homeless people and travellers, females, homosexuals, disabled people, people of a different race, and people of different religions and beliefs; historically, all were given few legal rights.
Equality is the state of being equal (Pearson, 1999) but is not to be confused with uniformity and does not mean to be the same. Equality is not about making everyone the same; difference (diversity) is good.
Some history of discrimination against the groups mentioned above includes the following: women were considered less responsible, less able and less important than their male counterparts until the early twentieth century; improvements were seen with women gaining the vote, but social and legal inequalities changed only slowly in the second half of that century (Lindon, 2006). Homosexual acts between men were illegal until decriminalisation began in the second half of the twentieth century, and lesbianism was for a long time not even acknowledged in law (Stonewall, no date). The Disability Discrimination Act 1995 came into force and has been amended several times since. The Act placed significant new duties on employers and gave rights to employees, although small business employers were initially exempt, as were certain occupations, for example the police, prison officers and barristers; more recent amendments have removed many of these exclusions (Great Britain. Legislation, 1995). The Race Relations Act 1976 protects all racial groups regardless of their colour, ethnic or national origins, nationality...
Esophageal Cancer Health Center
Esophageal cancer begins in cells in the inner layer of the esophagus. Over time, the cancer may invade more deeply into the esophagus and nearby tissues.
Cancer cells can spread by breaking away from the original tumor. They may enter blood vessels or lymph vessels, which branch into all the tissues of the body. The cancer cells may attach to other tissues and grow to form new tumors that may damage those tissues. The spread of cancer cells is called metastasis. See the Staging section for information about esophageal cancer that has spread.
Growths in the wall of the esophagus can be benign (not cancer) or malignant (cancer). The smooth inner wall may have an abnormal rough area, an area of tiny bumps, or a tumor. Benign growths are not as harmful as malignant growths.
Benign growths:
- are rarely a threat to life
- can be removed and probably won't grow back
- don't invade the tissues around them
- don't spread to other parts of the body
Malignant growths:
- may be a threat to life
- sometimes can be removed but can grow back
- can invade and damage nearby tissues and organs
- can spread to other parts of the body
Making Tracks (FBCEC)
Topic: Laboratory and Hands-on Activities - Arts and Crafts; Wildlife - Mammals
This popular make-and-take activity combines an art project with learning about the tracks of common Ozark mammals. Participants will make track molds in sand and then cast them with plaster of Paris.
Grade Level: K - 12
Recommended Setting: Indoor or outdoor classroom
Location: Fred Berry Conservation Education Center, Yellville, AR (Education Program Coordinator, 870-449-3484)
Duration: 45 minutes to 1 hour
Suggested Number of Participants: Up to 24
Special Requirements: Area suitable for working with sand, water and plaster of Paris
Objectives:
- Learn facts about Ozark mammals.
- Learn to recognize Ozark mammal tracks.
- Create a take-home project.
Materials:
Pre-marked cup for water to be added to bag of plaster
Rubber track replicas
1 take-out sandwich box for each participant
Track guides, mammal cards, mammal skins and posters (optional)
Water in pitchers and spray bottles
1 Ziploc bag with pre-measured plaster of Paris for each participant
Mammals are warm-blooded, fur-bearing animals. They give live birth to their young and provide milk to them through mammary glands. Mammal watching can be fun, but they are sometimes elusive. Developing tracking skills can help locate mammals for hunting, photography or simply watching.
- Display five to 10 numbered track replicas. Give participants a list of mammals to see how many they can match with the track.
- Tracks can indicate that animals have been in an area. Discuss how tracks can reveal which animals have been in an area and can help estimate populations or learn about an animal’s habits or activities.
- Instruct participants to fill the sandwich box about half full of sand.
- Sand should be “sand castle damp”. If it is not damp enough, participants may use a spray bottle to moisten the sand. NOT TOO WET!
- When sand is damp, participants should select a track replica and press it into the sand to create the impression of a mammal track on the sand surface. Carefully remove the rubber track replica. If the track doesn’t look complete or has smeared, smooth the sand and try again.
- Next, an adult may help each participant add the proper amount of water to the Ziploc bag of plaster. (Use cup with mark–approximately one part water to two parts plaster.) After the water is added to the plaster, make sure the zip bag is securely closed and gently knead the mixture until the plaster is completely moistened. It should be the consistency of pancake batter. If the mixture is too thick, add a small amount of water. If it is too thin, add a small amount of plaster from an extra bag.
- Promptly open one corner of the zip bag and pour plaster into the impression of the track. Completely cover the track with plaster and lightly smooth the plaster if there are lumps.
- Close the sandwich container and label with the participant’s name and the name of the mammal track that was made.
- Instruct participants to wait several hours or overnight (depending upon weather conditions) before removing the plaster cast from the box.
- After removing the plaster cast from the sand, an old toothbrush can lightly remove excess sand. It is not necessary to remove the sand if a rough surface is preferred.
- After completing the activity, participants may like to shuffle the track replicas and match them with the animal names.
If time allows, take participants on a short walk to identify animal tracks.
- Name at least two Arkansas mammals that have webbed hind tracks.
- Describe the unique characteristics of opossum tracks.
- Compare and contrast bobcat tracks to coyote tracks.
Mammal – any of a class of higher vertebrates, including man, that produce milk for their young, have fur or hair, are warm-blooded and, with the exception of the egg-laying monotremes, bear young alive
Track – a footprint of wildlife
Heat stroke (also known as heatstroke, sun stroke or sunstroke) is a severe heat illness, defined as hyperthermia with a body temperature greater than 40.6 °C (105.1 °F) due to environmental heat exposure with lack of thermoregulation. Heat stroke can occur as a progression of milder heat-related sicknesses such as heat cramps and heat exhaustion. It can cause damage to internal organs, including the brain. Although heat stroke mainly affects people over 60 years of age, it can also affect healthy young people.
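As a quick check of the temperature thresholds quoted in this article, Fahrenheit and Celsius convert as F = C × 9/5 + 32. A minimal C sketch (the function names are ours; the values come from the text):

    #include <stdio.h>

    static double c_to_f(double c) { return c * 9.0 / 5.0 + 32.0; }
    static double f_to_c(double f) { return (f - 32.0) * 5.0 / 9.0; }

    int main(void) {
        printf("Heat stroke threshold: 40.6 C = %.1f F\n", c_to_f(40.6));  /* ~105.1 F */
        printf("First aid cooling target: 101-102 F = %.1f-%.1f C\n",
               f_to_c(101.0), f_to_c(102.0));                              /* ~38.3-38.9 C */
        return 0;
    }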
Heat stroke results from prolonged exposure to high temperatures, often in combination with dehydration, which leads to failure of the body's temperature control system. The common symptoms of heat stroke include vomiting, nausea, seizures, disorientation and convulsions, and at times unconsciousness, which may lead to a coma. If you come across any person suffering from heat stroke, immediately provide first aid until the victim is taken to the nearest hospital.
The most telling sign is a core body temperature above 105 degrees Fahrenheit, but fainting may be the first symptom. Other symptoms include:
- Agonizing headache
- Red, hot and dry skin
- Shallow breathing
- Behavioural changes such as staggering
- Muscle weakness
- Light-headedness and dizziness
- Rapid heartbeat which may be too weak or strong
- Nausea followed by vomiting
Risk Factors for Heat Stroke
Heat stroke is most likely to affect older people over 60 years of age who live in homes lacking air conditioning or good air flow. Other high-risk groups include people of any age who do not drink enough water.
The risk of heat-related sickness increases as the heat index climbs beyond 32 degrees Celsius. Be sure to check the weather report, and remember not to expose yourself to direct sunlight when the sun is hot overhead.
Those who live in urban areas are especially prone to heat stroke during a prolonged heat wave and should be extra cautious.
Age - Infants and children up to 5 years of age and adults over 60 are particularly vulnerable, as they adjust to heat more slowly than other people.
Medications - These include diet pills, antihistamines, anticonvulsants, beta-blockers and vasoconstrictors. Abuse of drugs such as cocaine and methamphetamine also increases the risk of heat stroke.
FIRST AID FOR HEAT STROKE
When a person shows the symptoms of heat stroke, immediate medical help is required, and any delay in seeking it can be fatal. So, before starting any first aid, call an ambulance to transport the person to a hospital.
Do not hesitate to begin first aid in the meantime by cooling the body temperature to 101-102 degrees Fahrenheit or below. Try the following methods for cooling the body:
- Wet the skin with water
- Fan air over the patient
- Apply ice packs to the armpits, neck and back; these areas are rich with blood vessels close to the skin, so cooling them may reduce the body temperature
- Immerse the patient in a shower or tub of water
When the heat index is high, try to stay in a cool environment. If you have to go outdoors, follow the steps below to prevent heat stroke:
- Use sunscreen or sun protection cream
- Wear light-coloured, loose-fitting clothes, and always wear a hat or carry an umbrella
- Drink lots of extra fluids: at least six glasses of water, vegetable juice or fruit juice. Heat-related sickness can also result from salt depletion, so an electrolyte-rich drink is recommended in heat and humidity.
- Take extra precautions when exercising or working outdoors. It is generally recommended to drink two glasses of fluid two hours before exercise and another glass just before exercise. During exercise, consume one glass of water every 20 minutes, even if you do not feel thirsty.
- Reschedule outdoor activity. If possible, shift outdoor work to the coolest time of the day, either early morning or after sunset.
setbuf, setvbuf - assign buffering to a stream
#include <stdio.h>

void setbuf(FILE *stream, char *buf);
int setvbuf(FILE *stream, char *buf, int type, size_t size);
The setbuf() function may be used after the stream pointed to by stream (see Intro(3)) is opened but before it is read or written. It causes the array pointed to by buf to be used instead of an automatically allocated buffer. If buf is the null pointer, input/output will be completely unbuffered. The constant BUFSIZ, defined in the <stdio.h> header, indicates the size of the array pointed to by buf.
The setvbuf() function may be used after a stream is opened but before it is read or written. The type argument determines how stream will be buffered. Legal values for type (defined in <stdio.h>) are:
_IOFBF
Input/output to be fully buffered.
_IOLBF
Output to be line buffered; the buffer will be flushed when a NEWLINE is written, the buffer is full, or input is requested.
_IONBF
Input/output to be completely unbuffered.
If buf is not the null pointer, the array it points to will be used for buffering, instead of an automatically allocated buffer. The size argument specifies the size of the buffer to be used. If input/output is unbuffered, buf and size are ignored.
For a further discussion of buffering, see stdio(3C).
If an illegal value for type is provided, setvbuf() returns a non-zero value. Otherwise, it returns 0.
A common source of error is allocating buffer space as an “automatic” variable in a code block, and then failing to close the stream in the same block.
When using setbuf(), buf should always be sized using BUFSIZ. If the array pointed to by buf is larger than BUFSIZ, a portion of buf will not be used. If buf is smaller than BUFSIZ, other memory may be unexpectedly overwritten.
Parts of buf will be used for internal bookkeeping of the stream and, therefore, buf will contain less than size bytes when full. It is recommended that stdio(3C) be used to handle buffer allocation when using setvbuf().
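The following is not part of the original manual page; it is a minimal sketch of typical setvbuf() use with a caller-supplied buffer (the file name is a placeholder). The buffer is declared static so it cannot go out of scope while the stream is still open, the pitfall described above:

    #include <stdio.h>

    int main(void) {
        static char buf[BUFSIZ];            /* static, so it outlives any code block */
        FILE *fp = fopen("data.txt", "w");  /* placeholder file name */
        if (fp == NULL)
            return 1;

        /* Must be called after the stream is opened but before any read or write. */
        if (setvbuf(fp, buf, _IOFBF, sizeof buf) != 0)
            return 1;

        /* Passing a null buffer instead lets the library allocate the buffer,
           which sidesteps the scope and sizing pitfalls described above:
           setvbuf(fp, NULL, _IOFBF, BUFSIZ);                                  */

        fputs("hello, buffered world\n", fp);
        fclose(fp);                         /* flushes and releases the stream */
        return 0;
    }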
See attributes(5) for descriptions of the attributes of these functions.
Marine mammals are a diverse group of 120 species of mammal that are primarily ocean-dwelling or depend on the ocean for food. They include the cetaceans (whales, dolphins, and porpoises), the sirenians (manatees and dugong), the pinnipeds (true seals, eared seals and walrus), and several otters (the sea otter and marine otter). The polar bear, while not aquatic, is also usually considered a marine mammal because it lives on sea ice for most or all of the year.
Marine mammals evolved from land dwelling ancestors and share several adaptive features for life at sea such as generally large size, hydrodynamic body shapes, modified appendages and various thermoregulatory adaptations. Whales are the largest mammals in the world. Different species are, however, adapted to marine life to varying degrees. The most fully adapted are the cetaceans and the sirenians, which cannot live on land.
Despite the fact that marine mammals are highly recognizable charismatic megafauna, many populations are vulnerable or endangered due to a history of commercial use for blubber, meat, ivory and fur. Most species are currently protected from commercial use.
There are some 120 extant species of marine mammals, generally sub-divided into the five groups bold-faced below. Each group descended from a different land-based ancestor. The morphological similarities between these diverse groups are a result of convergent and parallel evolution. For example, although whales and seals have some similarities in shape, whales are more closely related to deer than they are to seals.
- Order Sirenia: Sirenians, belonging to Afrotheria, a group that includes elephants and hyraxes
- family Trichechidae: manatees (3 species, however, only one is actually a marine mammal)
- family Dugongidae: dugong (1 species)
- Order Cetacea: Cetaceans, belonging to Cetartiodactyla, a group that includes hippopotamuses, deer, and pigs.
- Order Carnivora:
- superfamily Pinnipedia, belonging to Caniformia descended from a bear-like ancestor.
- family Mustelidae, belonging to Caniformia and most closely related to other otters and weasels
- family Ursidae, belonging to Caniformia (the polar bear)
- Order Desmostylia
Several groups of marine mammals existed in the past that are not alive today. In addition to the ancestors of the modern day whales, seals, and manatees, there existed desmostylians, cousins of the manatees, and Kolponomos, a genus of clam-eating marine bears not related to the modern polar bear.
Since mammals originally evolved on land, their spines are optimized for running, allowing for up-and-down but only little sideways motion. Therefore, marine mammals typically swim by moving their spine up and down. By contrast, fish normally swim by moving their spine sideways. For this reason, fish mostly have vertical caudal (tail) fins, while marine mammals have horizontal caudal fins.
Some of the primary differences between marine mammals and other marine life are:
- Marine mammals breathe air, while most other marine animals extract oxygen from water.
- Marine mammals have hair. Cetaceans have little or no hair, usually a very few bristles retained around the head or mouth. All members of the Carnivora have a coat of fur or hair, but it is far thicker and more important for thermoregulation in sea otters and polar bears than in seals or sea lions. Thick layers of fur contribute to drag while swimming, and slow down a swimming mammal, giving it a disadvantage in speed.
- Marine mammals have thick layers of blubber used to insulate their bodies and prevent heat loss. Sea otters and polar bears are exceptions, relying more on fur and behavior to stave off hypothermia.
- Marine mammals give birth. Most marine mammals give birth to one calf or pup at a time.
- Marine mammals feed off milk as young. Maternal care is extremely important to the survival of offspring that need to develop a thick insulating layer of blubber. The milk from the mammary glands of marine mammals often exceeds 40-50% fat content to support the development of blubber.
- Marine mammals maintain a high internal body temperature. Unlike most other marine life, marine mammals carefully maintain a core temperature much higher than their environment. Blubber, thick coats of fur, bubbles of air between skin and water, countercurrent exchange, and behaviors such as hauling out, are all adaptations that aid marine mammals in retention of body heat.
The polar bear spends a large portion of its time in a marine environment, albeit a frozen one. When it does swim in the open sea it is extremely proficient and has been shown to cover 74 km in a day. For these reasons, some scientists regard it as a marine mammal.
- A 2005 Report by the National Academy of Sciences entitled Marine Mammal Populations and Ocean Noise, is available for free online reading and research
- University of Washington Libraries Digital Collections -- Freshwater and Marine Image Bank -- Aquatic Mammals An ongoing digital collection of images related to marine and aquatic mammals.
This page uses Creative Commons Licensed content from Wikipedia (view authors).
Shared Reading Tips
Tips for Parents/Guardians
· Reading homework is the most important part of your child’s homework.
· Find a quiet comfortable corner with no distractions-turn off mobile.
· There are different types of shared reading that you can do with your child.
· If a child is reading without expression try asking them to read the same paragraph again but with feeling. Reading and acting out lines is a great way to build fluency. This can be great fun if you really exaggerate and use different accents.
In school, your child will be exposed to the following strategies. You could try out these ideas at home. Questioning your child about the book is a very important part of the reading process. It shows that they understand what they have just read. Some form of questioning should be used with every book your child has read.
Inside the cover of Oxford Reading Tree Scheme books there are also questions to ask.
Questions to ask before, during and after reading:
· Start with a conversation about the book. Discuss the book, the title, the pictures.
· Predicting: What do you think the book is about? What do you think will happen next? Why do you say that?
During Reading: Making Connections:
· Can you make a connection between this story and something that happened to you? Does this remind you of another book/story/film?
· Discuss the sights, the sounds, smells, taste and touch of the images in their minds created by the story.
· You can help your child make sense of the story by talking about what is happening, explaining and asking questions.
· Your child can start to ask questions about the story and what is happening. Questions such as: I wonder why..? Why do you think…? Who? What? When? What did that mean?
· Using the clues in the story, get the child to make a judgement or deduction, reading between the lines.
· Ask your child to retell the story in their own words in the correct order, with a beginning, middle and end in the retelling.
These activities can be used throughout the reading process in any order where appropriate.
Definition - What does Lake Effect mean?
A lake effect is a weather change triggered by lakes or other similar large bodies of water that causes temperature changes in wine growing areas. Water has a high specific heat capacity, so it heats up and cools down more slowly than land. This property of water causes cool breezes to blow along the shores of the lake during the summer through thermal convection. During fall, the warm air above the water body flows onshore, creating a warmer environment and extending the grape growing season.
WineFrog explains Lake Effect
The lake effect also protects the roots of the vine, helping insulate them from freezing temperatures. The lake effect is one of the many vineyard-level factors that affect the quality of wine. Climate and topography are among the most important factors that alter the character of any wine. The extended growing season triggered by the lake effect helps further ripen the grapes.
The lake effect and lake effect snow are commonly associated. When the water is still warm even after the harvest is over, the cold dry air passing over the water body picks up moisture and is known to help create lake effect snow. As a result, the vines of the grapes are insulated and are protected from freezing temperatures, due to the early snowfalls.
Soil density measures the mass of a soil per unit volume. Some types of soil are commonly found in one geographic area and not another, and some trees, flowers, plants and gardens do better in one type of soil than another. If you know what type of soil you have in your yard, you'll be able to tailor your fertiliser or soil amendment applications to improve the soil for optimal tree, flower and vegetable growth.
According to the Environmental Science Division of the Argonne National Laboratory, the density of a soil is affected by its wetness, or the amount of water held among soil particles. Soil density may be determined by measuring the mass of liquids, solids and gases such as air in a soil sample. You can also test for minerals and other organic matter in soil samples. Such tests can be performed by sending a soil sample to a local agriculture centre with measuring equipment, such as an agricultural university, or by submitting a soil sample to your local nursery, which may be able to test onsite or send your sample to a testing facility for results.
Soil types are defined as sandy, sandy loam, loam, silt loam, clay loam and clay soils, according to the Argonne National Laboratory. The density of any particular soil sample measures mass per unit volume of the provided sample. The density of your soil sample will be determined by the components found in that sample. For example, minerals found in soil samples may range from 2.60 to 2.75 grams per cubic centimetre (g/cm^3), according to The Globe Program (Global Learning and Observation to Benefit the Environment), an interactive science teaching method for educational science programs in schools.
According to SoilWeb of the University of British Columbia, the mineral density of soils may range between 2.6 and 2.7g/cm^3 though soils high in heavier metals like iron oxide may have a density of between 5.2 and 5.3g/cm^3.
A soil that contains a large amount of organic material may have a density of 1.3 g/cm^3. A soil low in clay and high in sand is considered a loamy sand soil, while a sample that contains approximately 20 per cent each of clay and sand and 60 per cent silt is considered a silt loam soil, according to the University of Wisconsin. Silty clay may be defined as a soil sample containing approximately 45 per cent each of silt and clay and 10 per cent sand. A rough classifier along these lines is sketched below.
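The C sketch below illustrates how such texture classes partition the sand, silt and clay percentages. The cutoffs are simplified from the approximate figures quoted above and are our own assumptions; the real USDA texture triangle has twelve classes with more detailed boundaries:

    #include <stdio.h>

    /* Very rough texture classifier; percentages should sum to about 100. */
    static const char *texture_class(double sand, double silt, double clay) {
        if (sand >= 70.0 && clay < 15.0)  return "loamy sand";
        if (silt >= 50.0 && clay < 27.0)  return "silt loam";
        if (clay >= 40.0 && silt >= 40.0) return "silty clay";
        return "other (consult the full texture triangle)";
    }

    int main(void) {
        printf("%s\n", texture_class(20.0, 60.0, 20.0)); /* silt loam */
        printf("%s\n", texture_class(10.0, 45.0, 45.0)); /* silty clay */
        printf("%s\n", texture_class(80.0, 12.0,  8.0)); /* loamy sand */
        return 0;
    }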
Environmental Issues: Global Warming
All Documents in Global Warming Tagged water
- Climate Change and Water Resource Management
Adaptation Strategies for Protecting People and the Environment
- From urban and agricultural water supplies to flood management and aquatic ecosystem protection, global warming is affecting all aspects of water resource management in the United States. Rising temperatures, loss of snowpack, escalating size and frequency of flood events, and rising sea levels are just some of the impacts of climate change that have broad implications for the management of water resources. Reducing the global warming pollution that causes climate change is a critical step we must take, but water resource managers and elected officials must act now to prepare for the impacts of the warming that have already occurred or are unavoidable.
- Thirsty for Answers
Preparing for the Water-related Impacts of Climate Change in American Cities
- Cities across the United States should anticipate significant water-related vulnerabilities based on current carbon emission trends because of climate change, ranging from water shortages to more intense storms and floods to sea level rise.
- Rising Tide of Illness: How Global Warming Could Increase the Threat of Waterborne Diseases
- Although there is little public discussion of the problem, disease outbreaks caused by contaminated water occur regularly. Researchers estimate that, including unreported cases, between 4 and 33 million waterborne gastrointestinal illnesses occur each year in the United States. Global warming is projected to increase the risk of more frequent and more widespread outbreaks of waterborne illnesses, due to higher temperatures and more severe weather events. To help prevent increased occurrence of water-related illnesses, the CDC should improve surveillance of waterborne disease outbreaks, the Environmental Protection Agency (EPA) should improve water quality regulations, and Congress should act to limit emissions of global warming pollutants. We need to act now to protect public health today while preparing for the impacts of climate change.
Documents Tagged water in All Sections
- Advancing America’s Clean Water Legacy
The Administration is strengthening clean water protections.
- The Administration should continue to move forward to strengthen protection for the waters that so many communities depend upon for drinking, swimming, fishing and economic activity.
- Waste Less, Pollute Less: Using Urban Water Conservation to Advance Clean Water Act Compliance
- In many parts of the United States, cities and suburbs -- and the wastewater and stormwater utilities that serve them -- are among the largest sources of water pollution. They need hundreds of billions of dollars to repair, maintain, and improve their infrastructure to comply with Clean Water Act standards that protect public health and the environment.
- Connecting Water, Sanitation, and Hygiene with Fresh Water Conservation and Climate Resilience
The Need to Facilitate Integration in Development Assistance
- Integrated solutions can help end extreme poverty and ensure long-term access to basic human needs such as food, clean water, and sanitation facilities. Currently, the development sector all too often addresses WASH, climate resilience, and fresh water conservation as separate issues. Fortunately, though, awareness about the importance of integrated efforts to solve these challenges in development projects is increasing.
- Proceed with Caution: California’s Drought and Seawater Desalination
- Some observers wonder whether the long-term answer to California’s drought lies in the ocean through the promotion of seawater desalination. This paper offers an overview of the science and policy related to seawater desalination and demonstrates why this option is generally the least promising option for drought relief.
For additional policy documents, see the NRDC Document Bank.
How do free electrons originate?
January 20th, 2010 in Physics / Plasma Physics
Scientists at Max Planck Institute of Plasma Physics (IPP) in Garching and Greifswald and Fritz Haber Institute in Berlin, Germany, have discovered a new way in which high-energy radiation in water can release slow electrons. Their results have now been published in the renowned journal, Nature Physics. Free electrons play a major role in chemical processes. In particular, they might be responsible for causing radiation damage in organic tissue.
When ionising radiation impinges on matter, large quantities of slow electrons are released. It was previously assumed that these electrons are ejected by the high-energy radiation from the electron sheath of the particle hit - say, a water molecule. In their experiment the Berlin scientists bombarded water clusters in the form of tiny ice pellets with soft X-radiation from the BESSY storage ring for synchrotron radiation. As expected, they detected the slow electrons already known. In addition, however, they discovered a new process: Two adjacent water molecules work together and thus enhance the yield of slow electrons.
First the energy of the X-radiation is absorbed in the material: A water molecule is then ionised and releases an electron. But this electron does not absorb all of the energy of the impinging X-ray photon. A residue remains stored in the ion left behind and causes another electron to be released just very few femtoseconds later. (A femtosecond is a millionth of a billionth of a second. For example, the electrons in a chemical process take a few femtoseconds to get re-arranged.) This process is known as autoionisation, i. e. the molecule ionises itself.
The Max Planck scientists have now discovered that two adjacent water molecules can work together in such an autoionisation process. Working in conjunction, they achieve a state that is more favourable energy-wise when each of them releases an electron. What happens is that the molecular ion produced first transfers its excess energy to a second molecule, which then releases an electron of its own. This energy transfer even functions through empty space, no chemical bonding of the two molecules being necessary.
This discovery did not really come as a surprise. More than ten years ago theoreticians at the University of Heidelberg around Lorenz Cederbaum had predicted this "Intermolecular Coulombic Decay". It had already been observed in frozen rare gases. Identifying it beyond doubt now in water called for a sophisticated experimentation technique by which the two electrons produced are identified as a pair.
By demonstrating that the process is possible in water - thus presumably in organic tissue as well - the IPP scientists might now be able to help clarify the cause of radiation damage. “Slow electrons released in an organism may have fatal consequences for biologically relevant molecules,” states Uwe Hergenhahn from the Berlin IPP group at BESSY: “It was just a few years ago that it was found that deposition of such electrons can cut organic molecules in two like a pair of scissors. Very little is known as yet about how this and other processes at the molecular level give rise to radiation damage. What is clear, though, is that this constitutes an important field of research.” Intermolecular Coulomb decay is also important for other chemical processes: The paired action of a water molecule and a substance dissolved in the water could clarify how dissolving processes function at the molecular level.
The same issue of Nature Physics also features a complementary experiment in which a research group at the University of Frankfurt observed Intermolecular Coulombic Decay in the smallest conceivable water cluster, comprising just two water molecules.
More information: Nature Physics, online publication, 10 January 2010, dx.doi.org/10.1038/nphys1500
Provided by Max-Planck-Gesellschaft
"How do free electrons originate?." January 20th, 2010. http://phys.org/news183205589.html |
One of the world's most important scientific papers was published in the journal Nature on April 25, 1953, 60 years ago today. The entire paper was just one page! In the short communication, James Watson and Francis Crick not only detailed the definitive structure of DNA (deoxyribonucleic acid) but also proposed the unzipping mechanism by which the molecule could replicate itself. They initially announced they had found "the secret of life" at the Eagle Pub in Cambridge, England.
These two brilliant men figured out the chemical structure of DNA without doing any experiments. They started by carefully digesting and then synthesizing the world's literature. They attended conferences and sought data wherever they could find it. They were ridiculed for building stick models instead of conducting costly and time-consuming lab investigations. The solution to the beautiful double-helix structure of DNA arose via the collaboration of biologist Watson and physicist Crick. Neither of them had an elite understanding of chemistry. They even challenged, and eventually proved wrong, the triple-helix structure proposed by the famous chemist Linus Pauling.
Sixty years later, their discovery and the subsequent research on recombinant DNA and genomic sequencing has transformed our lives. Today, we take for granted our ability to "grow" human insulin in bioreactors and target specific cancers with molecular designer drugs. It is appropriate to pause for just a moment today and thank the two dreamers who via their hard work and intuition discovered the structure of DNA. Let's hope we can find and encourage many more dreamers like Watson and Crick.
References: Watson and Crick, Nature April 1953 |
Beavers, polar bears, geese and moose are among the more common animals living in Canada. Beavers are semi-aquatic mammals that are important in Canada's history. Polar bears are a vulnerable species living in Canada's far north. Approximately 200 mammal species live in Canada.
The Canadian lynx is one of the most elusive animals residing in Canada. Weighing 20 pounds and standing 20 inches tall, the lynx preys upon squirrels, rabbits and grouse within the northern woods of Canada. Whitetail deer can be found living among the lynx, along with caribou and elk.
Many moose live southeast of Canada's northern woods. Moose can weigh in at over 1,000 pounds and sport enormous antlers.
Both grizzly and black bears are present in British Columbia, Canada, with black bears being the more prevalent. Canada's wolf packs live primarily in the Yukon. Wolves hunt in packs and are not afraid of preying on game larger than themselves, such as Canada's bison, which can weigh over a ton.
Some of the smaller mammal species in Canada include raccoons, voles, moles, rabbits, red foxes and bats. Wolverines are ferocious predators that prey upon many of these smaller mammals. Rodents such as pocket gophers and dusky-footed wood rats are also common prey for the wolverine. |
Living is learning. All living beings learn as they grow. Human beings learn more than any other creatures. A newborn human baby behaves like any other animal offspring. But he changes his behavior very quickly and shows a kind of behavior quite different from that of an adult. This difference in behavior is due to learning.
An individual lives in the society. He interacts with the society. He influences his environment and the environment influences him. This is called interaction. The change of behavior is the result of such interaction.
Learning is change. It is called modification of behavior. All changes in knowledge, skills, habits, interests, attitudes and tastes are the product of learning. That is, learning consists of all changes in thinking, feeling and doing in the course of life.
Every living being has its native behavior. The human being similarly has his own native behavior. But this behavior changes according to the needs and conditions of the environment. An insect has its fixed set of behaviors and behaves similarly in all conditions. A dog, by contrast, barks when a stranger comes but plays with its master when he approaches. This difference in its behavior is the result of its adjustment to new situations, its varying behavior under different environments. In other words, adjustment to new conditions is a kind of learning. Those who can learn can adjust easily to new situations.
An individual learns throughout his life. The human child is the most helpless of all creatures, and such helplessness provides the best opportunity for learning. He also remains helpless for a longer period than any other creature, which gives a greater scope for learning.
The world is full of problems. Human beings face problems in their every day life and try to solve them with their knowledge, understanding, reasoning, skills and techniques of adjustment. Human life is thus a continuous process of learning.
Learning is an essential as well as fundamental process of life. It is a process by which the individual acquires various habits, knowledge, skills and attitudes that are necessary to meet the demands of life. The ultimate aim of all learning is to change one's behavior suited to the new situations. The existing behavior may change and new behavior may be formed.
According to Skinner learning is a "process of progressive behavior adaptations". Munn has considered learning as "more or less permanent incremental modification of behavior which results from activity, special training or observations".
Kimble has similarly said, "learning refers to a more or less permanent change in behavior which occurs as a result of practice". Crow and Crow defined learning as "the acquisition of habits, knowledge and attitude".
According to McConnell, learning is "the modification of behavior through experience".
On the whole, learning can be defined as the process of effecting changes in behavior that bring about improvement in our relations with the environment. One of the main aims of education is to effect desired changes in the behavior of children. To acquire vocabulary, memorize a poem, learn the basic skills of arithmetic, or operate a machine are all examples of learning. Education seeks to achieve these learning objectives for the benefit of the individual. |
A single sheet of graphene, comprising an atom-thin lattice of carbon, may seem rather fragile. But engineers at MIT have found that the ultrathin material is exceptionally sturdy, remaining intact under applied pressures of at least 100 bars. That’s equivalent to about 20 times the pressure produced by a typical kitchen faucet.
The key to withstanding such high pressures, the researchers found, is pairing graphene with a thin underlying support substrate that is pocked with tiny holes, or pores. The smaller the substrate’s pores, the more resilient the graphene is under high pressure.
Rohit Karnik, an associate professor in MIT’s Department of Mechanical Engineering, says the team’s results, reported today in the journal Nano Letters, serve as a guideline for designing tough, graphene-based membranes, particularly for applications such as desalination, in which filtration membranes must withstand high-pressure flows to efficiently remove salt from seawater.
“We’re showing here that graphene has the potential to push the boundaries of high-pressure membrane separations,” Karnik says. “If graphene-based membranes could be developed to do desalination at high pressure, then it opens up a lot of interesting possibilities for energy-efficient desalination at high salinities.”
Karnik’s co-authors are lead author and MIT postdoc Luda Wang, former undergraduate student Christopher Williams, former graduate student Michael Boutilier, and postdoc Piran Kidambi.
Today’s existing membranes desalinate water via reverse osmosis, a process by which pressure is applied to one side of a membrane containing saltwater, to push pure water across the membrane while salt and other molecules are prevented from filtering through.
Many commercial membranes desalinate water under applied pressures of about 50 to 80 bars, above which they tend to get compacted or otherwise suffer in performance. If membranes were able to withstand higher pressures, of 100 bars or greater, they would enable more effective desalination of seawater by recovering more fresh water. High-pressure membranes might also be able to purify extremely salty water, such as the leftover brine from desalination that is typically too concentrated for membranes to push pure water through.
“It’s pretty clear that the stress on water sources is not going away any time soon, and desalination forms a major source of fresh water,” Karnik says. “Reverse osmosis is among the most efficient methods of desalination in terms of energy. If membranes could operate at higher pressures, this would allow higher water recovery at high energy efficiency.”
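To make the numbers concrete: the minimum pressure for reverse osmosis is set by the feed's osmotic pressure, which the van't Hoff relation (pi = i x M x R x T) estimates to first order. The short Python sketch below uses assumed illustrative salt concentrations, not figures from the MIT study.

R_L_BAR = 0.08314  # gas constant in L*bar/(mol*K)
T_K = 298.0        # room temperature in kelvin

def osmotic_pressure_bar(molarity, ions_per_formula=2):
    # van't Hoff estimate: pi = i * M * R * T (dissociated NaCl gives i = 2)
    return ions_per_formula * molarity * R_L_BAR * T_K

for label, molarity in [("seawater, ~0.6 M NaCl", 0.6),
                        ("concentrated brine, ~1.2 M NaCl", 1.2)]:
    print(f"{label}: ~{osmotic_pressure_bar(molarity):.0f} bar")

# Applied pressure must exceed osmotic pressure, so a ~60 bar brine leaves
# little headroom under the 50-80 bar limit of conventional membranes.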
Turning the pressure up
Karnik and his colleagues set up experiments to see how far they could push graphene’s pressure tolerance. Previous simulations have predicted that graphene, placed on porous supports, can remain intact under high pressure. However, no direct experimental evidence has supported these predictions until now.
The researchers grew sheets of graphene using a technique called chemical vapor deposition, then placed single layers of graphene on thin sheets of porous polycarbonate. Each sheet was designed with pores of a particular size, ranging from 30 nanometers to 3 microns in diameter.
To gauge graphene’s sturdiness, the researchers concentrated on what they termed “micromembranes” — the areas of graphene that were suspended over the underlying substrate’s pores, similar to a fine wire mesh lying over the holes in Swiss cheese.
The team placed the graphene-polycarbonate membranes in the middle of a chamber, into the top half of which they pumped argon gas, using a pressure regulator to control the gas’ pressure and flow rate. The researchers also measured the gas flow rate in the bottom half of the chamber, reasoning that any increase in the bottom half’s flow rate would indicate that parts of the graphene membrane had failed, or “burst,” from the pressure created in the top half of the chamber.
They found that graphene, placed over pores that were 200 nanometers wide or smaller, withstood pressures of 100 bars — nearly twice that of pressures commonly encountered in desalination. As the size of the underlying pores decreased, the researchers observed an increase in the number of micromembranes that remained intact. Karnik says this pore size is essential to determining graphene’s sturdiness.
“Graphene is like a suspension bridge, and the applied pressure is like people standing on that bridge,” Karnik explains. “If five people can stand on a short bridge, that weight, or pressure, is OK. But if the bridge, made with the same rope, is suspended over a larger distance, it experiences more stress, because a greater number of people are standing on it.”
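Karnik's bridge analogy can be put into rough numbers with a simple force balance: for a membrane suspended over a circular pore, the tension scales as roughly T ~ P x a / 2, with P the applied pressure and a the pore radius. This Python sketch uses assumed illustrative pore sizes, not measurements from the study.

def membrane_tension_n_per_m(pressure_bar, pore_radius_nm):
    # crude pressurized-cap force balance: tension ~ P * a / 2
    pressure_pa = pressure_bar * 1e5
    radius_m = pore_radius_nm * 1e-9
    return pressure_pa * radius_m / 2.0

for radius_nm in (100, 1000):  # a 200 nm pore versus a 2 micron pore
    tension = membrane_tension_n_per_m(100, radius_nm)
    print(f"pore radius {radius_nm} nm at 100 bar -> ~{tension:.1f} N/m")

# Tension grows linearly with pore radius: a tenfold wider pore loads the
# graphene ten times harder, matching the suspension-bridge picture.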
“We show graphene can withstand high pressure,” says lead author Luda Wang. “The other part that remains to be shown on large scale is, can it desalinate?”
In other words, can graphene tolerate high pressures while selectively filtering out water from seawater? As a first step toward answering this question, the group fabricated nanoporous graphene to serve as a very simple graphene filter. The researchers used a technique they had previously developed to etch nanometer-sized pores in sheets of graphene. Then they exposed these sheets to increasing pressures.
In general, they found that wrinkles in the graphene had a lot to do with whether micromembranes burst or not, regardless of the pressure applied. Parts of the porous graphene that lay along wrinkles failed or burst, even at pressures as low as 30 bars, while those that were unwrinkled remained intact at pressures up to 100 bars. And again, the smaller the underlying substrate’s pores, the more likely micromembranes in the porous graphene were to survive, even in wrinkled regions.
“As a whole, this study tells us single-layer graphene has the potential of withstanding extremely high pressures, and that 100 bars is not the limit — it’s comfortable in a sense, as long as the pore sizes on which graphene sits are small enough,” Karnik says. “Our study provides guidelines on how to design graphene membranes and supports for different applications and ranges of pressures.”
This research was supported, in part, by the MIT Energy Initiative and the U.S. Department of Energy. |
Called Mattangs, Medos, and Rebbelibs, these ancient stick charts were made from the midribs of coconut fronds by the master navigators of what’s now known as the Marshall Islands in Micronesia. The intersections of sticks, sometimes with the inclusion of small shells, indicate key locations as well as essential information about the water itself. From The Met:
…these objects were memory aids, created for personal use or to instruct novices, and the significance of each was known only to its maker. The charts were exclusively used on land, prior to a voyage. To carry one at sea would put a navigator’s skill in question.
Marshallese navigation was based largely on the detection and interpretation of the patterns of ocean swells. Much as a stone thrown into a pond produces ripples, islands alter the orientation of the waves that strike them, creating characteristic swell patterns that can be detected and used to guide a vessel to land. It is the presence and intersection of swells and other aquatic phenomena, such as currents, that are primarily marked on the charts. |
Frankenstein (Study Guide) From Glencoe Literature Library. Features an impressive 30-page Frankenstein study guide in PDF format. Full of activities, analysis questions, vocabulary review, literature groups, and even art connections.
It's Alive - Scene from the 1931 Classic
Simonsays teach.com: Frankenstein These printable educators' resources feature discussion questions and activity suggestions designed to stimulate discussion, creativity, and interest that extends beyond the pages of the book into related historical, scientific, or social concerns.
Ideas for Analyzing Frankenstein Offers questions for "Investigation and Analysis" and even suggestions for computer analysis of the text.
Discussion Topics From UCSB Department of English
My Hideous Progeny: Mary Shelley's Frankenstein Summary, a title explanation, character descriptions and information about the genre of Gothic literature as well as text of Frankenstein in a fully annotated HTML format.
(Mock trial) Students stage a mock trial of Victor Frankenstein for negligence, malpractice, and emotional and physical distress.
Frankenstein: Penetrating the Secrets of Nature From the National Library of Medicine, this exhibition looks at the world from which Mary Shelley came, how popular culture has embraced the Frankenstein story, and at how Shelley's creation continues to illuminate the blurred, uncertain boundaries of what we consider "acceptable" science. Contents: The Birth of Frankenstein; Frankenstein: The Modern Prometheus; The Celluloid Monster; Promise and Peril
Tales of the Supernatural From Edsitement, students explore the origins and development of horror and Gothic fiction, investigate how shared imaginative concerns link the members of a literary period, examine the evolution of a literary tradition, and compare works of literature from different eras. |
ALEX Lesson Plans
Subject: Social Studies (11), or Technology Education (9 - 12)
Title: How Hate Changes Society
Description: Government classes usually focus on the workings of the United States Government alone. In this unit of study, students will compare the United States government with that of pre-Nazi Germany. This unit will demonstrate to students how misguided public policy can become when dissent and debate are silenced.
Thinkfinity Interactive Games
Subject: Social Studies
Title: Xpedition Hall
Description: Built around the National Geography Standards, Xpedition Hall is a virtual museum filled with interactive exhibits designed to provoke reflection on how human beings shape and are shaped by the world in which we live. The Hall contains one exhibit for each standard, and groups these exhibits into six galleries corresponding to the standards' six underlying "elements." Students can use an image or map of the Hall to navigate through the exhibits.
Thinkfinity Partner: National Geographic Education
Grade Span: K,1,2,3,4,5,6,7,8,9,10,11,12 |
Analysis (ana="up" or "back", and lyein, "to loose") means a separation; it is the taking apart of that which was united, and corresponds exactly to the Latin form "resolution" (re + solvere). Its opposite is synthesis (syn, "together", and tithenai, "to put", hence, a "putting-together", a "composition"). According to this etymology, analysis, in general, is the process by which anything complex is resolved into simple, or, at least, into less complex parts or elements. This complex may be:
(1) In the case of a concrete object, we must distinguish three degrees of analysis. Sometimes a real separation or isolation is effected. To resolve a chemical compound into its elements, or white light into the elementary colours, to dissect an organism, to take a machine to pieces, is to proceed analytically. But frequently actual isolation is impossible. Thus the factors of a movement or of a psychological process cannot be set apart and studied separately. If the process occurs at all, it must be a complex one. We may, however, reach an analytical result by means of different successive syntheses, i.e. by variations in the grouping of the elements or circumstances. In order to ascertain the individual nature of any determined element, factor, or circumstance, it is maintained in the state of permanency, while the accompanying elements, factors, or circumstances are eliminated or changed; or, on the contrary, it may be eliminated or modified, while the others remain constant. The four methods of induction belong to this form of analysis. It is also in a large measure the method of psychological experiment and of introspective analysis. Finally, it may be impossible to effect any real dissociation of a concrete thing or event, either because it cannot be reached or controlled, or because it is past. Then mental dissociation and abstraction are used. In a complex object the mind considers separately some part or feature which cannot in reality be separated. Analogy and comparison of such cases with similar instances in which dissociation has been effected are of great value, and the results already ascertained are applied to the case under examination. This occurs frequently in physical and psychological sciences; it is also the method used by the historian or the sociologist in the study of events and institutions.
(2) When the complex is an idea, analysis consists in breaking it up into simpler ideas. We are in the abstract order and must remain therein; consequently, we do not take into consideration the extension of an idea, that is, its range of applicability to concrete things, but its intension, or connotation, that is, its ideal contents. To analyze an idea is to single out in it other ideas whose ideal complexity, or whose connotation is not so great. The same must be said of analytical reasoning. The truth of a proposition or of a complex statement is analytically demonstrated by reverting from the proposition itself to higher principles, from the complex statement to a more general truth. And this applies not only to mathematics, when a given problem is solved by showing its necessary connection with a proposition already demonstrated, or with a self-evident axiom, but also to all the sciences in which from the facts, the effects, and the conditioned we infer the law, the cause, and the condition. Principle, law, cause, nature, condition, are less complex than conclusion, fact, effect, action, conditioned, since these are concrete applications and further determinations of the former. A physical law, for instance, is a simplified expression of all the facts which it governs. In one word, therefore, we may characterize analysis as a process of resolution and regression; synthesis, as a process of composition and progression.
The confusion that has existed and still exists in the definition and use of the terms analysis and synthesis is due to the diverse natures of the complexes which have to be analyzed. Moreover, the same object may be analyzed from different points of view and, consequently, with various results. It is especially important to keep in mind the distinction between the connotation and the denotation of an idea. As the two vary in inverse ratio, it is clear that, in an idea, the subtraction of certain connotative elements implies an increase in extension. Hence connotative analysis is necessarily an extensive synthesis, and vice versa. Thus, if my idea of a child is that of "a human being under a certain age", by connotative analysis I may omit the last determination "under a certain age"; what remains is less complex than the idea "child", but applies to a greater number of individuals, namely to all human beings. In order to restrict the extension to fewer individuals, the connotation must be increased, that is, further determinations must be added. In the same manner, a fact, when reduced to a law, either in the physical, the mental, or the historical order, is reduced to something which has a greater extension, since it is assumed to rule all the facts of the same nature, but the law is less complex in connotation, since it does not share the individual characters of the concrete events.
The necessity of analysis comes from the fact that knowledge begins with the perception of the concrete and the individual, and that whatever is concrete is complex. Hence the mind, unable to distinctly grasp the whole reality at once, must divide it, and study the parts separately. Moreover the innate tendency of the mind towards unification and classification leads it to neglect certain aspects, so as to reach more general truths and laws whose range of application is larger. The relative usefulness of analysis and synthesis in the various sciences depends on the nature of the problems to be solved, on the knowledge already at hand, on the mind's attitude, and on the stage of development of the science. Induction is primarily analytic; deduction, primarily synthetic. In proportion as a natural science becomes more systematic, i.e. when more general laws are formulated, the synthetic process is more freely used. Previous analysis then enables one to "compose", or deduce future experience. Where, on the contrary, the law has to be discovered, observation and analysis are dominant, although, even then, synthesis is indispensable for the verification of hypotheses. Some sciences, such as Euclidean geometry, proceed synthetically, from simple notions and axioms to more complex truths. Analysis has the advantage of adhering more strictly to the point under investigation; synthesis is in danger of going astray, since from the same principle many different conclusions may be drawn, and a multitude of real or possible events are governed by the same law. For this same reason, however, synthesis, in certain sciences at least, is likely to prove more fruitful than analysis. It also has the advantage of starting from that which has a natural priority, for the conditioned presupposes the condition. When the result is already known, and the relation between a principle and some one conclusion thus ascertained, synthesis is a great help in teaching others. In synthesis the strictness of logical reasoning is required. Accuracy and exactness in the observation of phenomena, attention to all their details, the power of mental abstraction and generalization are qualities indispensable in the analytic process.
The literature of analysis includes all works on logic and on the methods of the sciences. We give only some few references. DUGALD STEWART, Philosophy of the Human Mind, P. II, iv, § 3; WUNDT, Logik (2d ed., Stuttgart, 1895), II, i; DUHAMEL, Des méthodes dans les sciences de raisonnement (Paris, 1865-73); BAIN, Logic, P. II, Induction (2d ed., London, 1873); ROBERTSON, art. Analysis in Encyclopædia Britannica, 9th ed. — On psychological analysis, see, among others, ROYCE, Outlines of Psychology, iv, §§ 40-47 (New York, 1903).
Ecclesiastical approbation. Nihil Obstat. March 1, 1907. Remy Lafort, S.T.D., Censor. Imprimatur. +John Cardinal Farley, Archbishop of New York. |
Tuesday, 23 October 2012
The Bell Curve in Education
The bell curve in education is used to identify both the students who are doing well in their studies and the students who require improvement in some areas. The curve gets its name from the shape of the graph that results when a teacher or statistician plots a set of test scores: it resembles a bell. "Normal distribution" is the common term in statistics and education for the idealized, symmetrical form of this curve, with no skew to either side. The higher the middle arch, the more students have scored near the average.
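As a toy illustration (not part of the original post), a few lines of Python can simulate roughly normally distributed test scores and print a crude text histogram; the tallest bars cluster around the average, tracing out the bell.

import random

random.seed(1)
scores = [random.gauss(70, 10) for _ in range(1000)]  # mean 70, sd 10

for lo in range(40, 100, 10):  # 10-point score bins
    count = sum(lo <= s < lo + 10 for s in scores)
    print(f"{lo}-{lo + 9}: {'#' * (count // 10)}")

# The bins around the mean of 70 collect the most students, producing
# the tall central arch that gives the bell curve its name.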
Below is a link for further information:
k12academics. (2012). Bell Curve Grading. Last accessed 21/10/2012. http://www.k12academics.com/education-assessment-evaluation/bell-curve-grading |
If the 12th Planet is riding at the mid-point of its long and narrow orbit, during most of its slow motion between the Sun and the Sun's dead twin, then how is it that it can have an effect on the planets and moons in the solar system when it is only moving slowly from that virtual standstill? The outer planets were discovered only because slight perturbations in the known planets were observed and analyzed to point to another body in motion, farther out. But these perturbations were extreme, in comparison to an inbound object on a virtually straight-line path, as the paths of these outer planets were from side to side, thus causing a more noticeable motion in the perturbed bodies. Other than perturbing toward Orion, the direction of the inbound 12th Planet, by all the planets in the solar system, there is little steady evidence that the 12th Planet exists. But as it begins its passage, in the few short years prior to the passage, palpable changes are evident. The Earth's core is heating up, the plates jiggling into a lock so that quakes in one ricochet into the neighboring plate, and volcanic activity increasing as the core of the Earth swirls about. Europa, one of Jupiter's moons, is noted to be heating up too. How can an object so distant affect the planets and moons?
Human theories about the motion of the planets in their orbits, and their placement, have little basis in fact. All slung into position when the solar system first formed, with motion and centrifugal force holding it all in place? This is nonsense, as we have explained, and man's theories fail to account for the vast majority of factors that actually hold the motion of suns and solar systems in place in an equilibrium established coming out of any local Big Bang. Mankind has only a faint explanation for why all the planets line up in the ecliptic, though this clearly is a flow of particles with the planets in the backwash. Earth's magnetic field does not point in the direction it does by accident, nor does the field simply encompass Earth: it goes far beyond the solar system, into several nearby systems and beyond. Gravity, which holds the planets close to their Sun but also keeps them apart by the repulsion force, is little understood by man, who has failed to understand this phenomenon in the context of a particle flow. They are still clinging to the theory, without basis, that the Sun has magnetic reversals, as this is an explanation for why wandering poles are evident on the crust of the Earth, when pole shifts are the obvious explanation, as Hapgood long ago presented. Thus, they cannot conceive of an equilibrium in the solar system, being out of touch as they are with so many basic issues.
When the 12th Planet is riding the mid-point of its orbit, the equilibrium exists. When it begins to approach, several particle flows are changed, and as these particle flows envelope and influence Earth and the other planets in the system, these changes become evident. Gravity and magnetic particles are only a couple of the flows affected. Now, in any given equilibrium, change is noticed where the equilibrium is taken for granted. Thus, the fact that the Earth rotates, has x temperature in its core, and points in x direction is taken as normal. When its temperature rises, this is noticed, and commented upon. Why would the temperature not rise, when the core is pulled in more directions, has more activity, and is thus exuding more heat particles? Magnetic diffusion is another change noticed, but this is easily explained in the context of yet another magnetic planet coming closer, so that magnetic particles are flowing more here, less there, in the vicinity of Earth. Thus, it is not so much what causes every change upon the approach of the 12th Planet, as it is a mystery why mankind is astonished. He is asleep on his assumptions, and only waking when they move about! Thus, the approach of the 12th Planet is evidenced by changes in the solar system because the equilibrium is being changed, the status quo altered, when it moves from a virtual standstill mid-way in its path to begin a passage. This equilibrium should be viewed as a net reaching out into the Universe, encompassing not only a local solar system but a galaxy. Why do the galaxies stay where they are? This is not a local affair!
Note: added during the Aug 24, 2002 Live ZetaTalk IRC Session.
We have stated that Planet X is disrupting the Earth's equilibrium, and the equilibrium of other planets and their moons, from afar. We have stated in explanation of this that mankind little realizes what an equilibrium in a solar system means, understanding little of the factors involved. We have stated that mankind assumes the planets are staying where they are due to centrifugal force and motion established a long time ago, and has no explanation for the ecliptic or the steady and undegrading orbits of the planets. We have stated that should mankind understand all the factors involved in the solar system equilibrium, they would not be surprised at the changes, but they are so far from understanding that shock is the reaction.
But a valid question, in this, is how the Earth could continue to be affected to the point of expressing magnetic diffusion when the Sun, the giant magnetic influence, stands between the Earth and Planet X. Is this not a buffer? Would the magnetism not return to normal during these times? A disturbed equilibrium is not a simple thing, a wire placed between the planets such that cutting this wire returns all to what man considers to be normal. A disturbed equilibrium is many, virtually thousands, of factors, pulling in all directions, piling up and spilling over slowly, dispersing in directions and then returning. Particle flow is something mankind does not understand, assuming gravity to be a force, not a flow, and magnetism to create magnetic fields for unexplained reasons. Thus, when the Earth moves such that the Sun is between Planet X and itself, all the many factors throughout the solar system continue to push and pull, unchanged, and the disruption in the equilibrium continues unchanged.
Fires are classified by the types of fuel they burn.
Class A Fires consist of ordinary combustibles such as wood, paper, trash or anything else that leaves an ash. Water works best to extinguish a Class A fire.
Class B Fires are fueled by flammable or combustible liquids, which include oil, gasoline, and other similar materials. Smothering effects which deplete the oxygen supply work best to extinguish Class B fires.
Class C Fires are energized electrical fires. Always de-energize the circuit, then use a non-conductive extinguishing agent, such as carbon dioxide.
Class D Fires are combustible metal fires. Magnesium and titanium are the most common types of metal fires. Once a metal ignites, do not use water in an attempt to extinguish it. Only use a dry powder extinguishing agent. Dry powder agents work by smothering and heat absorption.
Class K Fires are fires that involve cooking oils, grease or animal fat and can be extinguished using Purple K, the typical agent found in kitchen or galley extinguishers.
An easy way to remember these types of fires: Class A leaves an Ash, Class B Boils, Class C has Current, and Class D has Dense material. And don't forget the most overlooked, Class K for Kitchen. The sketch below turns this classification into a simple lookup.
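The classification is easy to restate as a lookup table; this small Python sketch simply encodes the classes and agents listed above (it adds no fire-safety guidance of its own).

FIRE_CLASSES = {
    "A": ("ordinary combustibles (wood, paper, trash)", "water"),
    "B": ("flammable or combustible liquids (oil, gasoline)", "smothering agents that deplete oxygen"),
    "C": ("energized electrical equipment", "de-energize, then a non-conductive agent such as CO2"),
    "D": ("combustible metals (magnesium, titanium)", "dry powder only - never water"),
    "K": ("cooking oils, grease, animal fat", "Purple K kitchen/galley extinguisher"),
}

def extinguishing_advice(fire_class):
    # Look up the fuel type and recommended agent for a given fire class.
    fuel, agent = FIRE_CLASSES[fire_class.upper()]
    return f"Class {fire_class.upper()} ({fuel}): use {agent}"

print(extinguishing_advice("d"))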
|
Natural law is a common understanding of human nature and ethics. Humans are part of nature, so we are capable of perceiving and living by natural rules, and applying those rules in a universal way. Universally self-explanatory principles of equality, sovereignty, and dignity should guide our interactions with others.
Natural law based philosophy provides the foundation for natural rights or human rights, which undergird the Declaration of Independence, Constitution, and English and American systems of jurisprudence. Natural Law theories can be found in Greek, Roman, and ancient Buddhist texts. Plato, Aristotle, the Stoics, Cicero, Thomas Aquinas, Bacon, Grotius, Spinoza, Locke, Hobbes, and many others argued for various forms of natural law.
Natural Law philosophy should not be confused with the scientific laws of physics or biology. Human nature in the natural law sense, means that each of us has an innate tendency to behave in ways that are good for ourselves and good for others. We share common values and an understanding of ethics which derives from our nature. This is one of the things that makes us human.
We also have the free will to choose how to behave. Corruption represents a turning away from our true nature as humans. Things can go haywire if our understanding and feelings are corrupted by our upbringing, culture, and negative socialization.
The highest ideal is to unite your conduct with the good in nature; the interconnectedness and preciousness of life, and respect for yourself and others. |
This is a single spark chamber element from the stack of 28 modules used in the gamma-ray detector called EGRET (Energetic Gamma-Ray Experiment Telescope), one of four major instruments that flew on the Compton Gamma Ray Observatory satellite (CGRO). EGRET was responsible for producing an all-sky map of the gamma-ray sky, locating new sources of gamma rays in the energy range from 20 million to 30 billion electron volts for closer study. Gamma rays consist of energetic electromagnetic radiation that arises from a number of exotic processes occurring in particularly violent regions of the universe. These include solar flares, nuclear reactions resulting from supernova core collapse, the decay of radioactive particles in interstellar space, collisions of cosmic rays with interstellar gases and grains, annihilation events when matter and antimatter interact in the vicinity of neutron stars and black holes, and regimes in the cores of galaxies where supermassive black holes create intense gravitational acceleration.

CGRO was launched in 1991 from the Space Shuttle Atlantis and provided data on celestial gamma-ray sources until it was commanded to re-enter the Earth's atmosphere in June 2000, to minimize the chance of injury from the remnants of the 17-ton satellite, the largest civilian scientific payload ever flown on the Shuttle. This flight spare unit was manufactured by Ideas Inc., under contract to NASA's Goddard Space Flight Center. It was transferred to NASM in 1993 and is now on display in the Explore the Universe gallery.
EGRET element transferred from NASA's Goddard Space Flight Center |
The bonobo (Pan paniscus) is a great ape. Bonobos and chimpanzees are the closest living relatives of human beings. Together bonobos and chimpanzees make up the genus Pan.
In the wild, bonobos can be found in the Democratic Republic of the Congo, in the forests of the Congo River Basin.
Bonobos resemble chimpanzees very closely. However, a bonobo has longer hair, a smaller head and a flatter face than a chimpanzee. Bonobos have red lips.
A bonobo's body is slimmer than a chimpanzee's body, with longer legs than a chimpanzee has.
The first two toes on a bonobo's foot are webbed.
Like chimpanzees, bonobos walk on their knuckles. Sometimes they will walk on two legs. Bonobos are more likely to walk bipedally than chimpanzees are.
Bonobos can climb trees and brachiate (swing from tree to tree).
The average bonobo male weighs about 99 pounds (45 kilograms) and is about 3 feet 11 inches (119 centimeters) tall. The average bonobo female weighs about 73 pounds (33 kilograms) and is about 3 feet 7 inches (111 centimeters) tall.
Bonobos mostly eat fruit. They also eat other plant parts, insects, worms and mammals.
They are endangered because of habitat loss, hunting and the sale of babies as pets.
Bonobos live in large social groups. These groups break up into smaller subgroups that travel together looking for food. This is known as a fission-fusion society. Chimpanzees also have fission-fusion societies.
In bonobos, female-female and male-female social bonds tend to be stronger than male-male social bonds.
While female chimpanzees are subordinate to males, in bonobo society, females are dominant.
Males usually stay with their mother's social group when they become adults. The social ranking of a male and his mother are interconnected. Having a mother with a high social ranking will help to improve a young male's position in the group. As both the male and his mother get older, his rise in social ranking will help to improve her social position.
A female will leave her social group and join another one after she reaches puberty.
Sex plays a very important role in bonobo society. It is used as a way to diffuse conflicts.
Male-male and female-female sex is very common and sexual activity includes acts outside of intercourse, such as genital rubbing.
Bonobos have sex to avoid fighting over food or other objects.
When two bonobos have a disagreement about something, after a threat display, they will resolve their differences by having sex.
When a female enters a new social group after leaving her birth group, she will engage in genital rubbing with the other females in the group as a way of integrating herself into the new group.
Both male and female bonobos can have many sexual partners and mating takes place throughout the year. A female will have sex with any male in the group except her own son.
A bonobo female usually has one child at a time.
Parental care is provided by the mother.
Bonobo babies nurse until they are about four years old.
A bonobo reaches adulthood when it is about fifteen.
Like all great apes, bonobos are self aware.
Bonobos have been taught to use language.
Kanzi, a bonobo who was born in 1980, was the first non-human ape to learn to use language spontaneously, without formal training. He developed language skills the same way that a human child would.
When Kanzi was a baby, primatologist Sue Savage-Rumbaugh tried to teach his adoptive mother, Matata, to communicate using lexigrams - symbols that represent words. Kanzi stayed in the laboratory with his mother, but he was considered too young to be taught language.
Matata was never able to learn how to use lexigrams. However, when she was sent away to a breeding program for a short time, Kanzi spontaneously began using the lexigrams, asking for food and asking where his mother was.
Kanzi's half-sister, Panbanisha, who was born in 1985, also uses lexigrams. She can draw lexigrams with chalk.
Bonobos in captivity use tools.
Kanzi manufactures Oldowan-style stone tools and can build a fire. |
Human activity seriously damages riparian forests in Central and Eastern Europe
Riparian forests play an important role for both nature and humans. They preserve plant and animal species, prevent bank erosion and reduce the risk of floods by retaining water. The most serious causes of loss and destruction of riparian forests can be attributed to their clearing for agricultural use, replacement with hybrid plantations for intensive timber production, river bed correction and aggregate mining, which subsequently lead to dramatic changes in the river flow regime and erosion. Unauthorized and improperly conducted logging and construction of hydropower plants also seriously damage the riverine forests.
The significant ecological importance of these forests, the damage they have already suffered, and the threats they face today call for immediate efforts for their restoration.
Along the Danube River, the conservation work of WWF includes creation of new riparian forests through forestation activities using typical local species, and improving the structure and functions of existing forests. Restoration activities on the Danube islands aim to recreate conditions that are close to the natural environment. They improve the chances for regeneration and allow for an earlier start of natural processes. This leads to a better recovery of the ecosystem as a whole.
Along the Old Drava, WWF’s main objective is the conservation of the riparian habitat through improving the water regime and biodiversity status of floodplain forests along the oxbows. The restoration includes the stabilization of the water level through retention structures in the oxbow and the improvement of water supply from the main course of Drava River. The improved water levels secure favorable conditions for the alluvial forest and provide favorable ecological circumstances for other aquatic habitats.
A summary of the lessons learned from the WWF’s conservation work on restoration of riparian forests in the region can be found in the new publication. |
Getting started - What is CLIPC?
CLIPC provides access to climate information of direct relevance to a wide variety of users, catering for consultant advisers, policy makers, private sector decision makers and scientists, but also interested members of the general public. This “one-stop-shop” platform allows you to find answers to questions related to climate and climate impacts.

CLIPC information includes data from satellite and in-situ observations, climate models, data re-analyses, and transformed data products enabling impact assessments and assessment of climate change impact indicators. CLIPC complements existing services such as GMES/Copernicus pre-operational components, but focuses on datasets providing information on climate variability and climate change on decadal to centennial time scales, from observed and projected climate change impacts in Europe. With that, guidance information on the quality and limitations of all data products is provided.
Furthermore, CLIPC provides a toolbox to generate, compare, manipulate and combine indicators. Growing climate data volumes are supported, now and in the future, by a distributed, scalable system based on international standards. An ongoing user consultation process will continue to feed back into the products for some time into the future. Part of the toolbox is integrated with Climate-ADAPT.
A number of use cases have been developed to demonstrate how the functionalities of the CLIPC Impacts Indicator Toolbox can be used to identify, select, compare, combine and rank indicators, for applications in research or for decision support in policy and practice situations.
CLIPC ensures that the provenance of science and policy relevant data products is well-documented. Clarity of provenance is supported by providing access to intermediate data products. Documentation includes information on the technical quality of data, on metrics related to scientific quality, and on uncertainties in and limitations of the data.
The CLIPC consortium was funded by the European Union’s Seventh framework programme (FP7) and brings together the key institutions in Europe working on developing and making available datasets on climate observations and modelling, and on impact analysis. CLIPC works closely with four concurrent FP7 projects developing pre-operational Copernicus Climate Change services for global re-analyses (ERA-CLIM2, UERRA, QA4ECV and EUCLEIA).
|
A team of astronomers working with the American space agency NASA has discovered an asteroid, designated 2016 HO3, which is not a true satellite of our planet but rather a quasi-satellite. It actually orbits the Sun, but because of the characteristics of its trajectory it has been circling the Earth for almost a hundred years and, according to calculations, will not leave it for at least a few centuries.
The body was found in April of this year with the automatic Hawaiian telescope Pan-STARRS 1. So far scientists cannot state its size and mass with confidence, but its diameter is expected to be no less than forty meters and no more than one hundred meters. The asteroid spends about half the time closer to the Sun than the Earth does, and the other half farther away. This happens because the gravitational field of the Earth constantly "corrects" its orbit.
The influence of Earth's gravity is enough to keep 2016 HO3 from ever drifting away more than about 200 times farther than the Moon, while it never approaches the planet closer than about 38 times the Earth-Moon distance. As a result, the asteroid has long revolved around the Earth without presenting any danger to it.
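A quick back-of-the-envelope conversion of those bounds, assuming a mean Earth-Moon distance of about 384,400 km (a value not stated in the article):

LUNAR_DISTANCE_KM = 384_400  # assumed mean Earth-Moon distance

nearest_km = 38 * LUNAR_DISTANCE_KM    # closest approach quoted above
farthest_km = 200 * LUNAR_DISTANCE_KM  # maximum drift before Earth pulls it back

print(f"closest approach: ~{nearest_km / 1e6:.1f} million km")   # ~14.6
print(f"maximum distance: ~{farthest_km / 1e6:.1f} million km")  # ~76.9

# Even at its closest, the quasi-satellite stays tens of millions of
# kilometers away, which is why it poses no danger to Earth.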
Scientists have previously discovered asteroids that were quasi-satellites of Earth for some time, including 2001 GO2, 2002 AA29, 2003 YN107 and 2004 GU9 (the four-digit number in each case refers to the year of discovery). However, 2003 YN107 has already departed from the planet, freed from its gravitational influence, and most other quasi-satellites, present and future, await the same fate. The orbit of 2016 HO3, scientists say, is significantly more stable. |
When delving into the world of translation in all its various forms, the word interpretation often comes up when changing messages into different languages. This article will explain the complex and difficult job of an interpreter, how the job is done, and the difference between an interpreter and a translator. Since an interpreter works across languages, the work is also an exchange of cultural messages. This means that the interpreter must be familiar with both cultures in terms of references, symbols and meanings to deliver a well-interpreted message.
Firstly, the difference between a translator and an interpreter is that translation is often a one-directional change of language: a translator will usually translate only into their native language. The act of interpretation, by contrast, requires this transaction to happen in both directions across two languages, which demands that the interpreter be fluently bilingual. The two roles also require very different skill sets. Interpreters usually have no reference resources at hand to assist their linguistic conversions. An interpreter's job is not just translating word for word into a new language but creating a connection between people, carrying over tone, intentions and emotions in what is being said.
When an interpreter is rendering a message, strong decision-making powers are crucial, as there is no time for hesitation. There are two main types of interpretation: consecutive and simultaneous. Consecutive interpreting happens in settings where natural breaks occur; in meetings there will often be a natural pause every 1-5 minutes, where the speaker stops and the information is interpreted. Consecutive interpretation requires an excellent memory for what has been said, as well as note-taking ability. Interpreters frequently develop their own methods of note-taking: rather than using words, they take notes in symbols and ideas that carry across languages.
In simultaneous interpretation there is about a half-sentence delay between the original speech and the interpreter's spoken or written output. This requires a great deal of skill, as interpreters must be knowledgeable in the general subject of what they are interpreting. They must also have a very wide vocabulary in both of their working languages and be able to express themselves fully in both.
The job of an interpreter is crucial to international affairs and requires a great deal of professionalism. Interpreters can often be found in diplomatic settings such as EU or UN meetings, which require simultaneous interpretation. Interpretation also occurs during court trials, face-to-face meetings and speeches.
Interpretation can occur in person, by telephone, or through video conferencing and internet-based programs. Access to the internet has lessened many of the practical restrictions on interpreters. Interpreters are well versed in dealing with different languages and people, and they play a vital role in communication across different cultures. |
Most urban sites have some amount of mineral soil in place when the time comes to install plant material, yet these soils are often assumed – erroneously – to be unsuitable.
Historically there have been two approaches to this situation. The default option is to ignore the problem, or make minor modifications such as digging the planting hole twice the size of the root ball and back filling with an imported soil. The second, and very expensive, option is to remove all the native soil and replace it with imported soil. But there is a third option – improving the existing, imperfect soil – that can be a suitable middle ground approach.
Understanding what can be modified
In order to evaluate this alternative, you must first identify what is wrong (if anything) with the existing soil. In this article the word “soil” is considered as mineral material with particle sizes classified as clay, silt, or sand in the USDA nomenclature system. The term “loam” means soil names with the modifier word “loam” in the USDA soil textural triangle. Note that the term “loam” does not require the soil to have any organic matter content. Even natural soils can have significant amounts of gravel and rock and still grow healthy forest trees, so some amount of rock and debris – maybe 15% or more by volume – can be present.
There are four basic types of soil conditions:
- Remnant loam soils, usually B or C horizons with fill and or paving added during the development history of the site. These soils may have elevated compaction but never experienced deep grading. These are excellent soils to reuse, often with minimum effort. Remnant soil including good topsoil may be buried under fill layers and can be reused if not too deep.
- Remnant and imported local loam soils that were deeply graded with horizons and soil types mixed together. These can be reused with appropriate modification.
- Remnant and imported soil material that are at the extremes of the soil textural triangle, particularly heavy clays, and stony/gravelly materials, and materials that contain significant building rubble. Significant stone, gravel and rubble would be soil with 15 to 30% by volume. Elevated pH and poor drainage may also be factors in the ability to reuse the soil. These are much more difficult soils to reuse and require understanding of the exact issues with the soil.
- Soil identified as containing hazardous chemicals. The threshold for levels hazardous to plants is typically much higher (parts per thousand or even parts per hundred) than for levels hazardous to people, where parts per million may be considered hazardous. Levels hazardous to people may require the soil to be removed even though plants can still grow well. Chemical hazards are a condition beyond the scope of this article – consult a soil expert before considering reuse.
Soil conditions don’t always reflect above-ground boundaries. An urban site may have some or all of these soil types distributed horizontally or vertically across the area. Furthermore, soil types may change dramatically over the course of construction, and soil problems identified in the initial site inspection maybe different after a long construction period.
There are four soil characteristics that must be met in order to make a soil suitable for planting (a simple screening check is sketched after the list):
- Compaction levels appropriate for root growth (<85% Proctor density)
- Adequate amount of organic matter (2.5% to 5%)
- Plant-appropriate soil nutrient levels and pH
- Adequate drainage
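A minimal screening sketch in Python, using the thresholds above; the input values are made-up field measurements, and the pH window is an assumption, since the article gives no numeric pH range at this point.

def soil_problems(proctor_pct, organic_pct, ph, drains_ok):
    # Returns a list of issues; an empty list means the soil screens as plantable.
    problems = []
    if proctor_pct >= 85:
        problems.append("compaction at or above 85% Proctor density")
    if not 2.5 <= organic_pct <= 5.0:
        problems.append("organic matter outside the 2.5-5% range")
    if not 5.5 <= ph <= 8.0:  # assumed plant-appropriate window
        problems.append("pH outside a plant-appropriate range")
    if not drains_ok:
        problems.append("inadequate drainage")
    return problems

print(soil_problems(proctor_pct=90, organic_pct=1.5, ph=7.8, drains_ok=True))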
Identify soil types and conditions
Understanding both soil type and soil condition is the key to successfully reusing existing site soil. With planning, problematic characteristics can be remedied, or different plant selections made, to adapt to the conditions.
The first step in reusing existing soils is to identify the type and extent of the resource. In urban areas this is often a challenging undertaking. Layers of paving and surface compaction may make typical investigations difficult, and understanding deeper soil conditions is hard work. Investigation of the soil profile, including the soil below the surface layers, is critical. These deeper soils may be a good resource – or they may conceal soil problems that must be addressed.
You will probably need to use non-standard soil investigation methods. The following are good references on how to do an urban soil assessment.
- “Up By Roots – Healthy Soils and Trees in the Built Environment”, James Urban, ISA Publications, 2008, Part 1, Chapter 7, pp 117-144.
- “Managing Soils that Support Urban Trees (part one)”, ISA publications, 2015.
- “Soil Management for Urban Trees – Best Management Practices”, B. Scharenbrock, T. Smiley, W. Kocher, ISA Publications, 2014.
Soil protection and storage during construction
Once you’ve decided that the existing soils can be reused for planting, it is essential to protect that resource from further damage during the construction process.
On most projects there is a long period of time when a building is being constructed and the contractor is not likely to invest in protecting what might otherwise be considered “dirt”. They may not recognize that the soil still has use and value. In some markets, and particularly at smaller sites, saving the existing soil may not net sufficient savings to offset the cost of working around a large volume of material.
On the other hand, soils can be protected, remediated, and stockpiled for future use. Soil stockpiles are the easiest way to preserve soil. Since the stockpile will be spread, loosening any compaction, it can be used for parking or material storage if grades permit. At tight urban sites, where only small amounts of soil are being reused, soil can be stored in 40 cubic yard dumpsters as the work proceeds down the street. Taking a flexible approach to soil storage areas may encourage the contractor to agree to its reuse.
Modifying existing soils
The two biggest problems common to urban soils are excessive compaction and low organic matter, both of which can be addressed.
Compaction can be loosened by turning or deep tilling with a backhoe. Compost, at a rate of about 15% by volume, can be added during the soil loosening to improve organic matter; the resulting soil can be excellent planting soil. Drainage problems can be solved by installing subsurface drainage and/or loosening subsurface hard pans.
High pH can also be an issue. pH between 7.5 and 8 is best dealt with by avoiding plants intolerant of alkaline soil; there is a wide range of plants that can grow at high pH. Chemical deficiencies can be adjusted using organic fertilizers, and compost by itself may be sufficient. Chemical toxicity is a different problem that needs special attention beyond the scope of this article.
Limitations of soil reuse
There are limitations to the reuse of existing soil.
The greatest is probably space. Construction sites are chaotic places where all available surfaces may be needed as the various aspects of the work are completed. Rarely is there much “free” space. If the soils are already compacted, or will later be graded or dug up as part of the process, leaving the soil in place and simply working on it may be acceptable. If the soil has to be placed in a truck, hauled offsite, stored, and then trucked back to the site, even over a short distance, the cost savings can evaporate.
The second limitation is a cultural bias against the idea of soil reuse. Designers have been taught to assume that all urban soil is bad and should be replaced. Some may not have the confidence or experience to undertake the relatively simple process of soil analysis described above – but only a small amount of experience is needed to become familiar with the tools required to recognize which soils are suitable. This is a skill any designer can develop with a little practice and experience.
There is a cultural bias at the construction level as well. Many contractors have not had sufficient experience with soil reuse and may bid it at no savings, or even a higher cost, than installing imported soil.
How to specify soil reuse
If you decide to reuse site soil you will need three things: a soil reuse plan, details, and specifications.
The soil reuse plan should identify the areas of soil that are to be retained and reused.
For soil to be reused with minimum grading or soil movement, protecting the soil from further compaction and contamination must be detailed, including any fencing and restrictions on use by the contractor.
If the soil is to be stripped and stockpiled, the extent and depth of the soil removal should be noted. Any required storage areas must be noted and coordinated with other work and the contractor.
In general, the quality and color of the various soils to be reused should be noted in the plans and specifications. Adding color photos of the range of acceptable soil color and profile to the specification can be useful. Provisions for inspection may be needed to confirm that the assumptions of soil quality are accurate. It may be reasonable to put in an add/deduct unit price to cover the possibility that more soil needs to be removed or imported than the assumed quantity.
The newly released soil specifications and details developed by the Urban Tree Foundation are a great resource for this, and include many of the required provisions for soil reuse and modification. Up By Roots also contains a detailed discussion on reusing existing soils.
Reused soils may be equal or superior to imported soil for many reasons.
They typically have greater amounts of clay and silt which, if not pulverized by soil blending machines, can offer improved water retention and drainage equal to that of high-sand imported soils. These soils, when loosened by a backhoe, retain large soil peds, and the fracturing improves drainage.
In addition, the environmental benefits of reusing soil are significant and include the impact on landfill site disturbance at another location, and the carbon footprint of removing and importing soil. Reuse of existing resources is an important first principle in sustainable development and I urge designers and contractors to actively pursue opportunities to do this on their projects.
James Urban, FASLA, is the author of “Up By Roots.”
Cancer is an illness caused by an abnormal growth of cells in the body. In the UK, someone is diagnosed with cancer every two minutes, and there are 293,000 cases of cancer every year. Access to cancer services including treatment, care and support is vital to mitigating the effects of any form of cancer, but there are inequalities in the delivery and take up of cancer services, particularly by minority ethnic groups including African communities.
- Breast cancer is more common among African women than white women, and research has shown that African women between the ages of 15-64 years have significantly poorer survival rates.
- Evidence also shows that African women develop breast cancer 10-20 years earlier than white women.
- In the UK studies have shown that men of African descent have approximately two to three times the risk of being diagnosed or dying from prostate cancer than white men.
- These increased rates are likely to be due to a mixture of hereditary and lifestyle factors.
The disproportionate levels of breast and prostate cancer in people of African origin warrant a specific, targeted and immediate response. Many of the risk factors for cancer, such as poor diet, are linked to wider economic, social and health inequalities; addressing cancer in the African community therefore needs to be linked with addressing these wider inequalities.
Eighty years after it was first theorized by an Italian physicist, evidence of what appears to be Majorana fermions was discovered by scientists from Stanford University in California and the University of California.
According to theory, the Big Bang 13.7 billion years ago created matter and antimatter in equal quantities. Thus, for every electron there is a positron, and for every quark there would have been an antiquark.
The theory further suggests that the laws of nature required matter and antimatter to be created in pairs. The two were said to be mirror images of each other, with the same mass but opposite electric charge and other quantum numbers.
For some unexplained reason, within a tiny fraction of a second after the Big Bang, matter outnumbered its opposing particle by a hair.
Apparently, for every billion antiparticles, there were a billion and one particles. This phenomenon resulted in the annihilation of antimatter within a second of the creation of the universe, leaving behind only matter.
Up to this day, scientists are still trying to uncover the reason behind this ‘asymmetry’ that resulted only in the survival of matter.
Majorana Fermion: The Future of Quantum Computing
In 1937, theoretical physicist Ettore Majorana theorized the existence of a class of fermion that is its own antiparticle. Such particles are neither matter nor antimatter; instead, they are both.
On Friday, an article published in the prestigious journal Science reported that researchers found evidence of the existence of Majorana fermions.
The team of scientists labeled the quasiparticle as ‘Angel Particle,’ after the infamous bomb made of matter and antimatter in the Dan Brown thriller Angels and Demons.
For the experiment, Professor Shoucheng Zhang and his team from Stanford University used a thin film of a topological insulator, which conducts electricity on its edges but is insulating within, and coupled it with a layer of superconductor where electrons can flow without resistance. Then, a magnet was swept over the stack. The layer of materials showed varying electrical conductivity in “discrete jumps of the size expected for Majorana fermions.”
“The experiment came out exactly in the way we predicted,” Zhang said.
To set the record straight, what the researchers discovered were quasiparticles, not actual Majorana fermions. According to a statement from Giorgio Gratta, a Stanford physics professor:
“The quasiparticles they observed are essentially excitations in a material that behave like Majorana particles. But they are not elementary particles and they are made in a very artificial way in a very specially prepared material. It’s very unlikely that they occur out in the universe, although who are we to say? On the other hand, neutrinos are everywhere, and if they are found to be Majorana particles we would show that nature not only has made this kind of particles possible but, in fact, has literally filled the universe with them.”
Quasiparticles are not particles. They are used by scientists as a stand-in for particles that might not actually be there but whose surroundings are registering effects that make it seem as though they are there.
In the past, several experiments also showed traces of Majorana fermions. However, what Zhang and his team accomplished gave scientists a glimpse of a different side of the quasiparticle. Taylor Hughes, a theoretical physicist from the University of Illinois, said:
“Certainly as far as chiral Majorana fermions go, this is the only definitive evidence that has been reported.”
The existence of these fermions could lead to a technological revolution that sees quantum computers become a reality in the near future.
Apparently, a quantum bit, or qubit, of information can be stored across two separate ‘Angel Particles,’ or Majorana fermions. This means that if the information in one particle is disturbed by interference, the other particle holding the same information keeps it safe.
A qubit can hold multiple bits of information. Now, imagine storing a qubit in a fermion.
Scientists believe that the discovery of Majorana fermions is a breakthrough that could boost the development of present-day quantum computers like the D-Wave used by Google and NASA.
Bar-graph Temperature Monitoring System
Created: Sep 17, 2014
Temperature sensors are used in a wide variety of scientific and engineering applications, especially measurement systems. A temperature sensor is a device that measures temperature or a temperature gradient using any of a variety of different principles. This project uses integrated circuits to sense and display the temperature.
The component responsible for sensing the temperature is the LM35 IC, a precision integrated-circuit temperature sensor with a range of -55°C to 150°C. It is calibrated directly in degrees Celsius (Centigrade), with a linear +10 mV per °C scale factor. The output of the LM35 is connected to the SIGNAL pin of the LM3914 IC. The LM3914 is a display driver that senses analog voltage levels and drives ten LEDs, LCDs, or vacuum fluorescent displays, providing a linear analog display. It has selectable bar or dot display modes and is expandable to display one hundred output segments. Output current to the LEDs is regulated and programmable, eliminating the need for resistors.
Two LM3914s are used in this project to support a 20-step output. The brightness of the output LEDs is controlled by the trimmer resistor configured at the REFOUT (pin 7) and REFADJ (pin 8) pins of the two LM3914s. Another trimmer resistor is configured at the RLO (pin 4) and RHI (pin 6) pins for the offset voltage. Offset voltage is the differential input voltage that must be applied to each comparator of the LM3914 to bias the output in the linear region.
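The display logic itself is entirely analog, but the underlying arithmetic is easy to illustrate in software. The following minimal Python sketch assumes a hypothetical 10-bit ADC with a 5 V reference and a 0 to 100°C display range (none of which are part of the original circuit) and shows how an LM35 reading maps onto a 20-step bar.

V_REF = 5.0           # assumed ADC reference voltage, in volts
ADC_MAX = 1023        # assumed 10-bit ADC full-scale count
SCALE_MV_PER_C = 10   # LM35 scale factor: +10 mV per degree Celsius

def lm35_temperature(adc_value):
    # Convert a raw ADC count to degrees Celsius using the 10 mV/°C factor.
    millivolts = adc_value * V_REF * 1000 / ADC_MAX
    return millivolts / SCALE_MV_PER_C

def bar_steps(temp_c, t_min=0.0, t_max=100.0, steps=20):
    # Map a temperature onto the 20 steps covered by the two LM3914s.
    fraction = (temp_c - t_min) / (t_max - t_min)
    return max(0, min(steps, round(fraction * steps)))

print(bar_steps(lm35_temperature(72)))   # about 35°C, so 7 of 20 steps lit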
Investigations into the nature of light constitute a venerable tradition. By the eighteenth century, various explanations, none broadly agreed, had existed to explain the phenomenon. (This was, in Kuhnian terms, a preparadigm phase.) Newton’s theory that light came in discrete bits, corpuscles, was pitted against the contradictory idea of Huygens that it was continuous. Dr Thomas Young, the father of physioptics, whom we have already met in connection with acoustic machines and as an influence on Faraday, here made his most fertile contribution to the scientific competencies underpinning modern communications systems by proving Huygens right. The proof was to stand for the next 104 years.
In 1801, Young was studying the patterns thrown on a screen when light from a monochromatic source, sodium, passed through a narrow slit. Areas lit through one slit darkened when a second slit, illuminated by the same sodium light source, opened. This phenomenon—interference—Young explained by assuming that light consists of continuous waves and suggesting that interference was caused when the crests of the waves from one slit were cancelled by the troughs emanating from the second. He was able to measure the wavelengths of different coloured lights, getting close to modern results. The importance of the concept of interference cannot be overstated since it is still current and lies at the heart of holography; but for television Young’s experiment was suggestive because, eventually, it allowed researchers to think of systems which treated light waves as telephony treated sound waves.
Television depends in essence on the photovoltaic (or photoemissive) effect, that is, the characteristic possessed by some substances of releasing electrons when struck by light. The observation of this phenomenon is credited to a 19-year-old, Edmond Becquerel, in 1839, but it seems that his father, the savant Antoine César, may have helped him to prepare his account for L’Académie des Sciences. Their
A NEW STUDY FROM CONCORDIA has been testing whether early second-language education could promote higher acceptance of social and physical diversity. And what do you know — oui and sí, it looks to be true.
Most young kids believe that human characteristics are innate. That kind of reasoning leads many to think that things such as native language and clothing preference are intrinsic rather than acquired.
But it seems like bilingual kids, especially those who learn another language in the preschool years, are more apt to understand that it’s what one learns, rather than what one is born with, that makes up a person’s psychological attributes. Unlike their one-language-speaking friends, many kids who have been exposed to a second language after age three believe that an individual’s traits arise from experience.
The Concordia study tested a total of 48 monolingual, simultaneous bilingual (learned two languages at once) and sequential bilingual (learned one language and then another) five- and six-year-olds.
These kiddos were told stories about babies born to English parents but who were later adopted by Italians, and also stories about ducks raised by dogs. The kids were then asked if those children would speak English or Italian when they grew up, and whether the babies born to dog parents would quack or bark. The kids were also quizzed on whether the baby ducks raised by dog parents would be feathery or furry.
The study predicted that sequential bilinguals’ own experience of learning language would help them understand that human language is actually learned, but that all children would expect other traits such as animal vocalizations and physical characteristics to be innate. But the results were a little surprising. Sequential bilinguals did demonstrate reduced essentialist beliefs about language — they knew that a baby raised by Italians would speak Italian. But they were also significantly more likely to believe that an animal’s physical traits and vocalizations are also learned through experience — for example, that a duck raised by dogs would bark and run instead of quack and fly.
Basically, monolinguals were more likely to think that everything is innate, while bilinguals were more likely to think that everything is learned.
This study provides an important demonstration that everyday experience in one aspect — language learning — can influence children’s beliefs about a wide range of domains, reducing children’s essentialist biases.
The study has important social implications because adults who hold stronger essentialist beliefs are more likely to endorse stereotypes and prejudiced attitudes; therefore, early second-language education could be used to promote the acceptance of human social and physical diversity.
So, in a nutshell, we’re offering you a good, scientifically backed-up excuse why you absolutely need to hit the road and travel with your kids more. It’s not for you; it’s basically for the benefit of all mankind. You’re welcome.
The Solar System (sometimes stylized as "Sol System") consists of a star (the Sun) and the planets orbiting it, together with moons, asteroids, comets, meteors and other objects gravitationally bound to the Sun. There are four terrestrial, or rocky, planets and four gas giants; the terrestrial planets are nearer to the Sun. In general terms, it might be described as a "failed double star with some rocky debris present".
In cosmic terms, it is sometimes described as a member of the Solar Interstellar Neighborhood, the Milky Way Galaxy, the Local Galactic Group, the Virgo Supercluster, the local superclusters, and the observable Universe.
In order of distance from the sun the planets are:
- Mercury 0.39 AU
- Venus 0.72 AU
- Earth 1.0 AU
- Mars 1.5 AU
- Jupiter 5.2 AU
- Saturn 9.5 AU
- Uranus 19 AU
- Neptune 29 AU
1 AU (astronomical unit) = 149 598 000 kilometers
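As a quick worked example, the distances in the list above can be converted to kilometers with a few lines of Python (a minimal sketch using the conversion factor just given):

AU_KM = 149_598_000   # kilometers per astronomical unit

planets_au = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.0, "Mars": 1.5,
    "Jupiter": 5.2, "Saturn": 9.5, "Uranus": 19.0, "Neptune": 29.0,
}

for name, au in planets_au.items():
    print(f"{name}: {au * AU_KM:,.0f} km")   # e.g. Earth: 149,598,000 km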
Dwarf planets
Pluto used to be considered a planet, but is now classified as a "dwarf planet" together with the former "minor planet" (asteroid) Ceres, the discord-producing Eris and some other objects in the Kuiper Belt. The rebranding of Pluto as a dwarf planet in 2006 caused a minor pop-science and media storm, with many traditionalists demanding that Pluto be kept as a planet. However, the reasoning behind the classification was the rate of discovery of new objects in the outer solar system. Each new object could have been classed as a planet, but this would have led to a considerable definition problem (prior to 2006 "planet" wasn't even properly defined; there had been no pressing need until this point), with ever-smaller objects qualifying until they counted as asteroids. At least one object, Eris, is known to be larger than Pluto, and Ceres had already had its planetary status revoked decades previously. "Plutoid" is a sub-category of distant dwarf planets that at least acknowledges Pluto as the first one discovered.
List of dwarf planets and dwarf planet candidates in order of increasing distance to the Sun, as of October 2011:
- Ceres, the former largest and first discovered (in 1801) asteroid
- Pluto, the former "ninth planet", discovered in 1930
- Charon, Pluto's largest moon, big enough to pull the system's barycenter into the empty space between Pluto and Charon, discovered in 1978.
- Orcus, dwarf planet candidate, discovered in 2004. May be even slightly closer to the Sun on average than Pluto.
- Haumea, discovered in 2004, accepted as a dwarf planet in 2008
- Quaoar, dwarf planet candidate, discovered in 2002
- Makemake, discovered in 2005, accepted as a dwarf planet in 2008
- 2007 OR10, dwarf planet candidate, discovered in 2007
- Eris (a.k.a. 2003 UB313 or "Xena"), discovered in 2005, accepted as a dwarf planet in 2006. It was called "the tenth planet" in the media for some time, driving Planet X believers
- Sedna, dwarf planet candidate, discovered in 2003. Really, really, really far away from the Sun.
Since the category was created in 2006, the number of known dwarf planets and candidates has climbed to 44.
The Solar System is also home to hundreds of thousands of asteroids, concentrated mostly in a belt between Mars and Jupiter, as well as innumerable icy bodies in the outer reaches of the system, which become comets when they are perturbed into the inner system and made visible by evaporation of part of their bodies by the heat of the sun.
Kuiper belt
The Kuiper belt is a "disk" extending to around 50 AU from the Sun that contains thousands of small, icy planetoids, including Pluto. The Kuiper belt is believed to be the source of short-period comets.
Oort cloud
Surrounding the solar system is an immense cloud of icy bodies called the Oort cloud. It lies roughly 50,000 AU from the Sun, or about a quarter of the way to Proxima Centauri, the nearest star to our system. Its vast distance means that it is only loosely bound to the solar system and is easily disturbed by other stars. Such disturbances are believed to be the source of all long-period comets.
A disc-shaped cloud of comets occupies some of the space between the Oort cloud and the Kuiper belt. This inner cloud is called the Hills Cloud.
- Solar System Scale Model, a site-model of the Solar System where both the sizes and the distances to the Sun are on the same scale. Prepare for lots of scrolling; the page is estimated to be a half-mile wide on a standard 72 dpi screen.
- Bill Nye riding his bike along a scale model of the Solar System. "There's a lot of space in space!"
A new study has found that Arctic sea ice melt is creating a warming spiral, with the thinner winter sheets that replace long-term sea ice absorbing more solar heat and energy.
The paper by scientists at the Alfred Wegener Institute for Polar and Marine Research in Germany found that solar radiation transmitted through ‘first-year ice’ was three times greater, allowing 50% more energy absorption, than was the case with ‘multi-year ice’.
This in turn could change the face of the Arctic.
“Ice melt and less sea ice cover will [themselves] make it more likely that more ice will melt in the next years ahead,” Marcel Nicolaus, one of the report’s authors, told EurActiv. “We see that light transmission through sea ice will increase in the future.”
While previous studies had indicated that solar radiation was melting sea ice at the surface, and a warmer ocean was melting it at the bottom, the new paper found that Arctic ice sheets were increasingly melting from within too.
“We showed here that the older multi-year ice is covered with fewer ponds at the surface, while the newer, younger ice has more ponds,” Nicolaus said.
“This albedo radiation transfer effect will be more pronounced in the future,” he added.
Increased Arctic light transmission will also affect sea life in the Arctic ocean, although more research is needed to understand how.
“We didn’t do biological studies but that’s definitely the way to go to connect these kinds of observations,” Nicolaus said, over the phone from his office in Bremerhaven. “I’m not a biologist and I can’t say how it could affect [sea life], I can just say that it will change it.”
Sea ice surface cover in the Arctic region plunged to a new record low of 24% last year, some 50% less than climate scientists at the UN’s Intergovernmental Panel on Climate Change (IPCC) had predicted.
In the 1970s, when the first satellites began tracking the region’s ice cover, sea ice usually covered about half of the ocean at its lowest point.
“A continuation of the observed sea ice changes will increase the amount of light penetrating into the Arctic Ocean, enhancing sea ice melt and affecting sea-ice and upper-ocean ecosystems,” says the report, which was published in the journal Geophysical Research Letters.
Some climate scientists believe that feedback loops could easily develop, as warmer oceans exponentially melt more ice and reduce the Arctic’s reflectivity of solar heat back into space, so warming the planet still further.
This process could potentially melt the pivotal land-based Greenland ice sheet – so increasing sea levels – and release methane hydrates, frozen beneath the surface of Arctic permafrost and on the sea’s floor.
Greenland’s ice sheet is already melting three times faster today than it did in the 1990s.
Rapid sea level rise
The IPCC’s fourth assessment report in 2007 warned that “partial loss of ice sheets on polar land and/or the thermal expansion of seawater over very long time scales could imply metres of sea level rise, major changes in coastlines and inundation of low-lying areas, with greatest effects in river deltas and low-lying islands.”
The report notably does not exclude rapid sea level rise within a century, given global temperature increases of between 1.9 and 4.6 degrees Celsius, which are well within the bounds of possibility.
The scientists at the Alfred Wegener Institute are at pains to stress that the Arctic thaw they have observed will not directly increase sea levels, as the transformation of ice into water does not affect its mass or volume.
But they add the caveat that the process could well contribute to an acceleration of more general global warming that melts land-based glaciers and causes run-off that swells the size of the world’s oceans.
“The more surface area covered with ice that we lose, the more energy we will absorb into the oceans and the more we will help these feedback processes to increase warming and more melting,” Nicolaus said.
* Oct. 2014: IPCC’s Fifth Assessment Synthesis Report to be published
Learning through the use of technology is a hot topic, especially when it comes to enhancing traditional classroom learning. Popular trends in this direction include connectivism, blended learning, and mobile learning. Unfortunately, most public schools are not on the cutting edge of this trend, and little data is available to determine how specific technologies are used in the classroom. Below are some of the most common ways technology can enhance learning, each with its own pros and cons. Hopefully, these examples will spark some debate among educators.
What is techno-progressivism? Often called tech-progressivism, it is an active support of social and technological change. This stance is based on the belief that the future of work and society will be shaped by technological advancement. Although the term itself is broad and inclusive, it encompasses a number of sub-categories.
The term “Luddism” is a pejorative for those who are opposed to many forms of modern technology. It is often associated with people who are technologically phobic, and it is based on the historical legacy of the English Luddites. Neo-Luddism and technology can be confusing, but it is worth exploring. Here are a few things to know about this movement.
In this article I want to discuss the social aspect of the ecovillage movement. The dominant economic model ignores the social and environmental impacts of globalization, as these are externalized effects that have no economic value. These effects are a result of the global economic crisis that has afflicted our society. Therefore, we have to start thinking about how we can create a future where we can live in harmony with our environment.
Whether it’s helping with a medical decision, analyzing customer data, or managing an organization’s data, AI has been a popular topic of discussion in business and the tech world. AI solutions can help businesses become more efficient, reduce costs, and improve productivity. Before implementing AI, however, organizations should evaluate their requirements and find the right solution for their needs. Cloud-based AI solutions are more accessible than on-premise projects, but finding a qualified partner who specializes in the technology is challenging. Also, make sure you get a solution that has the appropriate long-term support.
Manufacturing technology is a broad field that encompasses many different types of industrial processes. Different types of manufacturing processes can improve business processes and streamline relationships with customers and suppliers. Other types of manufacturing technology can speed up production and increase the variety of products available to consumers. For example, wine production uses technology to improve product quality while reducing costs. Regardless of the industry, process technology plays a crucial role in the decisions made during a manufacturing process.
Information technology is a broad term that refers to the use of computers, networks, software, and other devices to process and organize data and information. It also encompasses governance and policies. High-functioning information technology departments are almost invisible to the rest of the business, as they anticipate issues and create solutions proactively. Typical IT responsibilities include networking, technical support, and data storage.
Much of the Earth is covered with water, which according to geographical studies represents about 71 to 72% of the globe’s surface. About 96.5% of that water is held by the oceans. Yet despite the vastness of the aquatic environments that serve as habitats for fish species and other marine life, the world is at risk of losing many of the fisheries that provide the food that billions of humans consume every day.
That is because researchers have found that every year, as much as 77 billion kilograms of fish and other aquatic creatures are taken out of the oceans. Countries have therefore been warned about overfishing and the use of fishing methods that speed up the depletion of fish populations all over the world.
Although governments now have fishing laws and agencies in place to manage their respective fisheries, the laws and sustainable fishing methods observed and adopted by Norway and Iceland have been noted as exceptional and worthy of emulation.
NORWAY – The World’s Second Largest Seafood Exporter
The Norwegian government imposes strict laws for catching Norwegian Arctic cod. The laws have actually been in place since 1816, which has ensured the longevity of the supply of the country’s local cod fish, or skrei.
Every year, over 400 million skrei migrate to the coast. However, the Norwegian Seafood Council imposes strict conditions under which only about 10 percent of the cod caught during fishing operations, which take place only between January and April, qualify. That is because only full-grown, wild cod without imperfections such as nicks and bruises qualify as commercial skrei.
After inspection, the Marine Stewardship Council approves and certifies the caught skrei, which the council requires to be packed within 12 hours. The remaining 90% of the migrating skrei are returned to the Barents Sea to promote growth in the species’ population.
The most important aspect of the system is the Council’s strict policing of the sustainable processes that ensure the efficiency of the country’s fishing laws.
ICELAND – Foremost in the Development of the Fish Quota System
Like Norway, Iceland had fishing regulations in place long before other countries depleted their fish supplies. In 1901, the government of Iceland imposed a fishing zone limit of 3 miles, within which only Icelanders had the right to fish. The purpose was to protect the country’s supply of cod and haddock, which even at that time was noted to be diminishing. In 1976, the government of Iceland expanded the fishing zone limit to 200 miles.
In 1995, Iceland also introduced a system of fishing quotas to regulate the volume of stock that each fishing vessel is allowed to haul in as catch. First, the total allowable catch for a specific period of the year is established, which is 25% of the available stock. Stock availability is determined twice a year by scientists, who re-evaluate the quotas. If the available stock falls as a result of previous fishing operations, the limiting system automatically calls for the closure of the fishing ground.
Fishing as recreation and as a means of livelihood both require great understanding and knowledge, for which websites like https://www.northpolevoyages.com provide comprehensive information.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Posted on Sep 19th 2018
13 Ways to Foster Excitement & Interest in Biology Lab
A recent study found that only 11.4 percent of students are interested in STEM careers upon entering high school, and that percentage drops to 10 percent by the time students graduate. As science teachers, we can find that news hard to hear. But one thing that can make a real difference is getting students engaged with hands-on labs throughout their high school careers. How do you inspire a student who might approach biology lab reluctantly? These teacher-tested ideas can help.
1. “Don’t just do labs, be scientists!”
Sometimes teaching is all about the tone you set. A lab coat and goggles can build excitement and take some of the “ick” out of touching a wriggly worm. Latex or nitrile gloves can help too. Making it about science and learning rather than what may be on the lab table in front of them is often just the distance that some kids need. High school teacher Lane M. says, “Anytime there’s potential for an ‘eeeww!’ moment, I anticipate and celebrate it. The kids love how I get into the ‘grossness’ and I think that my enthusiasm and lack of fear helps them to learn that it’s just part of the world we live in, and they relax that much more.”
2. Dive in without time for angst.
Sometimes kids do best when they can just jump into a lab without devoting too much thought to what they’re working with. One science teacher we know likes to keep things moving along at a fairly decent clip so students don’t have too much time to think about the concepts that might make them squeamish. For example, if you’re going to do an activity with owl pellets, don’t spend a lot of time dwelling on how the bones ended up in the pellet. Just jump in and see what you can find.
3. On the other hand, consider setting the stage.
You know your students best. While some might like to jump right in, others might do better with more context building and background knowledge. Reading a book, using flash cards, or playing with an anatomical model might help these kids.
4. Consider the bribe or at least the peppermint.
Okay, we don’t actually endorse full-on bribery, but there’s nothing wrong with a little extrinsic motivation every now and then. Teacher Gina D. suggests, “I give my students peppermints as each task is complete.” The peppermints are a reward, a distraction, and can also be helpful if the particular lab has an associated strong odor. Along the same lines, teacher Rachel M. suggests picking up “some nice-smelling soap or lotion for students to use after lab. If they participate and get the work done, they can use the special soap and lotion. It works for me.” Little treats to look forward to can often make wary students more willing.
5. Set a timer.
Knowing that it won’t last forever makes a difference whenever a person has to do something that they might find uncomfortable. Using a timer to help kids move from one step to the next can be helpful, particularly if kids know what the timeline looks like ahead of time. Seeing that only 20 minutes of the lab time is actually devoted to the portion that they may be nervous about can be comforting to some students.
6. Play background music.
Some science teachers we know swear by a soothing, instrumental soundtrack for lab time. Check out this list of educator-recommended Pandora stations for ideas.
7. Feed them the appropriate response.
You can help kids learn to channel their gut reactions. Sometimes kids respond to a situation negatively just because they don’t know how else to react. Science teacher Michelle K. writes, “I prompt my middle school kids to say ‘Oooh, science!’ whenever there is anything gross. Most of the time when kids say ‘eww’ or giggle, they just need to get something out because it’s the first time they’ve seen something like that. Having them say ‘Oooh, science!’ allows for a neutral response.”
8. Choose compelling concepts.
Help students understand the purpose of the lab, exactly what they can learn from it, and why it’s so important. They might be more willing to take risks that way. Tell them why this lab is fascinating. Tell them what you expect them to learn. Help them understand why it matters. If you can make a frog dissection sound like the most exciting and entertaining science lab they’re going to experience all year, then even your reluctant students just might go along with you for the ride.
9. Put the kids in charge.
No, not totally in charge—that could lead to disaster. But give your students some autonomy. Ask them to design their own lab. Teri at Crazy Teaching asked her physics students to propose and execute labs that answer key physics questions. Teri says, “Over the years, I had to learn that student self-designed labs are exercises in thinking and creativity, not about content and getting it ‘right.’ Instead, these labs are about giving students the opportunity to make their own mistakes, realize they made the mistakes, and then allow them to fix those mistakes. It's more about the process than the science to me, about seeing them learning how to learn.”
10. Use humor.
A little bit of humor can go a long way toward lightening up a tough topic. Biology teacher Mary M. reports that she hosts funny contests during bio lab time. She invites students to face off with battles for the longest intestine or the best stomach contents. Other teachers try to start the class off with a laugh. Having one of these images on the projector as students walk in the room might be just enough to set a more relaxed tone and relieve some of that nervousness that some kids are feeling.
11. Ask questions.
There’s almost no better place to implement the inquiry model of education than in the science lab. Asking students key questions to guide their learning rather than spilling out endless facts can up your engagement levels. If your students ask, “What are we supposed to do next?” turn it around. Ask them what they think they should do next. Make a list of “why” questions with your class. Getting students involved like this invites them into ownership of the activity. They’re going to learn it if they have to own it.
12. Teach the controversy.
We can dance around the issue all day, but the truth is biology lab can be controversial. Some students are ethically opposed to dissection. If you can address the issue directly with your students and help them explore the conflict, they might be more willing to participate in the lab. Teach them how specimens for lab are usually ethically sourced. Talk about how, under the right circumstances, these kinds of labs can offer the kinds of insights that kids can’t get anywhere else.
13. Allow an opt-out or alternative activity.
While there is rarely an authentic replacement for real hands-on learning, there can be some good substitutes. If you’ve tried all of the strategies above and still can’t get one or two students on board with your planned lab, offer those students an alternative activity. Sometimes it’s worth making an exception for one or two students so the rest of the class can have the benefit of real hands-on learning. Set your students up with a related video and ask them to review fundamental concepts in written form.
Science lab is the perfect vehicle for the kind of hands-on learning that captures the imagination of the majority of our students. Helping all of our students to enjoy lab can build a lasting love of science.
When injecting stem cells into a patient, how do the cells know where to go? How do they know to travel to a specific damage site, without getting distracted along the way?
Scientists are now discovering that in some cases they do, but in many cases they don’t. So engineers have found a way to give stem cells a little help.
As reported in today’s Cell Reports, engineers at Brigham and Women’s Hospital (BWH) in Boston, along with scientists at the pharmaceutical company Sanofi, have identified a suite of chemical compounds that can help stem cells find their way. “There are all kinds of techniques and tools that can be used to manipulate cells outside the body and get them into almost anything we want, but once we transplant cells we lose complete control over them,” said Jeff Karp, the paper’s co-senior author, in a news release, highlighting just how difficult it is to make sure the stem cells reach their destination.
So, Karp and his team—in collaboration with Sanofi—began to screen thousands of chemical compounds, known as small molecules, that they could physically attach to the stem cells prior to injection and that could guide the cells to the appropriate site of damage. Not unlike a molecular ‘GPS.’
Starting with more than 9,000 compounds, the Sanofi team narrowed the candidates down to just six. They then used a microfluidic device: a microscope slide with tiny glass channels designed to mimic human blood vessels. Stem cells pretreated with the compound Ro-31-8425 (one of the most promising of the six) stuck to the sides, an indication, says the team, that Ro-31-8425 might help stem cells home in on their target.
But how would these pre-treated cells fare in animal models? To find out, Karp enlisted the help of Charles Lin, an expert in optical imaging at Massachusetts General Hospital. First, the team injected the pre-treated cells into mouse models each containing an inflamed ear. Then, using Lin’s optical imaging techniques, they tracked the cells’ journey. Much to their excitement, the cells went immediately to the site of inflammation—and then they began to repair the damage.
According to Oren Levy, the study’s co-first author, these results are especially encouraging because they point to how doctors may someday soon deliver much-needed stem cell therapies to patients:
“There’s a great need to develop strategies that improve the clinical impact of cell-based therapies. If you can create an engineering strategy that is safe, cost effective and simple to apply, that’s exactly what we need to achieve the promise of cell-based therapy.”
Goal of the class:
To know more about this Spanish reality which sometimes is seen as a mere stereotype.
How did you structure the class?
Activity 1 (10 min): Warm-up. Students talk in pairs about what they would do if:
- the world ended in 24 hours
- they had 1 billion dollars
- they had one year of vacation
- they had a time travel machine
Activity 2 (15 min): A YouTube video is played on the screen. The clip is made with real images of bullfights and a song. After the first viewing, they briefly discuss what it is about. Then they are given the lyrics with some gaps to fill in while listening to the song a second time. Vocabulary is discussed beforehand.
Activity 3 (15 min): I show them news stories about the reality of bullfighting in Spain. We have an open political debate.
Activity 4 (15 min): I show them a PowerPoint about the history of bullfights. Afterwards, they discuss in groups the difficulty of erasing an old tradition in a country.
What technology, media or props did you use? (internet resources, playmobiles, handouts, etc.)
What worked well in this class? What did not work?
The class worked well. Students were very curious about the topic. I even had to stop the political debate because they were getting too involved. I used a non-related warm-up activity, but you can use another one.
How could this class be improved/ modified?
This topic is very intense; there should be a relaxing class either the day after or the day before in order to counterbalance it.
If you have a more detailed lesson plan, please attach it below (OK to use target language for that). Please attach any handouts as well.
Sir Isaac Newton's three laws of motion describe the motion of massive bodies and how they interact. While Newton's laws may seem obvious to us today, more than three centuries ago they were considered revolutionary.
Newton was one of the most influential scientists of all time. His ideas became the basis for modern physics. He built upon ideas put forth in the works of previous scientists, including Galileo and Aristotle, and was able to prove some ideas that had previously been only theories. He studied optics, astronomy and math — he invented calculus. (German mathematician Gottfried Leibniz is also credited with developing it independently at about the same time.)
Newton is perhaps best known for his work in studying gravity and the motion of planets. Urged on by astronomer Edmond Halley after admitting he had lost his proof of elliptical orbits a few years prior, Newton published his laws in 1687, in his seminal work "Philosophiæ Naturalis Principia Mathematica" (Mathematical Principles of Natural Philosophy) in which he formalized the description of how massive bodies move under the influence of external forces.
In formulating his three scientific laws, Newton simplified his treatment of massive bodies by considering them to be mathematical points with no size or rotation. This allowed him to ignore factors such as friction, air resistance, temperature, material properties, etc., and concentrate on phenomena that can be described solely in terms of mass, length and time. Consequently, the three laws cannot be used to describe precisely the behavior of large rigid or deformable objects; however, in many cases they provide suitably accurate approximations.
Newton's laws pertain to the motion of massive bodies in an inertial reference frame, sometimes called a Newtonian reference frame, although Newton himself never described such a reference frame. An inertial reference frame can be described as a 3-dimensional coordinate system that is either stationary or in uniform linear motion, i.e., it is not accelerating or rotating. He found that motion within such an inertial reference frame could be described by three simple laws.
The First Law of Motion states, "A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force." This simply means that things cannot start, stop, or change direction all by themselves. It takes some force acting on them from the outside to cause such a change. This property of massive bodies to resist changes in their state of motion is sometimes called inertia.
The Second Law of Motion describes what happens to a massive body when it is acted upon by an external force. It states, "The force acting on an object is equal to the mass of that object times its acceleration." This is written in mathematical form as F = ma, where F is force, m is mass, and a is acceleration. The bold letters indicate that force and acceleration are vector quantities, which means they have both magnitude and direction. The force can be a single force, or it can be the vector sum of more than one force, which is the net force after all the forces are combined.
When a constant force acts on a massive body, it causes it to accelerate, i.e., to change its velocity, at a constant rate. In the simplest case, a force applied to an object at rest causes it to accelerate in the direction of the force. However, if the object is already in motion, or if this situation is viewed from a moving reference frame, that body might appear to speed up, slow down, or change direction depending on the direction of the force and the directions that the object and reference frame are moving relative to each other.
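As a minimal sketch of how the second law is applied, the snippet below sums a list of 2D force vectors into a net force and divides by the mass; the numbers are purely illustrative:

def acceleration(forces, mass):
    # Net force is the vector sum of all applied forces; a = F_net / m.
    net_fx = sum(fx for fx, fy in forces)
    net_fy = sum(fy for fx, fy in forces)
    return (net_fx / mass, net_fy / mass)

# Two forces on a 2 kg body: 10 N east and 5 N north.
print(acceleration([(10.0, 0.0), (0.0, 5.0)], 2.0))   # (5.0, 2.5) m/s^2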
The Third Law of Motion states, "For every action, there is an equal and opposite reaction." This law describes what happens to a body when it exerts a force on another body. Forces always occur in pairs, so when one body pushes against another, the second body pushes back just as hard. For example, when you push a cart, the cart pushes back against you; when you pull on a rope, the rope pulls back against you; when gravity pulls you down against the ground, the ground pushes up against your feet; and when a rocket ignites its fuel behind it, the expanding exhaust gas pushes on the rocket causing it to accelerate.
If one object is much, much more massive than the other, particularly in the case of the first object being anchored to the Earth, virtually all of the acceleration is imparted to the second object, and the acceleration of the first object can be safely ignored. For instance, if you were to throw a baseball to the west, you would not have to consider that you actually caused the rotation of the Earth to speed up ever so slightly while the ball was in the air. However, if you were standing on roller skates, and you threw a bowling ball forward, you would start moving backward at a noticeable speed.
The three laws have been verified by countless experiments over the past three centuries, and they are still being widely used to this day to describe the kinds of objects and speeds that we encounter in everyday life. They form the foundation of what is now known as classical mechanics, which is the study of massive objects that are larger than the very small scales addressed by quantum mechanics and that are moving slower than the very high speeds addressed by relativistic mechanics.
- What is File Handling In Python?
- Common functions used in file handling in python:
- Common access modes of python file handling:
- Other access modes for python file handling:
- Methods used with file positions:
- More file operations
- Directory operations
What is File Handling In Python?
File Handling In Python – Python is generally preferred for its simplicity. Similarly, file handling concepts that are tougher in many other object-oriented languages are made easy in Python. Python files can be handled in text as well as binary mode.
The general syntax of working with files:
file_object = open(filename, access_mode)
file_object – indicates the object created for the file
open() – this method opens the file and performs the operation based on the access_mode
filename – the file which needs to be manipulated
access_mode – specifies the mode of operation such as read, write or append
Common functions used in file handling in python:
There are 4 common functions used in Python file handling, without which file operations cannot be performed. They are:
- read() – reads the file that has been passed to the file object
- write() – writes the content specified to the file
- open() – opens the file
- close() – closes the file that has been opened. It is always good practice to close the file at the end (see the sketch below).
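In modern Python, the close() call can be handled automatically by opening the file with a context manager; here is a minimal sketch of the pattern:

# The with statement closes the file automatically, even if an error occurs.
with open("sample.txt", "w") as f:
    f.write("File handling in python")
# f is closed here; no explicit f.close() is needed.

The examples below use explicit close() calls to keep each step visible.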
Common access modes of python file handling:
There are 4 common modes of operation for file handing in python:
- x – indicates the creation of a file
- r– indicates that the file has to be read
- w– indicates to write contents in the file. If the file already exists, then it just overwrites the file with the current contents.
- a– indicates that contents need to be appended. The file pointer will be at the end of the file. If the specified file is not present then it creates a new file and the file pointer will be at the start of the file.
Example 1: Creating a text file
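A minimal version using the x mode described above:

f=open("sample.txt","x")   # "x" creates the file; an error is raised if it already exists
f.close()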
A sample.txt file will be created in the folder the python file is located
Example 2: Writing a file
f=open("sample.txt","w") f.write("File handling in python") f.close()
The contents are written in the sample.txt file
Output in Notepad: File handling in python
Example 3: Reading a file
f=open("sample.txt","r") print(f.read()) f.close()
File handling in python
Example 4 : Appending a file
f=open("sample.txt","a") f.write("… Happy learning….") f.close()
Updated data in sample.txt will be: File handling in python... Happy learning....
Other access modes for python file handling:
- rb – The file is read in binary format.
- r+ – the file can be used for both reading and writing contents in text format
- rb+ – the file can be used for both reading and writing contents in binary format
- wb – the file can be written in binary format
- w+ – the file can be used for both reading and writing in text format
- wb+ – the file can be used for both reading and writing in binary format
- ab – the file contents can be appended in binary format
- a+ – the file can be used for both reading and appending contents in text format
- ab+ – the file can be used for both reading and appending contents in binary format
Methods used with file positions:
There are 2 file methods that are used to deal with file positions:
The tell() method identifies the position at which the current file pointer is located. In other words, it is the distance of the file pointer from the beginning of the file.
f=open("sample.txt","r") print(f.read()) # reads the file print(f.tell()) # the file is read thus the file pointer # is now at the end of the contents # in the file f.close()
File handling in python... Happy learning....
45
The seek() method is used to change the current file position. This function can be written with either 1 or 2 parameters.
file_object.seek(offset)
file_object.seek(offset, from)
offset – the offset value indicates the number of bytes the file pointer is to be moved. A positive number moves the pointer forward, toward the end of the file; a negative number moves it backward, toward the beginning (backward seeks are only allowed in binary mode).
from – the from parameter can take only 3 values. If it is 0, the offset is measured from the beginning of the file. If it is 1, the current position of the file pointer is used as the reference. If it is 2, the end of the file is used as the reference.
f=open("sample.txt", "r") print(f.read()) f.seek(20) print(f.tell())
File handling in python... Happy learning....
20
f=open("sample.txt", "r") f.seek(5,0) print(f.read())
handling in python... Happy learning....
More file operations
The os module can be used for file operations and directory-related operations in Python. To perform those operations, the os module needs to be imported first.
- rename()– can be used to rename a file
- remove() – remove or delete a file
os.rename(old_file_name. new_file_name) os.remove(file_name)
import os
os.rename("sample.txt", "example.txt")
The text file sample.txt is renamed to example.txt
import os
os.remove("example.txt")
The example.txt file is deleted
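Calling os.remove() on a file that does not exist raises FileNotFoundError, so the call is often guarded; a minimal sketch:

import os

# delete the file only if it actually exists
if os.path.exists("example.txt"):
    os.remove("example.txt")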
There are four common directory operations you can perform in Python file handling:
- mkdir() – creates a new directory
- chdir() – changes the current working directory
- getcwd() – returns the current working directory
- rmdir() – removes a directory (the directory must be empty)
import os
os.mkdir("practice")
Creates a new directory called practice
import os
os.chdir("/desktop/sample")
Changes the current working directory to the specified path
import os
print(os.getcwd())
Returns the current working directory
import os
os.rmdir("/sample")
Removes the specified directory
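Putting the four operations together; a minimal sketch (the directory name is just an example):

import os

os.mkdir("practice")    # create a new directory
os.chdir("practice")    # make it the current working directory
print(os.getcwd())      # prints a path ending in .../practice
os.chdir("..")          # step back out before removing it
os.rmdir("practice")    # succeeds because "practice" is empty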
This post is contributed by Kushmitha Radhakrishnan.
The Desert Massasauga, Sistrurus catenatus edwardsii, is a subspecies of the Massasauga rattlesnake. It is found in the southwestern United States, primarily in Texas, New Mexico and Arizona. There are also small populations in Colorado, Oklahoma, California, and in northern Mexico. It is found in rocky, semi-arid and arid areas.
The Desert Massasauga has a light gray or white base color with dark gray or gray-brown blotches. Their underside is typically entirely white. They are among the smallest of the rattlesnakes, growing to an average of 26 inches in length. They have eyes with vertical pupils and a distinctive dark stripe that runs along the side of the head, passing over the eye. Like all rattlesnakes, they have a rattle on their tail composed of keratin, which gains a segment each time the snake sheds its skin.
The diet of this snake consists primarily of rodents, lizards and frogs. Their rattles are significantly higher pitched than those of larger rattlesnakes, earning them the nickname "buzztail." They are primarily nocturnal, especially during the summer when it is too hot for them to be active by day, but they can sometimes be found out sunning themselves.
Massasauga venom is more potent than that of many larger species of rattlesnake, but due to the lower yield of venom that the massasauga produces in each bite, it is typically not considered lethal in humans. However, the powerful hemotoxin can cause swelling, necrosis, and severe pain, and any bite should be treated immediately.
The Desert Massasauga is listed as a species of concern in Colorado, due to its limited range in the state, and it is protected by Arizona state law. It is listed as a sensitive species by the United States Forest Service.
Photo by LA Dawson
Since 1988, the U.S. Government has set aside the period from September 15 to October 15 as National Hispanic Heritage Month to honor the many contributions Hispanic Americans have made and continue to make to the United States of America. Our Teacher's Guide brings together resources created during NEH Summer Seminars and Institutes, lesson plans for K-12 classrooms, and think pieces on events and experiences across Hispanic history and heritage.
Our literary glossary provides a comprehensive list of terms and concepts along with lesson plans for teaching these topics in K-12 classrooms. Whether you are starting with a specific author, concept, or text, or teaching a particular literary term without a ready-made lesson or activity, teachers and students will find what they're looking for here.
This Teacher's Guide provides compelling questions to frame a unit of study and inquiry projects on the Reconstruction Era. It includes NEH-sponsored multimedia resources, activity ideas that make use of newspapers from the period, interdisciplinary approaches that bring social studies, ELA, and music education together, and resources for a DBQ and seminar.
Poet. Orator. Actress. Activist. Writer. Singer. Phenomenal Woman. These and many more superlatives are used to describe the incomparable Maya Angelou. Gone too soon in 2014 at the age of 86, Dr. Angelou’s legacy will live on through the words she used to eloquently, powerfully, and honestly express emotions, capture experiences, and spread hope.
For more than 400 years, Shakespeare’s 37 surviving plays, 154 sonnets, and other poems have been read, performed, taught, reinterpreted, and enjoyed the world over. This Teacher's Guide includes ideas for bringing the Bard and pop culture together, along with how performers around the world have infused their respective local histories and cultures into these works.
EDSITEment brings online humanities resources directly to the classroom through exemplary lesson plans and student activities. EDSITEment develops AP level lessons based on primary source documents that cover the most frequently taught topics and themes in American history. Many of these lessons were developed by teachers and scholars associated with the City University of New York and Ashland University.
Our collection of resources is designed to assist students and teachers as they prepare their NHD projects and highlights the long partnership that has existed between the National Endowment for the Humanities and National History Day. Resources for the current theme and previous years are available.
Created through a partnership between the National Endowment for the Humanities and the Library of Congress, Chronicling America offers visitors the ability to search and view newspaper pages from 1690 to 1963 and to find information about American newspapers published from 1690 to the present using the National Digital Newspaper Program.
This collection of free, authoritative source information about the history, politics, geography, and culture of many states and territories has been funded by the National Endowment for the Humanities. Our Teacher's Guide provides compelling questions, links to humanities organizations and local projects, and research activity ideas for integrating local history into humanities courses.
Legionella bacteria are common in wet and humid environments. The optimal temperature range for their growth is 35-46 °C, and thus they typically thrive in water with a temperature of 30-50 °C. There are many different species of the Legionella bacterium, and far from all of them cause sickness in humans. Legionella bacteria can be the cause of two illnesses in humans:
- Legionnaires’ disease, an infection that produces pneumonia
- Pontiac fever, which resembles acute influenza
When designing air-conditioning systems, the risk of bacterial growth must be considered. All hot and cold-water systems should be provided with an effective water disinfection system that is able to remove biofilm and kill free bacteria and other micro-organisms without affecting the taste and smell of the water. National building codes, legislation and other national guidelines concerning hot water systems must be observed as well.
There are various hygienic methods of minimising the risk of bacterial growth and of killing the bacteria:
- Disinfection, e.g. chlorine dioxide
- Heat treatment by circulating hot water
- Filter systems
Cold water systems:
When designing cold water systems, it is of great importance to consider water temperature, retention time, pipe material and regular system maintenance in order to prevent microbial growth.
Especially in large and tall buildings, cold water often heats up to a level where a wide range of bacteria can breed. Where the water main enters the building, the cold water has a temperature of 8-15 °C; after that point, the water temperature starts to increase.
Depending on consumption, the water temperature often reaches almost the same temperature as the surrounding air, and the cold water is then likely to contain bacteria. Like other micro-organisms, legionellae live and feed in the biofilm found inside pipes and tanks.
Many cold-water systems are at risk of becoming infected, but the risk of growth is increased in systems where:
- Pipe and tank insulation is missing or in poor condition
- Cold and hot water pipes are co-insulated
- There are dead-ends where there is no water flow
- Rooftop tanks and break tanks are used. Tanks should be located inside the building and should be sized for a low retention time
- Water tanks are made of organic material; the tank itself will serve as a food source for bacteria
- Pipes are oversized; stagnant water increases the risk of bacterial growth
- The pipe material can rust; rust is a good food source for bacteria
Cold water systems in buildings with a risk of scaling during low consumption periods should be provided with an effective water disinfection system.
Cooling water in cooling towers used for air-conditioning purposes is often subject to microbial growth, and a disinfection system should always be installed. Cooling towers and evaporative condensers are used to dissipate unwanted heat to the atmosphere through water evaporation. Water is sprayed into the cooling tower through spray nozzles, and tiny airborne droplets are formed.
While falling through the tower, some of the water evaporates but some droplets, known as drift, are carried out of the tower by the air stream produced by the fans. The presence of drift has been detected as far as 6 km away from the cooling tower.
Legionella bacteria often grow in the water and are easily dispersed together with the drift. This water mist can be breathed into the respiratory system, posing a risk of Legionnaires' disease and Pontiac fever. Cases in which hundreds of people have been affected by a single cooling tower have been reported.
Hot water systems:
When designing hot water systems, the water temperature, water retention time, pipe material and regular system maintenance are of great importance in order to prevent microbial growth.
All hot water systems are at risk of becoming infected, but the risk of growth is increased in systems where:
- Warm water remains more or less stagnant due to low consumption
- Biofilm inside tanks and pipes has been allowed to build up
- The water temperature is between 25 °C and 46 °C, which is ideal for legionella
- There are dead-ends without flow
- There is sediment, rust, scale or sludge, which provides a good food source for the bacteria
- Pipe and tank insulation is missing or in poor condition
- The system is not maintained properly
Water temperatures in hot water tanks should, however, always be kept at 60 °C, and temperatures at the tap should be no less than 55 °C. If the water temperature exceeds 60 °C, undesirable scaling will occur in tanks and pipes.
When designing a hot water system, it should always be considered whether a hot water exchanger can be used instead of a hot water tank. A hot water exchanger is often a plate exchanger and is characterised by having no standing water volume where bacteria can grow.
Microbial growth is a problem not only in hot and cold-water systems and cooling towers, but also in many other applications in commercial buildings, such as water fountains, spas, swimming pools and fruit and vegetable moisturising systems.
Common to the systems below is that legionella often grows in the water and aerosols are easily dispersed to the surroundings. All these systems should be provided with effective water disinfection systems that are able to remove biofilm and kill free bacteria and other micro-organisms.
- Water fountains:
Water fountains in shopping malls, airports, hotels and amusement parks are subject to bacterial growth. Water is sprayed into the air, forming airborne droplets that are easily inhaled into the lungs of guests. Fountain water has the same temperature as the surrounding air, 25-35 °C; at that temperature, legionella and other bacteria grow easily in the water and biofilm.
- Spa baths:
Legionella are a particular problem in spa baths because the water is at an optimum temperature for the legionella bacteria to grow and because dirt, dead skin cells and the like from the people using the bath provide food for the bacteria. Furthermore, the piping for the air and water circulation provides a large surface area for the bacteria to grow on. The agitated water in spas forms aerosols in which legionella bacteria can be contained and inhaled.
- Fruit and vegetable moisturising:
In order to keep fruit and vegetables fresh for as long as possible, water is sprayed into the air in many groceries and supermarkets. This procedure not only reduces moisture and weight loss of fruit and vegetables but also promotes re-hydration. Re-hydration enables fresh produce to regain the moisture lost since harvest and therefore extends fruit and vegetable life dramatically.
Use the Grundfos sizing tool to ensure correct sizing of your disinfection system to combat Legionella in the water system.
How to fight legionella in Commercial Buildings
Download our application guide on the measures that need to be taken to ensure that water systems in commercial and residential buildings are kept safe. It describes what legionella is and what the sources of legionella in a commercial building are, and it shows how these sources are treated to the best possible effect.
Imagine putting a seed in a freezer, waiting 30,000 years, and then taking the seed out and planting it. Do you think a flower would grow?
Amazingly, scientists have just managed to do something very similar. They found the fruit of an ancient plant that had been frozen underground in Siberia — a region covering central and eastern Russia — for about 31,800 years. Using pieces of the fruit, the scientists grew plants in a lab. The new blooms have delicate white petals. They are also the oldest flowering plants that researchers have ever revived from a deep freeze.
“This is like regenerating a dinosaur from tissues of an ancient egg,” University of California, Los Angeles biologist Jane Shen-Miller told Science News.
The plant has a long history. Back when mammoths and woolly rhinoceroses roamed the land, an Arctic ground squirrel buried seeds and fruits in an underground chamber near the Kolyma River in northeastern Siberia. The ground became permafrost, a layer of soil that stays frozen for a long time.
Recently, Russian scientists dug out the old burrow and found the plant remains 38 meters (125 feet) below the surface. Back at the lab, the team fed nutrients to tissue from three of the fruits to grow shoots. Then the scientists transferred the shoots to pots filled with soil. The plants produced seeds that could be used to grow even more of them.
The ice-age plants look similar to a modern relative called the narrow-leafed campion, or Silene stenophylla. But the ancient flowers are slightly different: Their petals are a bit narrower and have a less fringed shape. It’s possible that the regrown plants belong to a different species but are closely related to S. stenophylla, botanist Bengt Oxelman of the University of Göteborg in Sweden told Science News.
It’s important for scientists to know that plant tissues can still be revived after being frozen for a long time. That’s because many researchers are trying to preserve the seeds of modern plants by freezing them and then storing them in giant lockers at various spots around the globe. One such endeavor, an underground facility in Norway, is called the Svalbard Global Seed Vault. It stores hundreds of thousands of frozen seeds. If a plant ever goes extinct, scientists could resurrect it by pulling its seeds from the Svalbard or other vaults.
“No one knows how long [frozen seeds] are viable for, but freezing is basically the format for all seed conservation attempts nowadays,” Sarah Sallon told Science News. She is the director of the Louis L. Borick Natural Medicine Research Center at the Hadassah Medical Organization in Jerusalem. It’s a good thing that at least some plants are tough enough to survive the ordeal.
fruit – A seed-containing reproductive organ in a plant.
permafrost – A layer of soil that stays frozen for a long time.
A research team at the University of Wisconsin–Madison has identified a new way to convert ammonia to nitrogen gas through a process that could be a step toward ammonia replacing carbon-based fuels. The discovery of this technique, which uses a metal catalyst and releases—rather than requires—energy, was reported in Nature Chemistry and has received a provisional patent from the Wisconsin Alumni Research Foundation.
The electrochemical conversion of ammonia to dinitrogen in a direct ammonia fuel cell (DAFC) is a necessary technology for the realization of a nitrogen economy. Previous efforts to catalyse this reaction with molecular complexes required the addition of exogenous oxidizing reagents or application of potentials greater than the thermodynamic potential for the oxygen reduction reaction—the cathodic process of a DAFC. We report a stable metal–metal bonded diruthenium complex that spontaneously produces dinitrogen from ammonia under ambient conditions. The resulting reduced diruthenium material can be reoxidized with oxygen for subsequent reactions with ammonia, demonstrating its ability to spontaneously promote both half-reactions necessary for a DAFC.—Trenerry et al.
The scientists found that adding ammonia to a metal catalyst containing ruthenium spontaneously produced nitrogen; no added energy was required. This process can be harnessed to produce electricity, with protons and nitrogen gas as byproducts. In addition, the metal complex can be recycled through exposure to oxygen and used repeatedly.
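For reference, the two half-reactions at play in such a direct ammonia fuel cell (standard electrochemistry, not spelled out in the article) would be the oxidation of ammonia at the anode and the reduction of oxygen at the cathode:

$$2\,\mathrm{NH_3} \rightarrow \mathrm{N_2} + 6\,\mathrm{H^+} + 6\,e^-$$
$$\mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O}$$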
We figured out that, not only are we making nitrogen, we are making it under conditions that are completely unprecedented. To be able to complete the ammonia-to-nitrogen reaction under ambient conditions—and get energy—is a pretty big deal.—John Berry, corresponding author
Ammonia has been burned as a fuel source for many years. During World War II, it was used in automobiles, and scientists today are considering ways to burn it in engines as a replacement for gasoline, particularly in the maritime industry. However, burning ammonia releases toxic nitrogen oxide gases.
The new reaction avoids those toxic byproducts. If the reaction were housed in a fuel cell where ammonia and ruthenium react at an electrode surface, it could cleanly produce electricity without the need for a catalytic converter.
For a fuel cell, we want an electrical output, not input. We discovered chemical compounds that catalyze the conversion of ammonia to nitrogen at room temperature, without any applied voltage or added chemicals. This is the first process, as far as we know, to do that.—Christian Wallen, co-author
We have an established infrastructure for distribution of ammonia, which is already mass produced from nitrogen and hydrogen in the Haber-Bosch process. This technology could enable a carbon-free fuel economy, but it’s one half of the puzzle. One of the drawbacks of ammonia synthesis is that the hydrogen we use to make ammonia comes from natural gas and fossil fuels.—Michael Trenerry, lead author
This is changing, however, as ammonia producers attempt to produce "green" ammonia, in which the hydrogen fed to the Haber-Bosch process is supplied by carbon-neutral water electrolysis rather than by natural gas.
As the ammonia synthesis challenges are met, according to Berry, there will be many benefits to using ammonia as a common energy source or fuel. It is compressible, like propane, and easy to transport and store. Though some ammonia fuel cells already exist, they, unlike this new process, require added energy, for example to first split the ammonia into nitrogen and hydrogen.
The group’s next steps include figuring out how to engineer a fuel cell that takes advantage of the new discovery and considering environmentally friendly ways to create the needed starting materials.
This work was supported by the US Department of Energy (DOE).
Trenerry, M.J., Wallen, C.M., Brown, T.R. et al. "Spontaneous N2 formation by a diruthenium complex enables electrocatalytic and aerobic oxidation of ammonia." Nat. Chem. doi: 10.1038/s41557-021-00797-w