An artist's illustration of the Azhdarchid pterosaur species Quetzalcoatlus northropi
Credit: Wikimedia Commons/Mark Witton and Darren Naish
Giant toothless pterosaurs with wingspans stretching 39 feet (12 meters) across ruled the skies about 70 million years ago, and new research suggests that these ancient flying creatures once had a worldwide presence and likely played an important role in Late Cretaceous ecosystems.
Despite their formidable size, the pterosaurs in the Azhdarchidae family had no teeth. The new research suggests they replaced their toothed relatives as the dominant species when high levels of carbon dioxide killed off important microscopic marine creatures, leading to a mass extinction about 90 million years ago.
"This shift in dominance from toothed to toothless pterodactyloids apparently reflects some fundamental changes in Cretaceous ecosystems, which we still poorly understand," Alexander Averianov, from the Russian Academy of Sciences, wrote in a new study of this type of pterosaur. [Photos of Pterosaurs: Flight in the Age of Dinosaurs]
Fossil records show that pterosaurs were likely the first airborne vertebrates, taking to the skies around 220 million years ago. Some were so large they likely had to get a running start before taking off and had a hard time landing, according to research presented at the 2012 Geological Society of America meeting. The name Azhdarchidae comes from the Persian word "aždarha," which means dragon. These toothless creatures lived during the Late Cretaceous Period, about 70 million years ago.
Scientists know little about pterosaurs because their fossil record is largely incomplete. Pterosaur bones are more fragile than those of dinosaurs, and few have survived. Most well-preserved Azhdarchidae pterosaur fossils come from exceptional soft-sediment deposits called Konservat-Lagerstätten. These kinds of deposits are rare for the Late Cretaceous, so paleontologists have a hard time piecing together the pterosaur lineage.
"Azhdarchidae currently represents a real nightmare for pterosaur taxonomists," Averianov wrote in the paper.
In a 2008 review of Azhdarchidae, scientists examined 32 bones; Averianov examined 54 known Azhdarchidae fossils: 51 bones and three fossilized tracks. The giant reptiles likely lived in a variety of environments, but after closely examining the sediments in which the fossils were discovered, Averianov found that most of the toothless pterosaurs probably lived near lakes and rivers and along coastlines.
About 13 percent of the pterosaur fossils were found in lake sediments, 17 percent in river sediments, 17 percent in coastal-plain sediments, 18 percent in estuary sediments, and 35 percent in marine sediments.
Most Azhdarchidae species are only defined based on a few fragmented bones. The more complete skeletons scientists have discovered are not very well preserved. This lack of fossils led researchers to create an "inflated" number of pterosaur species, according to Averianov. After reviewing the taxonomy, Averianov found that paleontologists created separate species of Azhdarchidae based on sparse fossil evidence and may have misclassified some of the bone fragments.
In an effort to learn more about the evolution of pterosaurs, scientists created an online database of fossils called PteroTerra, which maps the distribution of these ancient creatures using Google Earth.
The new taxonomy research was published Aug. 11 in the journal ZooKeys. |
Digital Circuits/Number Representations
From Wikibooks, open books for an open world
Decimal Numbers

A decimal number system is composed of the ten digits 0 through 9. These are base-10 numbers: in the decimal number system, each digit position carries a weight that is a power of 10.
For example, take the number 56: the digit 5 has a weight of 10 (5×10=50) and the digit 6 has a weight of 1 (6×1=6). Adding these together gives 50+6=56.
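To make the positional-weight idea concrete, here is a minimal Python sketch (an addition to this page, not part of the original) that expands a decimal numeral into its weighted digits:

```python
def expand_decimal(n: int) -> str:
    """Write n as a sum of digit-times-power-of-10 terms, e.g. 56 -> 5*10 + 6*1."""
    digits = str(n)
    terms = [f"{d}*{10 ** (len(digits) - i - 1)}" for i, d in enumerate(digits)]
    return " + ".join(terms)

print(expand_decimal(56))   # 5*10 + 6*1
print(expand_decimal(407))  # 4*100 + 0*10 + 7*1
```

Running it shows each digit paired with its power-of-10 weight, exactly as in the 56 example above. |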
NOVA scienceNOW: Profile: James McLurkin
Many of your students have never experienced life without computers. Help
them develop a clearer understanding of their personal, and society's,
dependence on computers. Have them brainstorm a list of the ways computers are
used today. Ask them: What roles do computers play in our lives? (Some examples
include personal computers that help us write research papers or e-mail,
calculators that make number crunching quick and easy, and computer chips that
help our cars, microwave ovens, VCRs, cell phones, and other appliances run.)
How would our lives change if computers did not exist?
Then, have students brainstorm a list of city and state agencies with back-up
systems that allow them to continue to provide critical services in the event
of a power or computer failure (airlines, hospitals, nuclear power plants, fire departments, and so on).
Finally, you might also have students interview older caregivers or friends,
asking how they perform tasks for which we now use computers.
It used to be that Christmas tree light strings were wired bulb to bulb. If
any bulb burned out, the whole string went dark. The more bulbs in the string,
the more likely the string would fail because the additional bulbs meant more
chances of failure. One of the first electronic computers was ENIAC, built for
the U.S. Army and first operated in 1945. Like the old Christmas tree
lights, ENIAC's roughly 18,000 vacuum tubes were a constant source of failure. Technicians
continuously circulated among its banks to change burned-out tubes as it
operated. Help students understand how parallel and series circuits work.
Sketch Drawing 1 on the board. Have students trace the flow of electricity from
one blade of the plug through all four light bulbs and back to the other blade
of the plug. Next, erase one of the bulbs and replace it with a large X (see
Drawing 1a). Ask whether the other bulbs would be able to light. (No.) Ask why
all bulbs would be dark. (Pathway is broken.) Explain that this is a series
circuit. Ask students to theorize why early versions of ENIAC would only run
for about 20 minutes before a failure. (Many vacuum tubes create many
opportunities for the pathway to break.)
Sketch Drawing 2 next to Drawing 1. Explain that each bulb
has its own plug. Now, erase one of the bulbs and replace it with a large X (see
Drawing 2a). Ask whether the other bulbs would be able to light. (Yes.) Ask why
the other bulbs would light. (They are on separate pathways.)
Ask students to design a circuit that has only one plug, but all bulbs have a
separate pathway to that plug. Have students share their designs.
Sketch Drawing 3 after students have had a
chance to try the challenge.
Explain that Drawing 3 shows a parallel circuit. Ask students which circuit,
series or parallel, is most likely to stay lit if a bulb burns out.
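For teachers who want a quantitative extension, the burnout intuition can be simulated. Here is a minimal Python sketch (an addition to this guide; the 5 percent burnout chance is a hypothetical figure) comparing a four-bulb series string with a four-bulb parallel circuit:

```python
import random

P_BURNOUT = 0.05   # assumed chance that any single bulb burns out
BULBS = 4
TRIALS = 10_000

series_lit = 0
parallel_has_light = 0
for _ in range(TRIALS):
    burned = [random.random() < P_BURNOUT for _ in range(BULBS)]
    if not any(burned):        # series: one failure darkens the whole string
        series_lit += 1
    if not all(burned):        # parallel: any surviving bulb still lights
        parallel_has_light += 1

print(f"series string fully lit:    {series_lit / TRIALS:.1%}")
print(f"parallel circuit has light: {parallel_has_light / TRIALS:.1%}")
```

With four bulbs, the series string goes completely dark in roughly one trial in five, while the parallel circuit almost always keeps at least one bulb lit; adding more bulbs widens the gap, which is ENIAC's problem in miniature.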
Explain that James McLurkin designs computers that are made of many small
computers working together. Ask whether McLurkin's computer robots are more
like a series or a parallel circuit. (Parallel: Many robots represent many
pathways to accomplish a task; when one fails, the others continue the task.)
Today, computers seem to effortlessly perform very complicated tasks. These
tasks are actually based on thousands or even millions of simple step-by-step
instructions painstakingly written by computer programmers. A single bad or
missing instruction reveals that computers are really just mindless devices.
List the following common tasks on the board: making a peanut butter and jelly
sandwich, making chocolate milk from syrup, and brushing your teeth. Divide
your class into four-member teams. Have each team choose one of the tasks.
Explain that each student should write a set of instructions to successfully
complete the chosen task. To simulate programming commands, tell students to
write brief but clear instructions with only one action per line. Tell the
teams that when they finish, each writer should read his or her instructions
one step at a time while another team member acts them out. The remaining team
members should look for any missing instructions or instructions that can be
misinterpreted. For example, "Put the peanut butter on the bread" could lead to
a jar sitting on an unwrapped loaf of bread. Ask teams to share some of the
hilarious programming errors that they detected. In what ways is programming
easy and difficult at the same time? How might you keep track of the
instructions in a program that contained 1,000 lines?
James McLurkin says the first rule about robots is that they are "profoundly
stupid." They must be carefully and painstakingly programmed to be successful.
His robots were unable to complete a music demonstration because of a
programming error. Ask students to give examples of robots they are familiar
with from books, television, and movies. How do McLurkin's real-world robots
compare with these fictional robots?
McLurkin is very careful and schedules every detail of his activities on
his computer, but it doesn't always help him stay on schedule. Why does this
happen? Have students create flowcharts (or a step-by-step pictorial
representation) of their day. First, discuss different ways students can create
a flowchart. (It could be simply a series of boxes containing times and tasks
connected by arrows.) Ask them to make a flowchart that includes everything
they think they will do the next day. The flowchart should be fairly detailed.
As they go through the day, have them keep a timed log of what they
actually do. Then, have them compare the real-life log to their
flowchart. Were they behind? Were they ahead? On schedule? Was there duplicate
effort or unnecessary tasks? What adjustments do they need to make in their
schedules to make their days more efficient and the flowchart more accurate?
Make a new flowchart for the following day and test it out. Did their
efficiency increase? What was it like to have a flowchart for their life? How,
if at all, did it change their life?
This activity demonstrates how a large number of students (computers) can be
programmed so that simple arithmetic operations can be carried out without
anyone coordinating the process. (You may want to review the Data Flow Diagram
before doing the activity with students in order to see how information
travels between groups of students in the activity.)
Put students into the following four groups:
- The Result
- First Number
- Second Number
- Operation
Tell the students that you will give them a copy of their group's
written instructions to review. You will then say "RUN," at which point they
should stand up and execute the written instructions. After their group has
performed its task, they should continue to display the results so other groups
can see them.
To "program" the groups, copy the Instructions for Groups
(PDF or HTML) handout. Cut up each group's
instructions and distribute them to the appropriate group.
Review each group's instructions with them and ask them to demonstrate their task.
Give the First and Second Number Groups a number and the Operation
Group an operation (for example, 14 - 8 = 6). Remember that the largest
number that can be displayed by any group is twice its membership.
Explain that after this activity begins, students are not to
communicate with each other. They are to carefully follow all of the
programming (instructions), observe, and react.
Start the calculation by saying "RUN." Assist the first attempt, as needed.
Once the groups have completed their programming, count the Result
Group hands and share the original operation and the results. Repeat with a new
arithmetic operation, as desired.
After the first run, you can test the resiliency of your distributed computer
by asking several students to sit down during a run. Does the computer adapt
and correct for these losses? Ask students how James McLurkin's small computers
are like this distributed class computer. How would using 24 distributed
computer robots be more effective than a single, very smart robot mapping a cave system
or looking for a lost child?
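As a rough quantitative companion to the sit-down test, here is a minimal Python sketch (my illustration, not part of the NOVA guide) of the addition version of the class computer, where each standing student displays one unit of a number and some randomly "fail":

```python
import random

def distributed_sum(a: int, b: int, dropout: float = 0.0) -> int:
    """Each student holds one unit of a number; the displayed result is
    whatever the students still standing show (assumed simple model)."""
    students = [1] * (a + b)          # one student per unit of a and b
    survivors = [s for s in students if random.random() >= dropout]
    return len(survivors)

random.seed(1)
print(distributed_sum(14, 8))               # no failures: exact sum, 22
print(distributed_sum(14, 8, dropout=0.2))  # some sit down: close, not exact
```

When dropout is zero the class reproduces the exact answer; with failures the displayed total degrades gradually instead of collapsing, which is the resiliency the sit-down test demonstrates. |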
Digital computer, any of a class of devices capable of solving problems by processing information in discrete form. It operates on data, including magnitudes, letters, and symbols, that are expressed in binary code—i.e., using only the two digits 0 and 1. By counting, comparing, and manipulating these digits or their combinations according to a set of instructions held in its memory, a digital computer can perform such tasks as controlling industrial processes and regulating the operations of machines; analyzing and organizing vast amounts of business data; and simulating the behaviour of dynamic systems (e.g., global weather patterns and chemical reactions) in scientific research.
A brief treatment of digital computers follows. For full treatment, see computer science: Basic computer components.
A typical digital computer system has four basic functional elements: (1) input-output equipment, (2) main memory, (3) control unit, and (4) arithmetic-logic unit. Any of a number of devices is used to enter data and program instructions into a computer and to gain access to the results of the processing operation. Common input devices include keyboards and optical scanners; output devices include printers and monitors. The information received by a computer from its input unit is stored in the main memory or, if not for immediate use, in an auxiliary storage device. The control unit selects and calls up instructions from the memory in appropriate sequence and relays the proper commands to the appropriate unit. It also synchronizes the varied operating speeds of the input and output devices to that of the arithmetic-logic unit (ALU) so as to ensure the proper movement of data through the entire computer system. The ALU performs the arithmetic and logic algorithms selected to process the incoming data at extremely high speeds—in many cases in nanoseconds (billionths of a second). The main memory, control unit, and ALU together make up the central processing unit (CPU) of most digital computer systems, while the input-output devices and auxiliary storage units constitute peripheral equipment.
Development of the digital computer
Blaise Pascal of France and Gottfried Wilhelm Leibniz of Germany invented mechanical digital calculating machines during the 17th century. The English inventor Charles Babbage, however, is generally credited with having conceived the first automatic digital computer. During the 1830s Babbage devised his so-called Analytical Engine, a mechanical device designed to combine basic arithmetic operations with decisions based on its own computations. Babbage’s plans embodied most of the fundamental elements of the modern digital computer. For example, they called for sequential control—i.e., program control that included branching, looping, and both arithmetic and storage units with automatic printout. Babbage’s device, however, was never completed and was forgotten until his writings were rediscovered over a century later.
Of great importance in the evolution of the digital computer was the work of the English mathematician and logician George Boole. In various essays written during the mid-1800s, Boole discussed the analogy between the symbols of algebra and those of logic as used to represent logical forms and syllogisms. His formalism, operating on only 0 and 1, became the basis of what is now called Boolean algebra, on which computer switching theory and procedures are grounded.
John V. Atanasoff, an American mathematician and physicist, is credited with building the first electronic digital computer, which he constructed from 1939 to 1942 with the assistance of his graduate student Clifford E. Berry. Konrad Zuse, a German engineer acting in virtual isolation from developments elsewhere, completed construction in 1941 of the first operational program-controlled calculating machine (Z3). In 1944 Howard Aiken and a group of engineers at International Business Machines (IBM) Corporation completed work on the Harvard Mark I, a machine whose data-processing operations were controlled primarily by electric relays (switching devices).
Since the development of the Harvard Mark I, the digital computer has evolved at a rapid pace. The succession of advances in computer equipment, principally in logic circuitry, is often divided into generations, with each generation comprising a group of machines that share a common technology.
In 1946 J. Presper Eckert and John W. Mauchly, both of the University of Pennsylvania, constructed ENIAC (an acronym for electronic numerical integrator and computer), a digital machine and the first general-purpose, electronic computer. Its computing features were derived from Atanasoff’s machine; both computers included vacuum tubes instead of relays as their active logic elements, a feature that resulted in a significant increase in operating speed. The concept of a stored-program computer was introduced in the mid-1940s, and the idea of storing instruction codes as well as data in an electrically alterable memory was implemented in EDVAC (electronic discrete variable automatic computer).
The second computer generation began in the late 1950s, when digital machines using transistors became commercially available. Although this type of semiconductor device had been invented in 1948, more than 10 years of developmental work was needed to render it a viable alternative to the vacuum tube. The small size of the transistor, its greater reliability, and its relatively low power consumption made it vastly superior to the tube. Its use in computer circuitry permitted the manufacture of digital systems that were considerably more efficient, smaller, and faster than their first-generation ancestors.
The late 1960s and ’70s witnessed further dramatic advances in computer hardware. The first was the fabrication of the integrated circuit, a solid-state device containing hundreds of transistors, diodes, and resistors on a tiny silicon chip. This microcircuit made possible the production of mainframe (large-scale) computers of higher operating speeds, capacity, and reliability at significantly lower cost. Another type of third-generation computer that developed as a result of microelectronics was the minicomputer, a machine appreciably smaller than the standard mainframe but powerful enough to control the instruments of an entire scientific laboratory.
The development of large-scale integration (LSI) enabled hardware manufacturers to pack thousands of transistors and other related components on a single silicon chip about the size of a baby’s fingernail. Such microcircuitry yielded two devices that revolutionized computer technology. The first of these was the microprocessor, which is an integrated circuit that contains all the arithmetic, logic, and control circuitry of a central processing unit. Its production resulted in the development of microcomputers, systems no larger than portable television sets yet with substantial computing power. The other important device to emerge from LSI circuitry was the semiconductor memory. Consisting of only a few chips, this compact storage device is well suited for use in minicomputers and microcomputers. Moreover, it has found use in an increasing number of mainframes, particularly those designed for high-speed applications, because of its fast-access speed and large storage capacity. Such compact electronics led in the late 1970s to the development of the personal computer, a digital computer small and inexpensive enough to be used by ordinary consumers.
By the beginning of the 1980s integrated circuitry had advanced to very large-scale integration (VLSI). This design and manufacturing technology greatly increased the circuit density of microprocessor, memory, and support chips—i.e., those that serve to interface microprocessors with input-output devices. By the 1990s some VLSI circuits contained more than 3 million transistors on a silicon chip less than 0.3 square inch (2 square cm) in area.
The digital computers of the 1980s and ’90s employing LSI and VLSI technologies are frequently referred to as fourth-generation systems. Many of the microcomputers produced during the 1980s were equipped with a single chip on which circuits for processor, memory, and interface functions were integrated. (See also supercomputer.)
The use of personal computers grew through the 1980s and ’90s. The spread of the World Wide Web in the 1990s brought millions of users onto the Internet, the worldwide computer network, and by 2015 about three billion people, half the world’s population, had Internet access. Computers became smaller and faster and were ubiquitous in the early 21st century in smartphones and later tablet computers. |
The shoulder is composed of the proximal humerus, clavicle, and scapula. The joints of the shoulder include the sternoclavicular (SC), the acromioclavicular (AC), and the glenohumeral. There is also an articulation between the scapula and the thorax. Figures 16–1, 16–2, 16–3 provide the essential anatomy, both osseous and ligamentous, that must be understood to comprehend the disorders involving the shoulder. Superficial to the ligaments are the muscles that support the shoulder and provide for its global range of motion. The rotator cuff surrounds the glenohumeral joint and is composed of the supraspinatus, infraspinatus and teres minor muscles (insert on the greater tuberosity), and the subscapularis muscle (inserts on the lesser tuberosity) (Fig. 16–4). Superficial to these muscles is the deltoid, which functions as an abductor of the shoulder.
The essential anatomy of the shoulder.
The ligaments around the shoulder.
The ligamentous attachments of the clavicle to the sternum medially and the acromion laterally.
The clavicle is an oblong bone, the middle portion of which is tubular and the distal portion, flattened. It is anchored to the scapula laterally by the AC and the coracoclavicular (CC) ligaments. The SC and the costoclavicular ligaments anchor the clavicle medially (Fig. 16–3). The clavicle serves as points of attachment for both the sternocleidomastoid and the subclavius muscles. The ligaments and the muscles act in conjunction to anchor the clavicle and, thus, maintain the width of the shoulder and serve as the attachment point of the shoulder to the axial skeleton.
The scapula consists of the body, spine, glenoid, acromion, and coracoid process. The bone is covered with thick muscles over its entire body and spine. On the posterior surface, the supraspinatus muscle covers the fossa superior to the spine, whereas the infraspinatus and teres minor muscles cover the fossa below the spine. The anterior surface of the scapula is separated from the rib cage by the subscapularis muscle. These muscles offer protection and support for the scapula. The scapula is connected to the axial skeleton only by way of the AC joint. The remainder of the scapular support is from the thick investing musculature surrounding its surface.
When examining the shoulder, start by assessing neurovascular structures. Neurovascular injuries frequently accompany traumatic shoulder injuries. The structures in closest proximity to the shoulder include the brachial plexus, axillary nerve, and axillary artery (Fig. 16–5).
The course of the important neurovascular structures ... |
Sensation is the process by which our senses gather information and send it to the brain. A large amount of information is being sensed at any one time, such as room temperature, the brightness of the lights, someone talking, a distant train, or the smell of perfume. With all this information coming into our senses, the majority of our world never gets recognized. We don't notice radio waves, x-rays, or the microscopic parasites crawling on our skin. We don't sense all the odors around us or taste every individual spice in our gourmet dinner. We sense only what we are able to, since we don't have the sense of smell of a bloodhound or the sight of a hawk; our thresholds are different from those of these animals and often even from each other's.
The absolute threshold is the point at which something becomes noticeable to our senses. It is the softest sound we can hear or the slightest touch we can feel. Anything less than this goes unnoticed. The absolute threshold is therefore the point at which a stimulus goes from undetectable to detectable to our senses.
Once a stimulus becomes detectable to us, how do we recognize if it changes? When we notice the sound of the radio in the other room, how do we notice when it becomes louder? It's conceivable that someone could be turning it up so slightly that the difference is undetectable. The difference threshold is the amount of change needed for us to recognize that a change has occurred. This change is referred to as the Just Noticeable Difference.
This difference is not absolute, however. Imagine holding a five-pound weight and one pound was added. Most of us would notice this difference. But what if we were holding a fifty-pound weight? Would we notice if another pound were added? The reason many of us would not is that the change required to detect a difference has to represent a constant proportion of the original stimulus. In the first scenario, one pound would increase the weight by 20 percent; in the second, that same pound would add only an additional 2 percent. This principle, named after its original observer, is referred to as Weber's Law.
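Weber's Law is often written as ΔI/I = k, where I is the baseline intensity and k is a constant fraction. As a quick numeric illustration, here is a minimal Python sketch (added here; the 2 percent Weber fraction for lifted weights is an assumed, commonly cited textbook value):

```python
WEBER_FRACTION = 0.02  # assumed k for lifted weights; differs by sense

def just_noticeable_difference(intensity: float, k: float = WEBER_FRACTION) -> float:
    """Weber's Law: delta_I = k * I, so the JND grows with the baseline."""
    return k * intensity

for pounds in (5, 50, 500):
    jnd = just_noticeable_difference(pounds)
    print(f"{pounds} lb baseline -> need about {jnd:.1f} lb more to notice")
```

The same one-pound change that is obvious against 5 pounds falls near or below the threshold against 50, which is exactly the weight example above.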
Signal Detection Theory
Have you ever been in a crowded room with lots of people talking? Situations like that can make it difficult to focus on any particular stimulus, like the conversation we are having with a friend. We are often faced with the daunting task of focusing our attention on certain things while at the same time attempting to ignore the flood of information entering our senses. When we do this, we are making a determination as to what is important to sense and what is background noise. This concept is referred to as signal detection, because we attempt to detect what we want to focus on and ignore or minimize everything else.
The last concept refers to stimuli that have become redundant or remain unchanged for an extended period of time. Ever wonder why we notice certain smells or sounds right away and then, after a while, they fade into the background? Once we adapt to the perfume or the ticking of the clock, we stop recognizing it. This process of becoming less sensitive to an unchanging stimulus is referred to as sensory adaptation. After all, if it doesn't change, why do we need to constantly sense it? |
From: NASA HQ
Posted: Tuesday, July 14, 2015
Today, our nation is poised to reach a new milestone in exploration and discovery. More than 50 years after our first flyby of another planet, and five years after President Obama challenged America's space program to extend humanity's reach in space while strengthening America's leadership here on Earth, the New Horizons spacecraft will reach Pluto, providing the closest view humanity has ever seen of the dwarf planet.
Since its launch in 2006, New Horizons has traveled to the far reaches of the solar system. When New Horizons passes by Pluto at a distance of only 8,000 miles above the icy surface it will be nearly 3 billion miles from home.
In advance of today's rendezvous, New Horizons has sent back the most detailed images and measurements ever taken of Pluto and its moons, revealing significant new insights about the dwarf planet. Recent color images from New Horizons confirmed the hypothesis that, like Mars, Pluto has a reddish color. Scientists also have tantalizing new evidence of surface features on Pluto that they hope to see even more clearly as the spacecraft draws nearer to its destination.
In the days, weeks, and months to come, as New Horizons transmits volumes of data back to Earth about its encounter, there is much more that scientists hope to learn: what do the surfaces of Pluto and its largest moon, Charon, look like and what is their composition? What is Pluto's atmosphere like, and does Charon have an atmosphere of its own?
Credit for this scientific and technological feat is due to the women and men of NASA, the Johns Hopkins University Applied Physics Laboratory, the Southwest Research Institute and other partners in the public, private and academic sectors. Space travel is challenging and risky, and the team of scientists and engineers who designed, built and supported New Horizons over the years should be commended for this extraordinary achievement. New Horizons is the latest in a long line of scientific accomplishments at NASA, including multiple rovers exploring the surface of Mars, the Cassini spacecraft that has revolutionized our understanding of Saturn and the Hubble Space Telescope, which recently celebrated its 25th anniversary. New Horizons may have reached its destination in the outer reaches of the solar system, but the journey of discovery continues at NASA.
NASA's portfolio of scientific exploration includes a broad and robust array of missions and destinations. The James Webb Space Telescope, scheduled to launch in October 2018, will orbit the Sun a million miles from Earth and will reveal new worlds, galaxies and solar systems, enabling a better understanding of our own place in the universe. The 2020s will bring a new rover to the surface of Mars, and a mission to explore Europa, an icy moon of Jupiter, is in development. NASA-led studies of the Earth continue to shed new light on the dynamic and complex interactions that influence the climate, weather and natural hazards people encounter around the world. Ultimately, this journey of discovery will bring American astronauts to a place that has sparked imaginations for generations: the surface of Mars.
Successful completion of this mission to Pluto marks a scientific achievement that only a generation ago would have seemed little more than fantasy. With New Horizons' flyby of Pluto, the United States will have visited every planet and dwarf planet in our solar system, a remarkable accomplishment that no other nation can match. Thanks to American ingenuity and leadership, people around the world have a better understanding of planet Earth, the solar system and the universe.
With that knowledge in hand, the next generation of scientists and engineers can look ahead to new horizons of discovery in the decades to come, and the entire NASA Family can take pride in this wonderful accomplishment.
// end // |
Jews do not celebrate Easter. The holiday is recognized by Christians as an event emphasizing the role of Jesus as the messiah. Since Jews dispute claims that Jesus was the messiah, they do not celebrate Easter; however, the major Jewish holiday of Passover often coincides with Easter for historical reasons.
According to Jewish belief, a messiah is an important personage who is destined to rule all Jews. Throughout Jewish history, there have been numerous claims that one person or another was the messiah; however, mainstream Jewish view asserts that the Messiah has not yet arrived. The supernatural and textual evidence cited by Christians to support the belief that Jesus was the messiah is either irrelevant, fictitious, or a misinterpretation of Jewish beliefs, according to Jewish scholars.
Although Jesus' role as the messiah has been long disputed, it is generally accepted that Jesus was Jewish. The events commemorated by Easter occurred during Passover, which is why the two holidays often coincide in the modern age. |
The circle is split into four equal pieces. The total number of pieces in the whole is called the denominator. That is the number that goes on the bottom of a fraction. Three of the pieces are shaded in. The number of pieces shaded in is called the numerator. That number goes on top of the fraction. When put together, we get the fraction three-fourths.
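For anyone curious how a computer keeps track of the same idea, here is a tiny Python sketch (an addition to this post, not from the original lesson) using the built-in fractions module:

```python
from fractions import Fraction

shaded, total_pieces = 3, 4          # pieces shaded in, pieces in the whole
three_fourths = Fraction(shaded, total_pieces)

print(three_fourths)              # 3/4
print(three_fourths.numerator)    # 3 -- the number on top
print(three_fourths.denominator)  # 4 -- the number on the bottom
```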
In order to help us remember important information about fractions, we created a video. Listen for important vocabulary words such as numerator, denominator, whole, and parts of a whole.
If you’d like to view more math videos, you can check out Student Math Movies, a wiki created by Mr. Avery, Mr. Salsich, and Mrs. Yollis. You can find videos created by students about all different math concepts!
*Did this video help you to understand the difference between the numerator and the denominator?*
*Where might you see an example of a fraction outside of school?* |
In the oracle bone script, it was an ideogrammic compound (會意): 日(“sun”) + 頁(“head; man”) – man under the scorching sun; summer.
Various variants were seen in the bronze inscriptions from the Spring and Autumn period. 止 (“foot”) was often added to the bottom of the man. Additionally, in the form shown above, the 日 was removed and 𦥑 (“two hands”) was added. The seal script inherits its form from this form, with 止 replaced with the related 夊.
Various forms were also seen from the Warring States period. The bamboo and silk script above shows a common form: 日 + 止 + 頁. The “ancient script” (古文) from Shuowen (labelled as the large seal script) has deviated significantly, with 止 becoming the related 足. The top part may be a corruption of 頁.
The current form is simplified from the seal script, with the removal of 𦥑 and the legs from 頁.
Possibly related to 假 (OC *kraːʔ, *kraːs, “great”), 嘏 (OC *kraːʔ, “great”) and 廈 (OC *sraːs, *ɡraːʔ, “big house”) (Wang, 1980). Shi (2000) and Mair (2013) relate this word to Tibetan རྒྱ (rgya, “great; wide; width; size; expanse; China”).
“magnificent colours; variegated”
The sense “variegated” may be of a different origin. Compare Proto-Sino-Tibetan *Krā(H) (“variegated”) (Starostin), whence Tibetan བཀྲ (bkra, “variegated; bright; radiant; splendid”), Tibetan ཁྲ (khra, “many-coloured; variegated; mottled; striped”) and Burmese ကျား (kya:, “variegated; striped; chequered”). Possibly related to 騢 (OC *ɡraː, “horse with mixed red and white colour”). |
distribution function for the number of events that might occur in the next N years is given by the well-known binomial distribution, with expected value p*N, but with some chance for more than this number of events occurring during the next N years, and some chance of less. For example, if p* = 0.15 (a 15 percent chance of an event occurring each year) and N = 20 years, the expected number of events over the 20-year period is 0.15 × 20 = 3 events. We can also calculate the standard deviation for this amount (= [p*(1 − p*)N]^(1/2)), which in this case is calculated to be 1.6 events. All this, however, assumes that we are certain that p* = 0.15. In most homeland security modeling, such certainty will not be possible because the assumptions here do not hold for terrorism events. A more sophisticated analysis is needed to show the implications of our uncertainty in p* in those cases.
A common model used to represent uncertainty in an event occurrence probability, p (e.g., a failure rate for a machine part), is the beta distribution. The beta distribution is characterized by two parameters that are directly related to the mean and standard deviation of the distribution of p; this distribution represents the uncertainty in p (i.e., the true value of p might be p*, but it might be lower than p* or higher than p*). The event outcomes are then said to follow a beta-binomial model, where the “beta” part refers to the uncertainty and the “binomial” part refers to the variability. When the mean value of the beta distribution for p is equal to p*, the mean number of events in N years is the same as that calculated above for the simple binomial equation (with known p = p*). In our example, with mean p = p* = 0.15 and N = 20 years, the expected number of events in the 20-year period is still equal to 3. However, the standard deviation is larger. So, for example, if our uncertainty in p is characterized by a beta distribution with mean = 0.15 and standard deviation = 0.10 (a standard deviation nearly as great or greater than the mean is not uncommon for highly uncertain events such as those considered in homeland security applications), then the standard deviation of the number of events that could occur in the 20-year period is computed to be 2.5. This is 60 percent larger than the value computed above for the binomial case where p is assumed known (standard deviation of number of events in 20 years = 1.6), demonstrating the added uncertainty in future outcomes that can result from uncertainty in event probabilities. This added uncertainty is also illustrated in Figure A-1, comparing the assumed probability distribution functions for the uncertain p (top graph in Figure A-1) and the resulting probability distribution functions for the uncertain number of events occurring in a 20-year period (bottom graph in Figure A-1) for the simple binomial and the beta-binomial models. As indicated, the beta-binomial model results in a greater chance of 0 or 1 event occurring in 20 years, but also a greater chance of 7 or more events occurring, with significant probability up to and including 11 events. In this case, characterizing the uncertainty in the threat estimate is clearly critical when estimating the full uncertainty in future outcomes.
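The figures quoted above can be checked with a short moment-matching calculation. Below is a minimal Python sketch (an illustration added here, not code from the report) that recovers the beta parameters from the stated mean and standard deviation of p and compares the two standard deviations:

```python
import math

p_star, N = 0.15, 20                 # annual event probability, years
print(f"expected events in {N} years: {p_star * N:.1f}")

# Binomial model: p known exactly
sd_binom = math.sqrt(p_star * (1 - p_star) * N)
print(f"binomial sd: {sd_binom:.2f}")                 # ~1.6 events

# Beta-binomial model: p uncertain, beta with mean 0.15 and sd 0.10
mu, sigma = 0.15, 0.10
ab = mu * (1 - mu) / sigma**2 - 1     # alpha + beta, by moment matching
rho = 1.0 / (ab + 1.0)                # overdispersion correlation
sd_bb = math.sqrt(N * mu * (1 - mu) * (1 + (N - 1) * rho))
print(f"beta-binomial sd: {sd_bb:.2f}")               # ~2.5 events
print(f"increase: {sd_bb / sd_binom - 1:.0%}")        # roughly 60% larger
```

The extra variance term (N − 1)ρ is exactly the contribution of uncertainty in p itself; setting σ = 0 collapses the beta-binomial back to the simple binomial case.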
Proper recognition and characterization of both variability and uncertainty is important in all elements of a risk assessment, including effective interpreta- |
What is STEM education? As teaching STEM in the early years becomes more common, many educators may find themselves asking this question. Science, Technology, Engineering, and Math education develops crucial skills in children that they will use the rest of their lives and apply to other areas of learning beyond science and math.
Bubbling Baking Soda
- You will demonstrate how two substances can be mixed to produce a gas. The children will use their senses of sight, hearing, and smell.
For your demonstration:
- Test tube or 8-ounce water bottle
- 2 ounces vinegar
- 1 tablespoon baking soda
- Timer or stopwatch
For each child:
- Egg carton or recycled lids
- Pipette or dropper
- ¼ cup vinegar
- ½ teaspoon baking soda
- Red, yellow, and blue food coloring
- Explain that vinegar at the bottom of a test tube will be mixed with baking soda inside a balloon. Have the children predict what will happen.
- For demonstration purposes, place 2 ounces of vinegar at the bottom of the test tube (or water bottle). Use a funnel to put 1 tablespoon of baking soda into the balloon.
- Carefully stretch the opening of the balloon over the mouth of the test tube, and hold it tight at that spot with one hand. With your other hand, hold up the other end of the balloon, and shake the baking soda into the test tube.
- Ask the children to observe what happens to the balloon. When the two ingredients are mixed, fizzing will take place at the bottom of the test tube, and the balloon will begin to inflate.
- Note that you created a gas, which is causing the balloon to inflate. Emphasize that the gas is a new matter that was produced by mixing the vinegar and baking soda together. Explain that because you cannot get either ingredient back, a chemical change has taken place.
- Move around the room, and let the children feel the balloon.
- Following the demonstration, put about ½ teaspoon of baking soda in a section of each child’s egg carton or in a lid. In separate sections, provide about a teaspoon of vinegar in each of the three colors (red, yellow, and blue) for each child.
- Allow the children to fill their pipettes with colored vinegar and drop the vinegar onto the baking soda in their containers. Ask them to listen and watch as the baking soda fizzes inside the trays.
- Let the children experiment with dropping different colors of vinegar on the same batches of baking soda. They will see that the combination not only fizzes, but it also changes colors. They find this visible change very exciting!
Pouring and More: Funnels and Test Tubes
- You will introduce the funnel as a tool scientists and other individuals use to prevent spills. Once children know how to use a funnel, you can add it to the sand-and-water table. You will also introduce test tubes as tools for holding liquids. The children will explore the proper use of test tubes and the use of funnels to make pouring easier.
- Food coloring (optional)
- Variety of containers
- Cups in a variety of sizes
- Small portion cups (available at dollar, party, and restaurant-supply stores)
For each child:
- Test-tube rack and plastic test tubes (Note: you can use 8-ounce water bottles as an alternative.)
- Plastic container large enough to hold test-tube rack
- Plastic container filled with water
- Give each child a test-tube rack. Instruct the children to place their test tubes inside the plastic containers to be used as catch basins. Each child will also need another container filled with water and a small cup. If you use colored water, the children will find the activity more interesting and easier to see.
- Demonstrate by putting a funnel into the first test tube in a rack. Hold the test tube in the air, and show the children the top and bottom of the test tube or little water bottle. Using a cup, scoop some water from the plastic water-filled container. Ask the children to tell you when the water poured into the funnel reaches the top of the test tube or bottle. Slowly begin pouring, and stop when the children tell you to stop.
- When the first test tube is filled, demonstrate moving the funnel from one test tube to the next. Practice filling the other test tubes and letting the children tell you when the water reaches the top of each one.
- Give the children a chance to practice pouring the water into their own test tubes or water bottles. It will be easier for smaller children to do this standing up.
- Once the children know how to use funnels, you can leave funnels in the sand-and-water or sensory tables along with various containers for practice and free exploration.
Find the Long-Lost Animals
- The children will learn about the role of a paleontologist, a scientist who studies fossils, the ancient remains of animals and plants. They will act as pretend paleontologists as they explore fossils with different plastic animals. The children will also sort by attributes and learn about characteristics of different types of dinosaurs.
- Contact paper or plastic page protectors
For each child:
- Container (16-ounce size works well)
- Small plastic animals, such as dinosaurs, snakes, and insects
- Tweezers (optional)
- Take digital photos of the toy dinosaurs, snakes, and insects. Make charts with the specimen photos, and cover the chart with clear contact paper or place it in a plastic page protector. Tape it to the underside of each tray to use for sorting at the end of the activity.
- Before working with the children, fill the containers with Playdough. Press the plastic animals inside each container.
- Give each child a tray with pretend fossils. Have the children dump the contents of the containers onto their trays. Demonstrate how to pull the Playdough apart and search for plastic dinos and other hidden critters. You can also give the children plastic tweezers to pick out their findings. Using tweezers as well as squeezing and ripping the Playdough apart will also help the children with fine-motor skills.
- Once the children have extracted all of the hidden treasures, instruct them to line up their specimens. Give each child a chance to point to each item and count how many he has found. This is a great way to practice one-to-one correspondence.
- After the plastic toys have been extracted and counted, instruct the children to flip over their trays to reveal the charts of photos of the fossils. Have the children match up their items to the items in the photos. You can also discuss the names of the dinosaurs and other items as they are printed alongside the photos. Younger children can identify the first letter of each.
Look Out! Volcano Erupting
- While creating a simulated volcano that erupts, the children will explore the concepts of chemical change, creating a gas, and how volcanic eruptions occur.
For each pair of children:
- Paper bowl
- 4-ounce paper cup
- 2 tablespoons baking soda
- 2 ounces vinegar
- Red food coloring
- Use food coloring to turn the vinegar red, and give each pair of children a 2-ounce portion, along with the other materials and ingredients.
- For each pair, place the paper cup into the small disposable bowl, and put the baking soda in the bottom of the cup. Alternatively, give the children portion cups holding the baking soda, and allow them to pour the baking soda into their drinking cups.
- Tell the children to place their funnels inside their cups and quickly pour the vinegar into the top of the funnel.
- Because the children get so excited watching the eruptions, have one pair of students at a time do their demonstration. The children will not get bored watching this over and over again!
Find more great hands-on STEM activities in Hands-On Science and Math! |
Writing the Letter d
Spot and write the lowercase letter d on this worksheet! Read the instructions aloud to your child and assist him in following the instructions. Show him the proper way to hold his pencil as he traces and writes the letter d.
Check out the rest of this alphabet series: |
The Domestic Turkey is a large poultry bird, one of the two species in the genus Meleagris and the same species as the wild turkey. Turkey domestication originally occurred in central Mesoamerica at least 2,000 years ago. Recent research suggests a second domestication event in the Southwestern United States between 200 BC and AD 500. All of the commercial domestic turkey varieties today descend from the domestic turkey raised in central Mexico that was subsequently imported into Europe by the Spanish in the 16th century. The fleshy protuberance atop the beak is the snood, and the one attached to the underside of the beak is known as a wattle. |
2001 Mars Odyssey is a robotic spacecraft orbiting the planet Mars. The project was developed by NASA, and contracted out to Lockheed Martin, with an expected cost for the entire mission of US$297 million. Its mission is to use spectrometers and a thermal imager to detect evidence of past or present water and ice, as well as study the planet’s geology and radiation environment.
It is hoped that the data Odyssey obtains will help answer the question of whether life has ever existed on Mars and create a risk-assessment of the radiation future astronauts on Mars might experience. It also acts as a relay for communications between the Mars Exploration Rovers, Mars Science Laboratory, and the Phoenix lander to Earth. The mission was named as a tribute to Arthur C. Clarke, evoking the name of 2001: A Space Odyssey.
Odyssey was launched 7 April 2001 on a Delta II rocket from Cape Canaveral Air Force Station, and reached Mars orbit on 24 October 2001, at 02:30 UTC. It is currently in a polar orbit around Mars with a semi-major axis of about 3,800 km or 2,400 miles, roughly 400 km above the surface.
On 15 December 2010 it broke the record for the longest-serving spacecraft at Mars, with 3,340 days of operation, claiming the title from NASA’s Mars Global Surveyor. It currently holds the record for the longest-surviving continually active spacecraft in orbit around a planet other than Earth, ahead of the European Space Agency’s Mars Express, at 14 years, 5 months, and 13 days.
Mars Odyssey mapped the distribution of water just below the surface. The ground truth for its measurements came on 31 July 2008, when NASA announced that the Phoenix lander had confirmed the presence of water on Mars, as predicted in 2002 based on data from the Odyssey orbiter. The science team is trying to determine whether the water ice ever thaws enough to be available for microscopic life, and whether carbon-containing chemicals and other raw materials for life are present.
Mars Odyssey’s THEMIS instrument was used to help select a landing site for the Mars Science Laboratory (MSL). Several days before MSL’s landing in August 2012, Odyssey’s orbit was altered to ensure that it would be able to capture signals from the rover during its first few minutes on the Martian surface. Odyssey now acts as a relay for UHF radio signals from the MSL rover Curiosity. Because Odyssey is in a Sun-synchronous orbit, it consistently passes over Curiosity’s location at the same two times every day, allowing for convenient scheduling of contact with Earth. |
Indicator Report - Air Quality: Ozone
Why Is This Important?
Ozone can cause several adverse health effects in anyone, but especially in sensitive populations such as children, older adults, people with preexisting lung diseases such as asthma, and people who are physically active outdoors. Some of these health problems include painful breathing, chest tightness, headache, coughing, increased asthma symptoms, lung inflammation, and temporary reduction in lung capacity. Over time, ozone is associated with chronic lung problems and respiratory infections. Adverse health effects from ozone are more likely to occur when ozone levels exceed the Environmental Protection Agency's standard, but are possible when ozone levels are below the standard, especially in sensitive populations.
Ground-level ozone, not to be confused with the atmosphere's protective ozone layer, is created by reactions between environmental pollutants and light and heat. Ozone is the main component of smog and is dangerous to health and the environment. The creation of ozone is facilitated by warm weather and sunshine; therefore, ozone levels are usually higher in the summer and in the mid-afternoon.
Climate change may play a part in the creation of more ground-level ozone pollution. As temperatures increase, it is expected that the number of high ozone days will increase, since heat accelerates the nitrogen oxide and volatile organic compound reaction (4). Researchers have found that a combination of higher temperatures, sunlight, emissions, and air stagnation events (i.e., inversions) may result in an increase of ozone levels. However, more research is needed to accurately gauge what portion of ozone is actually increasing solely due to climate change.
Data Notes
Averages are calculated using available years, which can vary depending on location.
This map was made using an interval break method called "equal interval" where classes are based on equal-sized sub-ranges according to numeric value.
Data Sources
U.S. Environmental Protection Agency, Air Quality System (AQS).
Definition
Ozone is a naturally occurring component of the earth's atmosphere at ground level and in the upper regions of the atmosphere. While upper atmospheric ozone protects the earth from the sun's harmful rays, ground-level ozone can be detrimental to the health of plants, animals, and human beings.
Molecules of ozone are made up of three oxygen atoms (O3) and are chemically identical in the upper atmosphere and at ground level. The lungs of animals and humans have a thin liquid lining that protects lung tissue from normal amounts of ozone. But sunlight and heat can create new, ground-level ozone molecules from nitrogen oxides and volatile organic chemicals that are found naturally at the earth's surface, as well as in emissions from industrial facilities and electric utilities, motor vehicle exhaust, gasoline vapors, and chemical solvents in urbanized regions. Ozone is a principal component of urban smog and is measured in parts per million (ppm).
The Environmental Protection Agency's ozone standard states that the 8-hour average ozone level should not exceed 0.075 ppm. This level is considered protective for most people and within the normal defensive capacities of the human respiratory system (1, 2, 3).
How We Calculated the Rates
Page Content Updated On 10/30/2013, Published on 11/12/2013 |
Watergate Teacher Resources
Find Watergate educational ideas and activities
Showing 1 - 20 of 151 resources
Students take and defend positions on what conditions contribute to the establishment and maintenance of a constitutional government. They debate whether or not the government should have prosecuted Nixon over the Watergate scandal.
While the break-in at Watergate in the 1970s and the subsequent resignation of President Nixon was surely scandalous, what is more noteworthy is the lasting impact such an event had on the American public. With this engaging video, your class will learn more about the events of the Watergate political scandal, and the resulting increase in journalistic power and distrust for government.
Students explore the Watergate scandal. In this Watergate lesson, students watch a video regarding the scandal and use the Internet to research it as well. Students then interview adults who share memories of the scandal.
Students discuss the primary events of the Watergate crisis. They conduct an interview with a Watergate-era adult and present a summary of their interview.
Students review Watergate Files and the Watergate Trial using Internet sites. They read about the people involved in Watergate. They discuss the events leading up to and after Watergate.
Students examine Watergate and explore how this crisis affected American politics.
Eleventh graders investigate the charges brought against President Nixon. In this 20th century America instructional activity, 11th graders read excerpts from Articles of Impeachment and respond to the provided discussion questions about the Watergate debacle and Nixon's involvement in it.
Learners examine the climate of American politics. In this Watergate lesson, students analyze political cartoons and documents about the Watergate scandal and discuss the scandal's implications. Learners then research other political scandals and determine how they have contributed to the political climate in the nation.
Students research the Watergate crisis. They discover the differences in investigative reporting then and now.
Through learning about the Watergate scandal students can find out how this incident changed how Americans viewed the presidency.
Students explore ideas about journalism ethics as they relate to Watergate and discuss various issues related to an anonymous source being revealed. They write letters to the public editor of The NY Times about credibility and anonymous sources.
Students investigate the Watergate scandal. They compare and contrast the Watergate incident with other White House scandals.
High schoolers analyze selected pieces of art and infer how they reflect a sense of disillusionment and/or cynicism in American society in the aftermath of the Vietnam War and Watergate scandal. Then they identify and place cultural attitudes of recent generations of Americans within a historical context. Finally, students identify how art, literature, and films mirror a distrust, uneasiness, or cynicism in how some Americans view their government and its role.
Connect events of the past to events of today. Budding historians read an eight paragraph passage describing the Watergate scandal. They then connect the Nixon scandal to sex scandals of recent times. There are six critical thinking questions included for use as a writing prompt or as discussion starters.
Pupils analyze the role of independent counsel. In this Bill of Rights lesson, students listen to their instructor present a lecture regarding Watergate, Impeachment, and the role of independent counsel. Pupils respond to discussion questions pertaining to the lecture and participate in an activity.
Students compare Watergate and the Clinton/Lewinsky scandal. In this U.S. Constitution lesson, students define vocabulary terms and read articles regarding the impeachment process. Students respond to questions that require them to compare and contrast the scandalous actions of Clinton and Nixon.
Learners explain how the media portrays certain events and its effects on public opinion of government. They focus on Watergate, the Vietnam War, and the Clinton impeachment. They write essays about skepticism promoted by the media.
How scandalous! Take your class through the more implicating pages of American history with this lesson, which compares Watergate to other White House scandals (Iran-Contra, Teapot Dome, or Whitewater). Then create a timeline of Watergate as well as another of the three given events, and compare and contrast the details of each, such as the nature of the scandal, its illegality, and its impact on the public and the president. The lesson can work for homeschool as well as whole class.
High schoolers are asked to think about their attitudes toward politicians. They describe the character of Richard Nixon and the attitude of his White House. Students are told about the Watergate scandal. They discuss the effects of the Watergate scandal.
In this Nixon presidency worksheet, students respond to 5 short answer questions about Nixon's foreign policy. Students also define 9 terms relating to Watergate. |
Students will match color words to the sheet with the correct color and trace the sentence. If you want an activity you can use over and over again:
Print color word cards out on cardstock, cut apart, and laminate for durability.
Print out activity sheets on cardstock and laminate for durability.
Have students match the color words to the correct color activity sheet. They can place the color word in the box, and then use wet erase markers to trace the sentence. When they have completed the activity, they can erase the color sheets with wet wipes.
Alternatively, you can print these out on regular paper and make it a cut and paste activity.
You can have children work in groups or individually.
You can make this a whole group activity doing one color at a time, or make it a center activity with as many colors as you would like.
TenMarks teaches you how to find the factors of a whole number.
How to Find Factors of a Number. In this lesson, let's learn how we find factors of a number. What are factors? Anytime we have a number which is a product, we can look at which whole numbers can be multiplied to give you that product. For example, if the product is 10, I can multiply two and five, which are both whole numbers, to give me 10. Two and five are both called factors of 10. Second, the product is always divisible by its factors. For example, 10 can be divided by two to give me five, and 10 can also be divided by five to give me another whole number, two. So, the key things to remember are these: a number can be divided by each one of its factors to give other whole numbers, and two whole numbers that can be multiplied to give us a product are called factors of that product.

Let's use this to find the factors of 20. First, we figure out which two numbers, when multiplied, will give us 20. One multiplied by 20 gives us 20, two multiplied by 10 gives us 20, and four multiplied by five gives us 20. The factors of 20 are one, two, four, five, 10 and 20. A key thing to remember: every number can be divided by one and by itself, so these are definite factors anyway; here we found four other factors as well.

Let's try 17. What numbers, when multiplied by each other, will give us 17? I can only find one pair: one and 17. Seventeen cannot be divided by anything other than itself and one, so the factors of 17 are one and 17. When a number can only be divided by itself and one, that is, when it only has two factors, it's called a prime number.

One other thing to remember: take any number, for example 12. The factors of 12 are one, two, three, four, six and 12. Out of these, two and three are prime numbers. The prime factorization of 12 is 2×2×3; it is written as a product of its prime factors, and 2×2 is 4, 4×3 is 12.
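The procedure in this transcript translates directly into code. Below is a minimal Python sketch (an added illustration, not part of the TenMarks lesson) that finds factors by trial division and uses them to test for primality:

def factors(n):
    # Return every whole-number factor of n by testing each candidate divisor.
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    # A number is prime when its only factors are 1 and itself.
    return n > 1 and factors(n) == [1, n]

print(factors(20))   # [1, 2, 4, 5, 10, 20]
print(factors(17))   # [1, 17]
print(is_prime(17))  # True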
Glossary

There are 4 entries in this glossary.
Renewable Energy: A type of energy that comes from natural sources, such as the wind, sun or water. Some of the most common types of renewable energy include solar energy, hydroelectric power and hydrogen. Renewable energy is a preferred energy source over many types of energy commonly used today because it is virtually unlimited. The environment continues to provide sources of energy like these, ensuring there will always be plenty of resources to go around.
Road Grading: A process used to level land surfaces for paving, restore current roads to their original driving surface or provide necessary drainage for roadways. The procedure for road grading involves the use of heavy machinery, including a motor grader that removes potholes and other road depressions by filling them with appropriate material before paving. Road grading success depends on the amount of moisture on the road base during grading, the weather conditions and the speed of motorists during the process.
Roll-Off Containers: Special containers, often waste receptacles, designed to be rolled on and off of transport vehicles. The containers come in a wide range of shapes and sizes, but are typically too large and heavy to be transported through other means. The containers are carried on the flatbed of the truck, and then lifted and carefully rolled off the trailer when they reach their location. A permit is usually required to have a roll-off container at your home or business.
Rubbish Removal: The process of removing unwanted items and trash from a home or business. Rubbish removal is typically handled by a professional waste management service that provides customers with specific guidelines as to how to collect rubbish and place it curbside for pickup. In some cases, rubbish removal might require special handling, such as in the case of hazardous waste. It might also involve taking some items to a recycling center to remake them into new products.
Reformation Day and Martin Luther
The Protestant Reformation was the 16th-century religious, political, intellectual and cultural upheaval that splintered Catholic Europe, setting in place the structures and beliefs that would define the continent in the modern era. The reformers argued for a religious and political redistribution of power into the hands of Bible- and pamphlet-reading pastors and princes.

Reformation Day is a public holiday in five states in Germany on October 31 each year to remember the religious Reformation in Europe. It commemorates the act in which Martin Luther is said to have nailed his 95 Theses to the door of the Wittenberg Church in 1517, an event that marked the start of religious and social changes in Europe. Reliable evidence unambiguously confirming this event is not known; the available data suggests that October 31 was the day Luther sent his work to Albert of Brandenburg, the Archbishop of Mainz. The holiday has been celebrated for centuries, though the exact dates have varied. In 1717, October 31 was set as the official day to commemorate the event, but churches often observe Reformation Sunday on the Sunday prior to October 31.

Reformation Day commemorates the efforts that the theologian Martin Luther (1483-1546) made toward religious and social change. A revolutionary of his time, Luther served as a priest and professor of theology, and spent his early years in relative anonymity as a monk and scholar. He wrote his 95 Theses as an expression of his concern about corruption within the Roman Catholic Church, and although these ideas had been advanced before, Luther codified them at a moment in history ripe for religious reformation. He also predicted a great falling away, or apostasy, from Christianity before the end of the world; this was not to be a falling away into atheism or agnosticism but a corruption of the true Gospel by the Man of Sin sitting in the Temple of God.
Caspian tigers were slightly smaller than Bengal and Amur tigers, with the longer fur characteristic of Amur tigers. Caspian tigers inhabited the dense riparian thickets associated with large rivers in Turkestan, Afghanistan, Iran, Azerbaijan and Turkey. Tigers were extirpated from this region about 40 years ago, through prey depletion and conversion of tiger habitat for agricultural production.
Until recently, the Caspian tiger was considered extinct. However, recent genetic analyses suggest that genetic differences are insufficient to separate Caspian and Amur tigers into separate subspecies. Studies that evaluated the prospects of reintroducing Amur tigers to the former range of Caspian tigers have identified several sites with suitable habitat but insufficient prey. Efforts to restore prey populations to densities capable of supporting tigers are currently underway.
The last confirmed sighting of a Bali tiger occurred in 1937. The Bali tiger was the first of three tiger subspecies classified as extinct by the International Union for Conservation of Nature, and was likely driven to extinction through a combination of trophy hunting, habitat loss and prey depletion. Historically found on the island of Bali, the Bali tiger was the smallest of the nine tiger subspecies, weighing approximately half as much as Amur tigers. Bali tiger fur was short and dark, with occasional spots between the stripes. Unfortunately, no Bali tigers remain in zoos and very few museum specimens are known to exist worldwide.
The Javan tiger was endemic to the Indonesian island of Java (found there and nowhere else), and was the most recent of the tiger subspecies to go extinct, with recorded observations of tigers occurring as recently as 1976. Javan tigers were intermediate in size between Bali and mainland tiger subspecies. This subspecies was driven to extinction by a combination of factors, including a rapidly growing human population, poisoning of tigers and their prey, and the loss of the tigers' primary prey, the rusa deer, to disease. Like the Bali tiger, no Javan tigers remain in zoos and museum specimens are extremely limited.
To learn how you can help the world’s remaining tigers, visit the Tiger Conservation Campaign website. |
Otitis Externa (Swimmer's Ear)
External otitis, commonly known as swimmer’s ear, occurs when the protective coating of wax in the canal is altered, making it susceptible to infection. This often occurs with frequent water in the ear canal, which leaves the skin irritated. It can also happen following injury to the canal in the form of cotton swabs, fingernails, or a foreign body.
The infection is often caused by bacteria but may be caused by a fungus. There can also be a localized infection in the form of a small abscess of a hair follicle, called a furuncle.
The infection usually starts with itching and a feeling of fullness, which may progress to severe pain. Moving the ear canal by chewing or pulling on the ear may also cause pain. Fever is sometimes present.
The treatment for this condition consists of medicated ear drops and avoidance of water or other foreign bodies in the ear. If the condition is severe, a wick can be placed in the ear canal and kept moist with ear drops.
Prevention consists of avoiding cotton-tipped applicators, hair pins, fingernails, or other foreign objects in the ear canal. For swimmers, drops can be used after swimming to try to prevent this condition. Premixed drops may be purchased at the pharmacy or alcohol can be used to dry the canal. One may also mix one part alcohol to one part vinegar, as an alternative.
Despite treatment, otitis externa can get worse. If the patient has persistent severe pain, swelling or redness of the external ear or the area behind the ear, or drainage from the ear, please call us for further evaluation.
Workshops -- in which students examine one another's work -- lie at
the center of most ENWR classes. Workshops give students the chance
to improve both as readers and as writers. They also help students
to understand that writing is a process that involves revision
informed by reader feedback.
In an effective workshop, writers get helpful comments from their
peers and learn to think critically about their own work. Ultimately,
then, good workshops mean that students can revise without direct
input from the instructor.
Here are a few key guidelines:
* Workshop goals must match the course content. Especially at the beginning
of the semester, these goals should be well and narrowly defined. (For example,
a class that has been working on claims might identify paragraph-level claims
and decide whether they are contestable, supportable, and appropriate to the
genre and academic discipline in which the writer is working.) Defined goals
ensure that workshops reinforce other class activities and keep students from
offering purely local grammar corrections and vague assessments of whether the
paper "flows." Students should provide specific, written comments
on each other's work.
* Early on, you should set the workshop goals. As the semester progresses, the
class might generate them ahead of time, using the shared vocabulary of LRS.
* Editor's worksheets are a good way to keep students focused on the particular
goals of any workshop.
* Students work best as articulate readers. Ask them to report and evaluate
what they see on the page. This feedback is an invaluable resource for writers,
especially novices, who are often surprised to find that the argument in their
heads (or their outlines) never made it into their essays. Student editors should
comment on what's actually on the page instead of what the writer might have
written, where the argument could have gone.
* After the workshop, ask writers to review the comments on their own papers
and turn in some written account of what they learned and how they will proceed
in light of the feedback they received. (They might write this in class or as
a homework assignment.) This step reinforces the results from the workshop.
(adapted from work by Betsy Winakur Tontiplaphol)
* Students should always know in advance that their writing will be workshopped.
Writing should be submitted before class so that students have a chance to consider
and refine their reactions to the papers. Papers for workshopping should not
be submitted with any further explanation of the student's work; the text should
stand on its own.
* Before the first workshop, have a class discussion about what makes a
good or bad workshop. Students have often workshopped before and have strong opinions about them. Then, model a workshop. Hand out a piece of writing (written
by you, a former student, a published author—anyone who isn't
actually a student in the class) and ask students to comment as if they were speaking to the author in a workshop. Afterwards, ask students to critique the comments
that their classmates offered.
* Ideally, workshop groups should probably be between 2 and 4 people. A smaller
group means that everyone is sure to get involved but there isn't a huge reservoir
of ideas to draw on; a bigger group means that there can be lots of discussion
and debate, but quiet (or underprepared) people can hide.
* At least at the beginning of the semester, you should probably assign workshop
groups, so that you can separate chatty best friends, spread out class leaders,
or group together people with similar paper topics. It's also handy to change
groups around early in the semester so that students can all become familiar
with one another. Later in the term, you might want to keep students in the
same group for several workshop sessions, so that they can become familiar with
one another's writing.
(adapted from Betsy Winakur Tontiplaphol)
Typically, the workshop group works together to evaluate all relevant aspects
of a paper. Here are some possible variations.
* Have the whole class workshop the same document. This ensures that everyone learns the same lesson, but
can cause quiet students to retreat from the discussion.
* Exchange papers in such a way that students aren't reading the work of anyone
in their workshop group, so that, for example, Group A and Group B would exchange
papers. Then, Groups A and B could come together to report back about each essay.
* Ask each group to act as a specialized unit: a writing SWAT team. During a
problem statement workshop, for example, there might be a status quo group,
a destabilizing moment group, a consequences group, and a resolution group.
The students in each group will focus only on their designated element when
they read the class' papers, which will be analyzed within the group then passed
on to the next team.
* Designate specialties within each workshop group. During an argument workshop,
for example, there might be a claim member, a reasons member, an evidence member,
and an acknowledgment and response member. Then, each member could comment only
on her given specialty, but the group could reach a consensus on how well all
the parts fit together.
* Devote a workshop to expectations and predictions. For example, ask students
to write introductions but exchange only the status quo, destabilizing moment,
and consequences. Then, workshop groups can predict what they think the resolution
is and compare them with what the writer actually came up with. This helps writers
to understand the importance of reader expectations.
* Move around and get rid of the worksheet. When students get out of their chairs
and put their pens down, the energy in the classroom can really improve. For
instance, to workshop for acknowledgment and response, ask students to stand
in two lines, facing someone in the line opposite them. Students on one side
should all tell their claims to the person in the opposite line, who will answer
with an objection (acknowledgment), and then the claimer will have to come up
with a response. When they're through, students in the claim line should move
down one place, so that they can tell their claim to a new person, and hear a new objection.
* Vary the time limit and formality level. Some workshops might last for an
entire class, and others for only a minute or two. Some might involve complicated
worksheets to be filled out, and others might only require sketches or verbal feedback.
Parts of Argument
The Whole Paper: These workshops address,
in various combinations, argument, problem statements, and style. They are
most useful later in the semester, when
your students are familiar with the course's principles and are
ready to workshop complete
drafts. You can also adapt and excerpt these worksheets for
shorter, more focused workshops or workshops of partial drafts. |
The most common terms associated with gases are temperature and pressure. Temperature is a measurement of the average kinetic energy of the molecules in a system, usually measured with a thermometer and expressed in Kelvin (K). While working through exercises, if you encounter a problem that gives temperature in Celsius, convert it to Kelvin in the following manner:
Kelvin = Celsius + 273.15
A practice exercise might also use the term standard temperature. This is defined as 0 °C or 273 K. Notice the units for Kelvin are not expressed in degrees. In fact, the General Conference on Weights and Measures abolished the use of degrees when referring to Kelvin in 1967. If you think chemistry is tedious, just be glad you don't have to sit through that conference. Make sure you omit the degree symbol when using K.
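Since problems mix the two units constantly, it can help to script the conversion. This small Python sketch (an added illustration, not part of the original text) applies the formula above:

def celsius_to_kelvin(temp_c):
    # K = C + 273.15; the text rounds 273.15 to 273 for quick work.
    return temp_c + 273.15

print(celsius_to_kelvin(0))     # 273.15, standard temperature
print(celsius_to_kelvin(25.0))  # 298.15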
Pressure is defined as the force per unit area, usually measured with a barometer. There are three common units for pressure: atmospheres (atm), millimeters of mercury (mm Hg), and pascals (Pa). Standard pressure is defined as 1 atm or 760.0 mm Hg. A common practice in chemistry is to refer to conditions as STP, or standard temperature and pressure. When we encounter this, it means the reaction was run at 273 K and 1 atm.
The gaseous state is the most simple and least fixed phase of matter. It has no definite volume or shape. Because of this, gases are subject to pressure, volume, and temperature changes, all of which affect the overall properties of the gas. The particles in a gas move so rapidly and are so loosely arranged that they will fill any shape in which they are put. Thank goodness, because where would we be without balloon animals?
With its territory vast and its occupants few, a gas consists of mainly empty space. The gas particles, therefore, are pretty much on their own. This is the basis of the ideal gas model. The ideal gas is an approximation: a theoretical description of the gas state, not a description of any specific substance. There are no ideal gases in real life. Scientists constructed the ideal gas model to simplify the gas phase and make calculations more manageable.
An ideal gas is described by the following characteristics, known collectively as the kinetic molecular theory (KMT):
1. Contains tiny, discrete particles that have mass but virtually no volume
2. The particles are in constant, rapid, and random motion
3. No attractive forces exist between the particles
4. No attractive forces exist between the particles and their container
5. When the particles collide, energy is conserved
6. No energy is lost when a particle collides with the container
These characteristics are the foundation of the ideal gas and the gas laws that describe gas behavior, namely: Boyle's Law, Charles' Law, Avogadro's Law, the Combined Gas Law, the Ideal Gas Law, Dalton's Law of Partial Pressures, and finally Graham's Law.
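The text only names these laws, but a worked example helps show how one is used. The sketch below is an added illustration; the formula PV = nRT is not given in this text, so treat its use here as an assumption about the Ideal Gas Law's standard form. It finds the volume of one mole of an ideal gas at STP:

R = 0.08206  # ideal gas constant in L·atm/(mol·K)

def volume_ideal_gas(n_mol, temp_k, pressure_atm):
    # Solve PV = nRT for V (in liters), assuming ideal-gas behavior.
    return n_mol * R * temp_k / pressure_atm

# One mole at STP (273 K and 1 atm) occupies about 22.4 L.
print(round(volume_ideal_gas(1.0, 273.0, 1.0), 1))  # 22.4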
Rivers may be a significant source of the greenhouse gas nitrous oxide, scientists now find.
Their calculation suggests that across the globe the waterways contribute three times the amount of nitrous oxide to the atmosphere as had been estimated by the Intergovernmental Panel on Climate Change (IPCC), the United Nations scientific body charged with reviewing climate change research.
They found that the amount of nitrous oxide produced in streams is related to human activities that release nitrogen into the environment, such as fertilizer use and sewage discharges.
"Human activities, including fossil fuel combustion and intensive agriculture, have increased the availability of nitrogen in the environment," said Jake Beaulieu of the University of Notre Dame and the U.S. Environmental Protection Agency in Cincinnati, Ohio, and lead author of the paper published this week in the journal Proceedings of the National Academy of Sciences.
"Much of this nitrogen is transported into river and stream networks," Beaulieu said. There, in a process called denitrification, microbes convert the nitrogen into nitrous oxide (also called laughing gas) and an inert gas called dinitrogen.
The finding is important, the researchers say, because nitrous oxide is a potent greenhouse gas that contributes to climate change and destruction of the stratosphere's ozone layer, which protects us from the sun's harmful ultraviolet radiation. Compared with carbon dioxide, nitrous oxide is 300-fold more potent in terms of its warming potential, though carbon dioxide is a far more prevalent greenhouse gas. Scientists estimate nitrous oxide accounts for about 6 percent of human-induced climate change.
Beaulieu and colleagues measured nitrous oxide production rates from denitrification in 72 streams draining multiple land-use types across the United States. When summed across the globe, the results showed rivers and streams are the source of at least 10 percent of human-caused nitrous oxide emissions to the atmosphere.
"This new global emission estimate is startling," said Henry Gholz, a program director for the National Science Foundation's Division of Environmental Biology, which funded the research.
"Changes in agricultural and land-use practices that result in less nitrogen being delivered to streams would reduce nitrous oxide emissions from river networks," Beaulieu said.
Q: What is an anti-alias filter, and do I need one?
A: At its simplest an anti-alias filter removes unwanted high-frequency signals from the signals you want to measure. Let’s look at why you might need one.
When we see a signal on an analog oscilloscope, the signal goes through every voltage shown on the signal trace. That means we see a continuous signal. When we use an analog-to-digital converter (ADC) we have only a set of discrete values that give us voltages at equal time intervals. This transition from continuous signals to discrete samples can cause problems unless you understand the characteristics of signals you want to measure.
The graph below plots seven values obtained by an ADC at about 6 ksamples/sec. Can you determine with certainty the signal that provided these voltages?
They might have come from a sine wave such as the one below that shows a continuous 1-kHz sine wave and the seven ADC samples. You might say, “OK, that’s the signal because the points fit exactly.” But you’re in for a surprise.
The same seven ADC measurements also can arise from a 7-kHz signal with the same amplitude, as shown below. When you examine only the seven points shown earlier, can you say for certain which signal, 1-kHz or 7-kHz, they represent? You cannot. The seven values also could come from measurements of other signals, and not necessarily sine waves.
Engineers call this effect aliasing because it’s almost as if the 7-kHz signal slipped through in disguise and took the “identity” of the 1-kHz signal. You cannot unambiguously identify the original signal based only on the seven ADC measurements. So what can we do? A bit of background information and then a solution.
In the first half of the 20th century, Harry Nyquist (1889 – 1976) and Claude Shannon (1916 – 2001), both of Bell Laboratories, recognized the problem caused by sampling a signal. The sampling theorem that grew out of their work explains the need to sample a real-world signal at more than two times the highest-frequency component present in a signal of interest. Today engineers refer to this relationship as the Nyquist–Shannon sampling theorem or just the Nyquist criterion. So if you plan to measure a 1-kHz sine-wave signal with an ADC, you must sample at a rate greater than 2 ksamples/sec. And, you must remove higher-frequency signals that can "alias" to lower frequencies and distort measurements.
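You can reproduce the ambiguity from the earlier figures numerically. This short NumPy sketch (an added illustration, not from the original article) samples a 1-kHz and a 7-kHz sine wave at 6 ksamples/sec, a rate below the Nyquist rate for the 7-kHz tone, and shows the two sets of samples are indistinguishable:

import numpy as np

fs = 6000              # sample rate: 6 ksamples/sec
t = np.arange(8) / fs  # eight sample instants

samples_1khz = np.sin(2 * np.pi * 1000 * t)
samples_7khz = np.sin(2 * np.pi * 7000 * t)

# The 7-kHz tone aliases onto the 1-kHz tone: the sample sets match.
print(np.allclose(samples_1khz, samples_7khz))  # True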
Q: So I need a low-pass filter to remove unwanted signals above 1 kHz. But how do I describe it to a supplier?
A: The Nyquist theorem actually says we must sample at more than twice the signal's bandwidth. When you have a sine wave, frequency equals bandwidth. So for the sake of simplicity I'll use a 600-Hz sine-wave signal as an example. Assume we have a 12-bit ADC and the signal will span the entire 0- to 2.5-volt input range.
We cannot buy or make a "brick-wall" filter that passes a signal at 600 Hz and blocks all signals above 601 Hz. Low-pass analog filters have a characteristic slope, or roll-off, that shows how much signals get attenuated as frequency increases. Engineers usually specify a filter cut-off frequency (fc), the point at which signal attenuation reaches -3 decibels (dB). This information often appears on a Bode plot of attenuation in decibels (dB) vs. frequency (Hz).
This Bode plot uses a logarithmic scale for frequency. The attenuation in decibels (dB)–shown here as a “negative gain”–already represents the logarithm of an input-output signal ratio. Courtesy of the Wikimedia Commons.
To simplify the mathematics of filter design, I recommend the free FilterLab filter-design software from Microchip Technology. It lets people examine several filter types and their characteristics based on the characteristics they need. I have used the software to create filters and the following examples come from FilterLab. For more information, visit: http://www.microchip.com/pagehandler/en_us/devtools/filterlab-filter-design-software.html. FilterLab also can produce a schematic diagram of a filter with the characteristics you select.
To properly filter the 600-Hz test signal before it reaches an ADC we want no attenuation at 600 Hz, and a steep attenuation of signals above that frequency. The figure below provides a FilterLab plot of attenuation vs. frequency for an 8-pole Butterworth filter with a 700-Hz cut-off. (The red line represents phase shift in the filter and I’ll ignore it in this discussion. The word “pole” refers to something called the Z transform of a filter’s impulse response. For more information, visit: http://sound.stackexchange.com/questions/24637/what-does-poles-mean-in-relation-to-a-filter.)
The next diagram shows a steeper frequency cutoff with an 8-pole Chebychev low-pass filter, but at the expense of -3 dB attenuation “ripples” at frequencies in the 0-600-Hz passband. I find a Butterworth anti-alias filter works well in most cases and it has no “ripple” in the passband.
Here is the schematic diagram for the Butterworth filter described above:
Q: The Butterworth filter gets to -100 dB at around 3000 Hz, so should I set the ADC to sample at a rate substantially above 6000 samples/sec?
A: Before you determine a sample rate, look at the resolution of the ADC used to digitize the signal. A 12-bit ADC has a resolution of 1 part in 4096, which we express in decibels with the equation:

dynamic range (dB) = 20 × log10(2^n) ≈ 6.02 × n

Here n equals the ADC's number of bits. For the 12-bit ADC, we calculate a dynamic range of 72 dB. That means the ADC cannot measure signals attenuated below -72 dB. In other words, signals below -72 dB don't exceed the voltage for ±1/2 LSB. (If you want to include a small margin, consider an attenuation of -75 dB on the plot.)
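To verify the arithmetic, this small Python sketch (an added illustration) evaluates the dynamic-range equation for a few common ADC resolutions:

import math

def adc_dynamic_range_db(n_bits):
    # Ideal n-bit ADC dynamic range: 20*log10(2^n), about 6.02 dB per bit.
    return 20 * math.log10(2 ** n_bits)

for bits in (8, 12, 16):
    print(bits, round(adc_dynamic_range_db(bits), 1))
# prints: 8 48.2, 12 72.2, 16 96.3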
If you look at the Butterworth filter Bode plot again, you see the -72-dB point on the attenuation line occurs at about 2000 Hz. Thus the Nyquist sample rate must exceed 4000 samples/sec, so you can sample at a slower rate than you might expect at first glance. Always treat the Nyquist sample rate as a starting point. In practice I use a sample rate 5 to 10 times that specified by the Nyquist theorem.
Keep in mind that although our 8-pole Butterworth will attenuate signals at frequencies above 600 Hz, signals between that frequency and the frequency at the -72 dB point will still appear in your sampled data.
Q: My equipment doesn’t output a sine wave. Do the same filter assumptions and calculations apply?
A: They do, but your signal might include information at frequencies higher than you expect. You must take into account higher-frequency signal components and whether you need them or can do without them. A spectrum analyzer will give you a display of frequency components in a signal. A mathematical fast Fourier transform (FFT) of signals from your sensors also will show frequency components. Microsoft Excel can perform an FFT on your test data. Only you can determine which signals contribute useful information and which are just noise that could interfere with measurements.
A 1-kHz square wave provides a good example. We could not just sample at a rate of, say, 5 or 10 ksamples/sec, because a square wave includes many higher-frequency components. Instead, we look at the bandwidth of the square wave.
A perfect square wave–one with an instantaneous voltage change–comprises the sum of an infinite number of smaller and smaller portions of sine-wave harmonics. With enough terms in the sum, the equation shown below will eventually produce a perfect square-wave plot. Here the letter f represents the fundamental frequency and t represents time:

x(t) = (4/π) × [sin(2πft) + (1/3)sin(2π·3ft) + (1/5)sin(2π·5ft) + (1/7)sin(2π·7ft) + …]
An animation lets you see the effect of summing additional odd harmonics to create a square wave. Visit: http://en.wikibooks.org/wiki/Trigonometry/A_Square_Wave_in_Sines and scroll to the bottom of the page.
To capture all the sine-wave harmonics in a square wave would require an impossibly large bandwidth, which no ADC can provide. An Excel plot of a calculated square wave out to the 15th harmonic appears below. (Even with eight sine terms, we still get a "rough" square wave.)
Square-wave plot from an Excel spreadsheet includes frequency components out to the 15th harmonic.
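The same partial sum is easy to reproduce outside Excel. This NumPy sketch (an added illustration) sums the odd harmonics of the series above out to the 15th harmonic:

import numpy as np

def square_wave_partial_sum(f, t, max_harmonic=15):
    # Sum (4/pi) * sin(2*pi*k*f*t)/k over odd harmonics k = 1, 3, ..., max_harmonic.
    total = np.zeros_like(t)
    for k in range(1, max_harmonic + 1, 2):
        total += np.sin(2 * np.pi * k * f * t) / k
    return (4 / np.pi) * total

t = np.linspace(0, 2e-3, 1000)             # two periods of a 1-kHz wave
approx = square_wave_partial_sum(1000, t)  # a "rough" square wave, as in the plot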
Suppose an engineer specifies the need to capture 2.5-volt square-wave signals with a 1-kHz fundamental frequency out to the 15th-harmonic frequency. That means we must capture a component whose amplitude is roughly 1/15th of the fundamental's, at 15 times the fundamental frequency, or 15 kHz:

1. Can the given ADC–12 bits with a 2.5-volt input range–measure such a small signal? Following the series above, the amplitude of the 15th harmonic comes to roughly:

2.5 V × (1/15) ≈ 167 mV

The ADC can measure 611 μvolts/step (1 LSB), so the ADC will easily measure the 15th harmonic signal. In fact the ADC can measure amplitudes out to several thousand harmonics. (Thankfully, real-world square waves have finite rise and fall times and do not include that many harmonics. Also, cables, connectors, components, and circuits attenuate many of the higher-frequency harmonics as a matter of course.)
2. To remove higher-frequency harmonics before they reach the ADC you need an anti-alias filter. The filter must have a cutoff frequency such that it will not attenuate the 15th harmonic. The Microchip Technology FilterLab software calculates the response for an 8-pole Butterworth filter. I adjusted the cut-off frequency to get an attenuation of -0.1 dB at 15 kHz, and a -3 dB attenuation at 18 kHz. See the Bode plot below.
As noted earlier, a 12-bit ADC has a 72-dB dynamic range, which the filter reaches at about 54 kHz. So the sample rate must exceed 108 ksamples/sec. Given my guidelines, that means a real-world sample rate between 270 and 540 ksamples/sec. So although we started with a 1-kHz square wave, we must sample at a much greater rate. Again, a Chebychev low-pass filter offers a faster cutoff but the filter attenuates some of the signals below 15 kHz.
Most signals in process industries do not require such a high sample rate. I provide the square wave example to illustrate that what look like low-frequency signals can include high-frequency components you must take into account when you think about an anti-alias filter. |
The definition includes different elements because "chemical recycling" has evolved over many years. For example, when referring to raw materials, it is critical to clarify that chemical recycling can produce commercial end-products or raw materials as intermediates to manufacture a product. Furthermore, in their definition the Coalition wanted to explicitly highlight that products exclude those used as fuels or as a means to generate energy, in line with the WFD. An explanation of each element of the definition follows.
Polymeric waste is explained by the polymer definition under REACH: "polymer" means a substance consisting of molecules characterised by the sequence of one or more types of monomer units. Such molecules must be distributed over a range of molecular weights where differences in molecular weight are primarily attributed to differences in the number of monomer units. A polymer comprises the following:
(a) a simple weight majority of molecules containing at least three monomer units which are covalently bound to at least one other monomer unit or another reactant;
(b) less than a simple weight majority of molecules of the same molecular weight.
In the context of this definition, a “monomer unit” means the reacted form of a monomer substance in a polymer.
Hence “Polymeric Waste” is “waste” as defined in the WFD that is largely made up of polymers.
Reference to “converts” means that chemical recycling facilities receive polymeric waste sorted in the waste management processes, returning it from an end of life status to a circular recovery system. Chemical recycling changes the structure of the polymers into molecules of lower molecular weight.
Substance is defined under REACH as a chemical element and/or its compounds in their natural state or obtained by any manufacturing process, including any additive necessary to preserve its stability and any impurity deriving from the process used, but excluding any solvent which may be separated without affecting the stability of the substance or changing its composition.
Hence a Substance can be made by any manufacturing process, including chemical recycling. The Coalition refers to “substances” to also include mixtures.
Raw Materials are defined as materials or substances used in the primary production or manufacturing of goods. Raw materials are commodities that are bought and sold on commodities exchanges worldwide.
Hence raw materials are used in primary production or manufacturing and can be traded.
Products: in the definition, this refers to polymers as defined above and other materials or substances other than fuels. |
Learning is an integral part of being a kid. Parents, teachers, and caregivers can help children live up to their learning potential by utilizing all the resources available at LoveToKnow Child Education.
Start your child's academic career by choosing the right preschool. From there, you can help the child in your life achieve academic success by utilizing trustworthy information found on topics like:
- Grade-Appropriate Learning: Whether you need to introduce kindergarten math concepts or teach your preschooler about science, you can find articles to help you choose the right activities for your youngster.
- Subjects: You'll find more than just math and English topics to broaden your child's horizons. Everything from agriculture, civics, and even neuroscience is covered.
- Special Considerations: Get plenty of specialized ideas and tips to help you deal with anything from ADHD and reading to choosing appropriate lesson plans for gifted students.
- Activities: Supplement any subject with fun activity ideas for science fairs, tornado experiments, or games for reading. Plenty of printable worksheets are also available for you to download and use.
Outside the Classroom
More goes into child education than simple facts and figures. Influences outside the academic classroom will also affect how a child learns.
- Uniforms: Whether your child needs to wear a school uniform is a hotly debated topic. Consider the pros and cons before sending him off to school.
- Supplemental Options: From boarding school to overnight camps, there are plenty of enriching educational opportunities beyond the traditional classroom.
- Conflict Resolution: Get practical tips on dealing with bullying in elementary school.
Year Round Education Information
Get trustworthy information on education year round. Whether you need tips for the first day of school or you're just looking for some fun last day of school activities, you'll find the material you need to help your child succeed. |
Demonstrative pronouns are used to point out the persons or things for which they stand. English has just five of them and they are: this, that, these, those and such.
- This is the prize I got.
- That is her house.
- These are the girls who won the prizes.
- Those are the pictures to be framed.
- I may have offended you, but such was not my intention.
When these words are used as pronouns, they stand alone. That means they are not followed by a noun.
- This is my pen.
Note that the words this, that, these, those and such can also be used as demonstrative adjectives. Demonstrative adjectives are followed by the nouns or pronouns they qualify.
- This tree is taller than that tree. (Here the demonstrative adjective this modifies the noun tree.)
- Those houses look imposing. (Here the demonstrative adjective those modifies the noun houses.)
- Nobody likes such people.
Demonstrative pronouns only stand for certain nouns, and they are not immediately followed by the nouns.
- This is my daughter.
- These are the only apples left.
Indefinite pronouns are words like one, none, nobody, nothing, all, few, some, many, anybody, everybody etc. They do not refer to any person or thing in particular but are used in a general way.
- One should love one’s country.
- Nobody came to her rescue.
- None of these conditions is acceptable to us.
- Something is better than nothing.
- Few escaped unhurt.
- We haven’t received any reply yet.
Note: None means "not one"; it may be followed by a singular or plural verb.
Left: Jupiter's clouds and swirling wind patterns; right: corresponding computer simulations. Credit: NASA/JPL/University of Alberta/MPS
Computer simulations have offered an explanation as to the location and cause of Jupiter’s whirlwinds, including the reason why they rotate in the opposite direction to those found on Earth.
The study found that Jupiter’s whirlwinds are caused by gas flows that rise from deep under the planet’s surface.
Simulations have shown the winds occurring in wide bands north and south of Jupiter’s equator for the first time, in the area where the Great Red Spot is located.
Jupiter’s Great Red Spot is a massive anticyclone that measures up to twice Earth’s diameter and is about 350 years old.
The term ‘anticyclone’ refers to the fact that Earth’s storms rotate anticlockwise in the northern hemisphere and clockwise in the south, while Jupiter’s whirlwinds spin the other way.
“Our high-resolution computer simulation now shows that an interaction between the movements in the deep interior of the planet and an outer stable layer is crucial,” says Johannes Wicht from the Max Planck Institute for Solar System Research (MPS), which worked with the University of Alberta in Canada on the simulations.
Jupiter consists mostly of hydrogen and helium, a mixture that becomes metallic and electrically conductive deep within the planet due to the high pressure of the overlying layers.
Closer to the surface, these gases exist in their non-metallic state in a more stable, outer layer, where the whirlwinds occur.
Simulations depicted this stable layer for the first time, showing just the top 7,000 kilometres of the non-metallic region.
Further inside the core of Jupiter, in the electrically conductive region, immense heat causes gas to rise upwards, but the stable layers closer to the surface provide a sort of barrier.
“Only when the buoyancy of the gas package is strong enough, it can penetrate into this layer and spreads out horizontally. Under the influence of planetary rotation, the horizontal movement is swirled, just as is observed for hurricanes on Earth”, says Wicht.
When this gas then cools, it sinks again.
The various rotational and horizontal motions that occur as this process happens in the simulations correspond to actual observations of the planet’s surface.
On Earth, cyclones are formed as a result of air converging and rising, while on Jupiter these vortices form when the rising gas is pulled apart in the upper atmosphere.
This solution explains why the cyclones on Jupiter swirl the opposite way from those found on Earth.
“Simulating the conditions in Jupiter’s atmosphere is tricky since many properties of this region are not well known,” says MPS scientist Thomas Gastine.
The researchers are still unable to define exactly what causes Jupiter’s Great Red Spot using these simulations, however.
“We are just beginning to understand Jupiter’s weather phenomena”, Wicht says.
“In addition to its size and durability, the Red Spot has other special features such as its characteristic colour.
Additional processes seem to be involved here that we don’t yet comprehend.” |
The rocks pulled down under the continent begin to melt. When two blocks of rock or two plates are rubbing against each other, they stick a little or lock in place. The magnitude-7.0 earthquake struck the region the day before. 0.0 Magma rises through cracks or weaknesses in the Earth's crust. Earthquakes are caused by the sudden release of energy within some limited region of the rocks of the Earth. Get a Britannica Premium subscription and gain access to exclusive content. Earthquakes happen quickly and are short-term environmental changes. Over the centuries, earthquakes have been responsible for millions of deaths and an incalculable amount of damage to property. However, the ground rapture is the primary impact caused by the large earthquakes. Sometimes the molten rock rises to the surface, through the continent, forming a line of volcanoes. Your IP: 126.96.36.199 Earthquakes are caused by the sudden release of energy within some limited region of the rocks of the Earth. They were sliding past each other. when they are produced far out in the ocean, they are not all that tall. Help the community by sharing what you know. The major fault lines of the world are located at the fringes of the huge tectonic plates that make up Earth’s crust. • When the stress on the edge overcomes the friction, there is an earthquake that releases energy in waves that travel through the earth's crust and cause the shaking that we feel. Water, mud and gases are ejected from beneath the fissure. Depending on their intensity, earthquakes (specifically, the degree to which they cause the ground’s surface to shake) can topple buildings and bridges, rupture gas pipelines and other infrastructure, and trigger landslides, tsunamis, and volcanoes. Earthquake Effects We all know that the effects of an earthquake are terrible and devastating. An earthquake is caused by a sudden slip on a fault. Effects of Earthquakes. Richter’s scale was originally for measuring the magnitude of earthquakes from magnitudes 3 to 7, limiting its usefulness. It is a result of the passage of seismic waves through the ground, and ranges from quite gentle in small earthquakes to incredibly violent in large earthquakes. Earthquakes happen quickly and are short-term environmental changes. P waves propagate through the Earth with a speed of about 15,000 miles per hour and are the first waves to cause vibration of a building. These phenomena are primarily responsible for deaths and injuries. The magnitude-9.0 earthquake struck at 2:46 pm. The largest earthquake ever measured was a 9.5 on the scale a 10 has never been recorded. There are four principal types of elastic waves: two, primary and secondary waves, travel within Earth, whereas the other two, Rayleigh and Love waves, called surface waves, travel along its surface. Completing the CAPTCHA proves you are a human and gives you temporary access to the web property. Another way to prevent getting this page in the future is to use Privacy Pass. 2. September 4, 2016 By Janice VanCleave. This sudden release of energy causes the seismic waves that make the ground shake. Seismologist Charles F. Richter created an earthquake magnitude scale using the logarithm of the largest seismic wave’s amplitude to base 10. If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware. Today the moment magnitude scale, a closer measure of an earthquake’s total energy release, is preferred. Corrections? 
Causes: The chief cause of the earthquake shocks is the sudden slipping of rock formations along faults and fractures in […] Over the centuries they have been responsible for millions of deaths and an incalculable amount of damage to property. Of these, approximately 100 are of sufficient size to produce substantial damage if their centres are near areas of habitation. Answering questions also helps you learn! Seismic waves are produced when some form of energy stored in Earth’s crust is suddenly released, usually when masses of rock straining against one another suddenly fracture and “slip.” Earthquakes occur most often along geologic faults, narrow zones where rock masses move in relation to one another. An earthquake happens when the rock underground breaks and that sudden burst of energy causes seismic waves. Earthquake in many cases, can cause great loss of life. Ground shaking is the most familiar effect of earthquakes. Earthquakes bring about massive destruction such as ground rapture as well as landslides. Richter scale. An earthquake is the result of a sudden release of stored energy in the Earth's crust that creates seismic waves. Of all these the release of elastic strain is the most important cause, because this form of energy is the only kind that can be stored in sufficient quantity in the Earth to produce major disturbances. Little was understood about earthquakes until the emergence of seismology at the beginning of the 20th century. Both earthquakes and tectonic plate movement can affect organisms. The severity of these effects relies on the complex combination of the quake magnitude, geomorphological conditions as well as the distance from epicenter which can reduce or maximize the wave propagation. What causes a volcano to erupt, how they formed and different types of volcano revealed. what question can she ask to help guide her design?? While landslides are considered naturally occurring disasters, human-induced changes in the environment have recently caused their upsurge. Causes and Effects of Earthquakes. (The early estimate of magnitude 8.9 was later revised upward.) Aside from these, electrical lines and gas pipes may break during the cause of the earthquake, causing massive fires when sparks occur. Seismology, which involves the scientific study of all aspects of earthquakes, has yielded answers to such long-standing questions as why and how earthquakes occur. USGS Earthquake Hazards Program - The Great 1906 San Francisco Earthquake, earthquake - Children's Encyclopedia (Ages 8-11), earthquake - Student Encyclopedia (Ages 11 and up), earthquake-damaged neighbourhood of Port-au-Prince, Haiti. Professor Emeritus of Earth and Planetary Science, University of California, Berkeley. Performance & security by Cloudflare, Please complete the security check to access. Earthquake causes damage to the building, bridges, dams. Although the causes of landslides are wide ranging, they have 2 aspects in common; they are driven by forces of gravity and result from failure of soil and rock materials that constitute the hill slope: Earthquake can also cause floods and landslides. Faults extend from a few centimetres to many hundreds of kilometres. Tectonic Movements: The disturbances inside the earth are called tectonic movements. Magnitude is a measure of the amplitude (height) of the seismic waves an earthquake’s source produces as recorded by seismographs. Author of. Be on the lookout for your Britannica newsletter to get trusted stories delivered right to your inbox. 
An earthquake, also referred to as a tremor or quake, is the shaking of the Earth’s surface that results from a sudden release of energy in the lithosphere, creating seismic waves that pass through the Earth and cause the shaking we feel.

Causes of earthquakes

Earthquakes are caused chiefly by sudden tectonic movements in the Earth’s crust. The tectonic plates are rigid parts of the crust that slide slowly over the planet’s interior, but they get stuck at their edges due to friction. When the stress on an edge overcomes the friction, there is an earthquake that releases energy in waves travelling through the crust. The largest fault surfaces on Earth are formed at the boundaries between moving plates; where one plate rides over another, mountains are built (orogeny) and subduction creates deep ocean trenches, such as the one along the west coast of South America. Earthquakes can also be caused by other factors, natural (such as volcanic activity) or artificial. Scientists cannot predict an earthquake before it happens, but they do know where earthquakes are likely to happen in the future, such as close to fault lines.

Seismic activity is concentrated mainly in belts coinciding with the margins of tectonic plates. The most important is the Circum-Pacific Belt, which affects many populated coastal regions around the Pacific Ocean—for example, those of New Zealand, New Guinea, Japan, the Aleutian Islands, Alaska, and the western coasts of North and South America. An estimated 80 percent of the energy presently released in earthquakes comes from quakes whose epicentres lie in this belt, though the activity is by no means uniform along it. There are also striking connected belts of seismic activity along oceanic ridges—including those in the Arctic Ocean, the Atlantic Ocean, and the western Indian Ocean—and along the rift valleys of East Africa. About 50,000 earthquakes large enough to be noticed without the aid of instruments occur annually over the entire Earth, and very great earthquakes occur on average about once per year.

Effects of earthquakes

The main effects of earthquakes include ground shaking, surface faulting, ground failure, landslides and, less commonly, tsunamis. Severe earthquakes cause great loss of life and property: buildings ranging from huts to multi-storey structures, including schools and hospitals, collapse; pipelines and railway lines are damaged or displaced; and fires can break out, as in San Francisco in 1906. In the 27 March 1964 Alaskan earthquake, for example, strong ground shaking lasted for as much as 7 minutes. Earthquakes can also change the course of a river, and can even cause it to flow in the opposite direction for a short time, as happened to the Mississippi River in the late 1800s, and they can produce mud fountains and cracks in the Earth’s crust. A related effect is a seiche, the sloshing of water back and forth in an enclosed body of water. Tsunamis, among the most dangerous effects of large earthquakes, can reach up to 100 feet in height and travel at 500 to 700 mph. Over long timescales, the same tectonic forces form physical features like mountains, plateaus and rift valleys.

The magnitude of an earthquake is measured on the scale devised by Charles F. Richter in 1935, with the number indicating magnitude ranging between 0 and 9.
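The Richter scale mentioned above is logarithmic. As a brief sketch using the standard textbook relations (not spelled out in this article), the magnitude and the radiated energy scale as:

```latex
% Richter (local) magnitude: the base-10 logarithm of the ratio of the
% measured shaking amplitude A to a reference amplitude A_0
M = \log_{10}\!\left(\frac{A}{A_0}\right)

% Radiated seismic energy grows roughly as
E \propto 10^{1.5 M}
```

Each whole-number step in magnitude therefore corresponds to a tenfold increase in measured amplitude and roughly a 32-fold increase in released energy.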
For anyone responsible for a child, it is important to know what bullying is and to recognize its signs. This problem affects thousands of young people in the world; but unfortunately, many parents do not realize that their children are subjected to this hostile behavior.
My experience as a therapist has taught me that the role of parents in bullying situations is essential. That is why I invite you to join me in this article and to reflect, in the name of God, on this delicate subject.
What is bullying?
Bullying is persistent and abusive behavior from a bully that can cause physical or emotional harm to a child or adolescent. It may be group or individual aggression that exposes the victim to sustained public humiliation and ridicule.
The Bible tells us «Do not hurry yourself (…) to become offended» (Ecclesiastes 7:9); but the victims of bullying are generally vulnerable to mistreatment and often suffer serious psychological consequences that remain after the physical consequences have disappeared.
Characteristics of bullying
Although bullying is a complex phenomenon, it has a few outstanding characteristics:
- It is intentional. The aggressor has every intention of assaulting their victim.
- It is repeated. The acts of harassment are sustained; they are performed over and over again, repetitively and consistently.
- It is carried out among peers. It is usually carried out by young people of the same age as the victim: schoolmates, teammates, neighbors, and even siblings or cousins.
- It has different manifestations. Abuse can be verbal (insults, threats, name-calling); sexual (touching the victim or making fun of his or her sexual orientation); physical (hitting, pushing); social (isolation, spreading rumors); cyberbullying (harassment by telephone or social networks).
- It seeks defenseless victims. The aggressor or aggressors always choose physically or emotionally weaker victims to attack. Children with a physical disability, with self-esteem problems, or belonging to minority groups are the most bullied.
- The absence of empathy prevails. The bully does not have the ability to «put him/herself in the shoes» of the victim. There is a complete disconnection from the victim’s suffering.
- According to a study conducted by Spanish researchers, many adults perceive bullying as a transitory act; however, it has been proven that the dominance-submission relationship it generates lasts over time, as do its consequences in the adult life of the bullied.
How do you know if a child is suffering from bullying?
Changes in a bullied child’s behavior may not be noticeable if you are not paying attention. An attentive parent, however, will notice when the child:
- Has decreased school performance
In my conversations with parents, I often tell them that this is one of the most obvious signs that something is wrong. When a child begins to drop grades and performance in school, there is an urgent need to investigate what is happening.
- Presents anxiety problems
Anxiety usually manifests itself in situations such as lack of sleep, nightmares, irritability and loss of appetite. There are other more physical signs such as generalized malaise, breathing problems and exhaustion. It can also manifest itself in quick mood swings.
- Does not want to go to school
School is the child’s and adolescent’s personal space. It is the place to meet with classmates and share common activities and interests. Therefore, if a child is recurrently absent from school or expresses fear of attending, something negative is happening there.
- Is always on the defensive
They may be alert and defensive all the time. They may also feel guilty about anything in their environment, or assume that they are being blamed, even if it is not true.
- Suffers extreme emotional reactions
They may burst into tears, or have episodes of anger or panic, in seemingly trivial situations. I suggest you dig deeper and look for the real cause of these apparently exaggerated outbursts.
- Is afraid of being alone
A victim of sustained harassment may be afraid of being alone; going out alone to the supermarket, going for a bike ride or playing in the park may be situations in which he or she is likely to encounter the aggressors. The fear may be such that he or she does not want to be alone even at home.
- Behaves aggressively at home
Irritability and aggressive behavior are consequences of the harassment suffered at school. The child is overwhelmed and may explode in an environment where he/she feels it is safe to let out all the accumulated anger and resentment.
- Isolates or locks themselves in their room
The victim of bullying may show a tendency to isolate themselves, not only from their family, but also from their friends. It is normal for a teenager to spend time in their room and jealously guard their space, but excessive isolation and apathy can be a sign of bullying.
What can parents do in these cases?
Specialists agree that bullying is a serious problem that must be tackled from all angles. In some cases, when the bullying situation disappears, children return to their normal life; in others, they require psychological and/or psychiatric support. Regarding home care, some actions that I can recommend for you are:
- Establish a two-way communication channel with your child.
- Show them that they can trust you.
- Investigate the situation, the parties involved and the seriousness of the problem.
- Contact teachers and school officials.
- Consult a therapist to discuss assertive actions to confront the bully.
- If your child is very affected, get professional help. This support is highly recommended in any case.
Something very important is to keep faith in God and not encourage your child to take revenge. It is essential to be positive, «Do not repay evil with evil or insult with insult» (1 Peter 3:9), acting firmly, but without hatred.
Regarding therapeutic care, do you have doubts about how to manage it? Call 407 618 0212 and I will be happy to answer your questions. |
Neutron Beams are Irreplaceable Tools for Materials Research
Scientists and engineers rely on a full suite of tools to unlock the secrets of materials, using various probes such as visible light, lasers, ultrasound, microwaves, x-rays, and electrons. Each tool reveals certain properties of materials, generating knowledge that points the direction for better understanding and improvements. Rarely does a single tool provide the complete picture.
The tools for materials research range in scale from simple microscopes and lasers to regional or national facilities that are shared as major scientific infrastructure.
Three such shared national facilities are the Canadian Light Source, TRIUMF and the Canadian Neutron Beam Centre, which provide x-rays, muons, and neutrons, respectively, to materials researchers across Canada.
The knowledge generated using neutrons is complementary to, and cannot be replaced by, these other tools.
- Materials engineers need the penetrating power of neutron beams to examine the stresses deep inside critical industrial components that x-rays cannot penetrate.
- Biophysicists need the gentle probing of neutron beams to unravel the structures within biological membranes under life-like conditions.
- Chemists need the sensitivity of neutrons to detect hydrogen and understand chemical reactions to develop technologies such as fuel cells and hydrogen storage materials.
- Physicists need the electrically-neutral, yet magnetic, property of neutrons to better understand superconductivity and the magnetic structures of materials.
The world science community recognized the great importance of neutron beams when it awarded Bertram Brockhouse and Clifford Shull the Nobel Prize in Physics for pioneering their use. Over the past 20 years, about $20 billion has been re-invested in the global network of about 20 major neutron beam facilities, demonstrating worldwide recognition of their value.
What makes neutrons so useful for probing materials?
Neutrons are right-sized for atomic structures
Scientists and engineers use neutron beams to explore the structure and dynamics of materials down to atomic length scales. Neutrons can be used to examine most materials, including metals, alloys, ceramics, composites, polymers, nano-structures, bio-materials, drugs, foods, liquids, colloids and gels.
Thermal neutrons’ wavelengths match typical interatomic separations in solids. So thermal neutrons are able to probe the atomic arrangements in these systems. Thermal neutrons’ energies match the energies of many of the excitations existing in solids. So thermal neutrons are able to measure the energies of these atomic motions.
Cold neutrons have larger wavelengths that match larger molecules such as proteins and lipids that our bodies are made of. So cold neutrons are especially useful for life sciences and health research.
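As a rough illustration of this "right-sizing" (a sketch using the standard de Broglie relation and textbook constants, not figures taken from this article), a neutron in thermal equilibrium at room temperature has a wavelength comparable to interatomic spacings in solids:

```latex
% de Broglie wavelength of a neutron of mass m_n and kinetic energy E
\lambda = \frac{h}{p} = \frac{h}{\sqrt{2\, m_n E}}

% For E \approx k_B T \approx 25\,\text{meV} at room temperature:
\lambda \approx 1.8\,\text{\AA}
```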
Neutrons are penetrating but non-destructive
Because they are penetrating, yet gentle, it is easy to obtain the bulk properties of large samples, or to measure stress deep inside parts of cars and planes to ensure they will perform safely in operation. This also makes it easy to study materials in realistic conditions, because complex chambers, such as cryostats, furnaces and pressure cells, can be used to control the environment of the material being studied during the experiment.
Neutrons are non-destructive and highly penetrating because they are uncharged, and the typical energies of thermal neutrons are in the range of 3 – 300 millielectron volts (meV). Their energies are roughly a million times less than those of x-rays with the same wavelengths, in the range of 0.5 – 5 angstroms (Å). Although these neutron energies are very low, neutrons penetrate easily through many centimetres of most materials, so that truly representative sampling of bulk materials is possible, completely non-destructively.
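The "million times less" comparison can be checked with a back-of-envelope calculation (a sketch from standard relations, not taken from this article): at the same wavelength λ, a massive neutron and a massless x-ray photon carry very different energies.

```latex
% Neutron (massive particle):  E_n = \frac{h^2}{2\, m_n \lambda^2}
% X-ray photon (massless):     E_\gamma = \frac{h c}{\lambda}

% At \lambda = 1.8\,\text{\AA}:
E_n \approx 25\,\text{meV}, \qquad
E_\gamma \approx \frac{12.4\,\text{keV}\cdot\text{\AA}}{1.8\,\text{\AA}}
          \approx 6.9\,\text{keV}
```

That is a ratio of a few hundred thousand, rising towards a million at the long-wavelength end of the 0.5 – 5 Å range quoted above.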
Neutrons are magnetic, yet neutral
Although neutrons have no charge, they have a magnetic moment that interacts with the magnetic electrons in atoms. The interaction is simple and well known, making neutron beams the absolute reference technique for determining the structure and dynamics of magnetic materials. This is especially useful for studying materials for computer memory or quantum materials such as superconductors.
Neutrons are sensitive to isotopes
Unlike x-rays, neutrons can easily distinguish between neighboring elements of the periodic table, and neutrons make it as easy to see light atoms (e.g. hydrogen, lithium) as heavy atoms (e.g. manganese, uranium). Neutrons can be used to unravel the complex structures of biological materials and polymers in which hydrogen is a major constituent.
That’s because neutrons interact directly with the nucleus of the atom, and thus determine the centre of mass of an atom free of electronic influences. The strength of the interaction varies from one nucleus to another but is similar in magnitude. Most notable is the huge difference between light hydrogen and heavy hydrogen (i.e. deuterium), enabling ‘contrast matching’ by substituting deuterium for hydrogen, thereby making measurements in biology and polymer chemistry much more sensitive.
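The strength of the hydrogen/deuterium effect can be made concrete with the standard coherent scattering lengths (textbook values, not quoted in this article): b is negative for hydrogen and positive for deuterium, so mixing H₂O and D₂O lets experimenters dial the average scattering length density of the solvent to match, and thus "hide", a chosen part of the sample.

```latex
% Bound coherent scattering lengths (approximate textbook values):
b_{\mathrm{H}} \approx -3.74\,\text{fm}, \qquad
b_{\mathrm{D}} \approx +6.67\,\text{fm}

% Scattering length density of a molecule occupying volume V_m:
\rho = \frac{\sum_i b_i}{V_m}
% e.g. \rho_{\mathrm{H_2O}} \approx -0.56 \times 10^{-6}\,\text{\AA}^{-2},
%      \rho_{\mathrm{D_2O}} \approx +6.36 \times 10^{-6}\,\text{\AA}^{-2}
```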
Neutrons interact with matter simply
Neutrons’ weak interactions with matter greatly ease the interpretation of neutron scattering data. The cross-section contains terms dependent on the neutron interaction, which are known, and terms dependent on the properties of the system under investigation. The experimental results thus give direct information about the microscopic properties of the system of interest.
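In the standard formalism (a hedged sketch; the article itself does not spell out the formula), this factorization is explicit: for coherent scattering from a sample of N atoms of one type, the measured partial differential cross-section separates into a known interaction strength and a function describing only the sample.

```latex
% Partial differential cross-section for scattering into solid angle
% d\Omega with final energy E_f:
\frac{d^{2}\sigma}{d\Omega\, dE_f}
  = \frac{k_f}{k_i}\; b^{2}\, N\, S(\mathbf{Q}, \omega)

% b            : nuclear scattering length (known neutron--nucleus term)
% k_i, k_f     : incident and final neutron wavevectors
% S(Q, \omega) : dynamic structure factor, a property of the sample only
```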
Due to these properties, neutron beams will always be an indispensable tool and cannot be replaced by other techniques. |
Social Justice and Diversity
The school promotes a sense of social justice through a variety of in-school and co-curricular activities.
We look at the support of social justice through three lenses as students grow: (1) raising awareness about differences in the youngest grades, (2) teaching students about bias and inequality, and (3) taking action. Students from preschool to 12th grade learn various ways to take steps to break the cycles of oppression.
In the youngest grades, children are encouraged to talk about differences between themselves, their families and their friends. Children learn on a daily basis to respect and understand differences.
As they get older, students begin to learn about oppression. They are taught how to read critically about historical events such as Columbus’ endeavors by looking at the language used to describe the indigenous people. Even young students can take actions that foster discussion and sow seeds of understanding about inequality.
Students come to understand these problems when they raise money, take a field trip to the mall, or buy clothes and toys for less fortunate families in our area. Upper school students participate in protests and study the speeches and writings of all kinds of advocates for the oppressed.
For the past five years, the school has had an upper school LGBTQ Club, and two years ago we added a middle school LGBTQ Club. These clubs have conducted consciousness-raising workshops on topics such as misgendering, sent representatives each year to the GLSEN (Gay Lesbian Straight Education Network) Student Leadership Conference and organized “Queer Con,” a regional conference for GSAs and LGBTQ and allied youth. This student-created event included workshops, panels, speakers and other opportunities for students to build community and take action.
Question: "What was the significance of the new moon in Bible times?"
Answer: The significance of the new moon in Bible times is that it marked the beginning of a new month (the Hebrew calendar is lunar-based), and it was a time when the Israelites were to bring an offering to God. The beginning of the month was known not by astronomical calculations but by the testimony of messengers appointed to watch for the first visible appearance of the new moon. As soon as the first sliver was seen, the fact was announced throughout the whole country by signal fires on the mountaintops and the blowing of trumpets. The Hebrew word for “month” (hodesh) literally means “new moon.”
In Numbers 28:11, the New Moon offering is commanded for the first time: “On the first of every month, present to the Lord a burnt offering of two young bulls, one ram and seven male lambs a year old, all without defect.” Each of the animal sacrifices was to be accompanied by a grain offering and a drink offering (verses 12–14). In addition to burnt offerings, a goat was to be sacrificed to the Lord as a sin offering (verse 15). The New Moon festival marked the consecration to God of each new month in the year. New Moon festivals were marked by sacrifices, the blowing of trumpets over the sacrifices (Numbers 10:10), the suspension of all labor and trade (Nehemiah 10:31), and social or family feasts (1 Samuel 20:5).
As with any religious ritual, there was a danger of observing the New Moon festivals without a true heart to follow God. Later in their history, the Israelites continued to observe the New Moon festivals outwardly, even after their hearts had turned cold toward God. They readily parted with their bulls and lambs and goats, but they would not give up their sins. They relied on the outward observations to cleanse them, even though there was still evil in their hearts. God had severe words for such hypocrisy: “Stop bringing meaningless offerings! Your incense is detestable to me. New Moons, Sabbaths and convocations—I cannot bear your worthless assemblies. Your New Moon feasts and your appointed festivals I hate with all my being. They have become a burden to me; I am weary of bearing them” (Isaiah 1:13–14). Sin is hateful to God, and no amount of ritual or ceremony or sacrament can make up for a sinful heart. “Behold, you delight in truth in the inward being” (Psalm 51:6, ESV; see also Hosea 6:6).
Observance of New Moon festivals and their sacrifices is no longer required. When the perfect Sacrifice, the spotless Lamb of God, appeared, He rendered the observation of these ordinances no longer necessary. All the righteous requirements of the Law were fulfilled by Him (Matthew 5:17), and His work on the cross means that no longer are sacrifices for sin required. Paul reminds us of this fact: “Do not let anyone judge you by what you eat or drink, or with regard to a religious festival, a New Moon celebration or a Sabbath day. These are a shadow of the things that were to come; the reality, however, is found in Christ” (Colossians 2:16–17). |
The Eastern milk snake, also known as Lampropeltis triangulum triangulum, is a beneficial, non-venomous snake and one of 25 subspecies of milk snakes. Eastern milk snakes, like all milk snakes, are a king snake species. Their common name derives from an old myth that the snakes drink cow's milk. This subspecies is often mistaken for other venomous snakes and killed, though experts stress the Eastern milk snake is beneficial to humans because it kills smaller venomous snakes and a variety of rodents.
An Eastern milk snake has smooth scales and a thin body that grows 2-4 feet (0.6-1.2 m) long, with a base color ranging from tan to gray and red to brown blotches along the back and sides that grow in size from the sides to the back. The belly has a white background with black squares. Eastern milk snakes are sometimes confused with venomous Northern copperhead snakes, which have dark bands rather than blotches crossing their backs from one side to the other. The massasauga rattlesnake, like the Eastern milk snake, also has a tan to gray background with blotches, but the blotches typically range from brown to black. The massasauga also has a rattle, elliptical pupils and three additional rows of blotches on its sides.
Fields, woods, marshes, riverbeds and rocky spots give the Eastern milk snake its natural habitat. It also can be found in rural areas and cities. The reptile is distributed in the northeast United States, from Maine to Iowa and down to Appalachia. Milk snakes prefer hiding under rocks or debris on the ground, and they prefer to come out at night.
This snake consumes a variety of small rodents, venomous snakes and reptiles. The term "milk snake" might have originated from the fact that farmers would find milk snakes in their barns, where the serpents were searching for rodents to eat. A barn's low temperature and lack of light also make it an attractive habitat for the Eastern milk snake. The serpent is a constrictor, killing its prey by slowly coiling around it to prevent breathing, then eating the prey whole after it dies.
Eastern milk snakes reproduce by laying two to 17 eggs, although this number changes depending on the region. During the summer months, the snake looks for areas to lay eggs, typically under rocks, logs or other debris. The eggs sit for about eight weeks before hatching a 5.5- to 11-inch (about 14- to 28-cm) hatchling. The average life span for an Eastern milk snake is about 20 years. |
Category of being
In ontology, the different kinds or ways of being are called categories of being or simply categories. To investigate the categories of being is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction.
The categories of being, or simply "the categories", are defined as the highest classes under which all elements of being, whether material or conceptual, can be classified. These categories belong to the realm of philosophy and the difference between categories and classes was described by the philosopher C.I. Lewis as that of a hierarchical tree or pyramid where the most general categories such as those of logic are to be found at the top and the least general classes such as species of animal at the bottom. There are therefore two main areas of interest (i) at the top of the tree - how being first divides into discrete or overlapping subentities, and (ii) at the bottom of the tree - how the different elements can be correlated into higher classes. The structure may consist of a simple list such as the one produced by Aristotle or it may be composed of headings and subheadings such as the tables produced by Immanuel Kant. The elements of being are commonly seen as "things", whether objects or concepts, but most systems will also include as elements the relations between the objects and concepts. The distinction is also made between the elements themselves and the words used to denote such elements. The word "category" itself is derived from the Greek κατηγορία (katigoría), meaning to predicate, and therefore the categories may also be thought of as kinds of predicate which may be applied to any particular subject or element, and by extension to the concept of being itself.
If we take any subject and with it form a sentence "the subject is…" then in a valid system of categorisation all the different things we can say about the subject should be classifiable under one of the categories within the system. Aristotle listed ten categories amongst which we find, for example, the three categories of Substance, Quality and Quantity. In Heidegger’s example "This is a house. It is both red and tall" the word "house" can be classified under Substance, "red" under Quality and "tall" under Quantity. The subject, the house, gathers around it what was called in the 19th century a "colligation of concepts" or in the 20th century a "bundle of properties" all of which serve to define the house. By extension we can say that all being consists of nothing but Substance, Quality, Quantity and the rest because nothing else can be said of the subject. Categorisation has raised many problems throughout the history of philosophy, including those of the number and types of category, how the categories interrelate with one another and whether they are real in some way or just mental constructs, and to introduce the many different solutions that have arisen it is worth considering the history of the categories in brief outline.
The process of abstraction required to discover the number and names of the categories has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle’s ideas, firstly, for example by Gilbert of Poitiers, dividing Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not:
- Primary categories: Substance, Relation, Quantity and Quality
- Secondary categories: Place, Time, Situation, Condition, Action, Passion
Secondly, following Porphyry’s likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses, for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the second century who by a process of abstraction reduced Aristotle’s list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation, and if relations only exist in the mind as many supposed, to the two highest categories, Mind and Matter, reflected most clearly in the dualism of René Descartes.
An alternative conclusion, however, began to be formulated in the eighteenth century by Immanuel Kant, who realised that we can say nothing about Substance except through the relation of the subject to other things. In the sentence "This is a house" the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant’s tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth-century development of the categories, most notably in G.W.F. Hegel’s extensive tabulation of categories, and in C.S. Peirce’s categories set out in his work on the logic of relations. One of Peirce’s contributions was to call the three primary categories Firstness, Secondness and Thirdness, which both emphasises their general nature and avoids the confusion of having the same name for both the category itself and for a concept within that category.
In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community" was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant’s dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will" were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled.
Twentieth century development
In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein’s conclusion was that there were no clear definitions which we can give to words and categories but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek" where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century" where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime" where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality). Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical representing the furthest we can go in terms of analysis and abstraction and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit.
The following were the common or dominant ways to view categories at the end of the 20th century:
- via bundle theory as bundles of properties—categories reflect differences in these
- via peer-to-peer comparisons or dialectics—categories are formed by conflict/debate
- via value theory as leading to specific ends—categories are formed by choosing ends
- via conceptual metaphors as arising from characteristics of human cognition itself—categories are found via cognitive science and other study of that biological system
Any of these ways can be criticized:
- for seeking to make distinctions that aren't as universal as claimed (greedy reductionism)
- for serious bias in point of view (see also God's eye view)
- for relying on theological or spiritual claims a priori, for relying too much on surface conflict or current investigative priorities to point out differences
- for ignoring action
- for ignoring the perceived or biospheric context, or the cognitive mechanisms that perceive and invent categories
- or for relying on a complex empirical process of investigation that is poorly understood and only recently embarked upon.
In process philosophy, this last is the only possibility, but historically philosophers have been loath to conclude that nothing exists but process.
Categorization of existence
As bundles of properties
Bundle theory is an ontological theory about objecthood proposed by the 18th century Scottish philosopher David Hume, which states that objects only subsist as a collection (bundle) of properties, relations or tropes. In an epistemological sense, bundle theory says that all that can be known about objects are the properties which they are composed of, and that these properties are all that can be truly said to exist.
For example, if we take the concept of a black square, bundle theory would suggest that all that can be said to exist are the properties of a black square.
The properties of a black square are: Black, Regular, and Quadrilateral.
However, from these properties alone, we cannot deduce any kind of underlying essence of a "black square", or some object called a "black square", except as a bundle of properties which constitute the object that we then go on to label as a "black square", but the object itself is really nothing more than a system of relations (or bundle) of properties. To defend this, Hume asks us to imagine an object without properties, if we strip the black square of its properties (being black, regular and quadrilateral) we end up reducing the object to non-existence.
Intuition as evasion
A seemingly simpler way to view categories is as arising only from intuition. Philosophers argue this evades the issue. What it means to take the category physical object seriously as a category of being is to assert that the concept of physical objecthood cannot be reduced to or explicated in any other terms—not, for example, in terms of bundles of properties but only in terms of other items in that category.
In this way, many ontological controversies can be understood as controversies about exactly which categories should be seen as fundamental, irreducible, or primitive. To refer to intuition as the source of distinctions and thus categories doesn't resolve this.
Ideology, dogma, and theory
Modern theories give weight to intuition, perceptually observed properties, comparisons of categories among persons, and the direction of investigation towards known specified ends, to determine what humanity in its present state of being needs to consider irreducible. They seek to explain why certain beliefs about categories would appear in political science as ideology, in religion as dogma, or in science as theory.
A set of ontological distinctions related by a single conceptual metaphor was called an ontological metaphor by George Lakoff and Mark Johnson, who claimed that such metaphors arising from experience were more basic than any properties or symbol-based comparisons. Their cognitive science of mathematics was a study of the embodiment of basic symbols and properties including those studied in the philosophy of mathematics, via embodied philosophy, using cognitive science. This theory comes after several thousand years of inquiry into patterns and cognitive bias of humanity.
Categories of being
Philosophers have many differing views on what the fundamental categories of being are. In no particular order, here are at least some items that have been regarded as categories of being by someone or other:
Physical objects are beings; certainly they are said to be in the simple sense that they exist all around us. So a house is a being, a person's body is a being, a tree is a being. Physical objects are also called particulars, or concrete things, or matter, or maybe substances (but bear in mind the word 'substance' has some special philosophical meanings).
Minds—those "parts" of us that think and perceive—are considered beings by some philosophers. Each of us, according to common sense anyway, "has" a mind. Of course, philosophers rarely just assume that minds occupy a different category of beings from physical objects. Some, like René Descartes, have thought that this is so (this view is known as dualism, and functionalism also considers the mind as distinct from the body), while others have thought that concepts of the mental can be reduced to physical concepts (this is the view of physicalism or materialism). Still others maintain that though "mind" is a noun, it is not necessarily the "name of a thing" distinct within the whole person. In this view the relationship between mental properties and physical properties is one of supervenience – similar to how "banks" supervene upon certain buildings.
We can talk about all human beings, and the planets, and all engines as belonging to classes. Within the class of human beings are all of the human beings, or the extension of the term 'human being'. In the class of planets would be Mercury, Venus, the Earth, and all the other planets that there might be in the universe. Classes, in addition to each of their members, are often taken to be beings. Surely we can say that in some sense, the class of planets is, or has being. Classes are usually taken to be abstract objects, like sets; 'class' is often regarded as equivalent, or nearly equivalent, in meaning to 'set'. Denying that classes and sets exist is the contemporary meaning of nominalism.
The redness of a red apple, or more to the point, the redness all red things share, is a property. One could also call it an attribute of the apple. Very roughly put, a property is just a quality that describes an object. This will not do as a definition of the word 'property' because, like 'attribute', 'quality' is a near-synonym of 'property'. But these synonyms can at least help us to get a fix on the concept we are talking about. Whenever one talks about the size, color, weight, composition, and so forth, of an object, one is talking about the properties of that object. Some—though this is a point of severe contention in the problem of universals—believe that properties are beings; the redness of all apples is something that is. To deny that universals exist is the scholastic variant of nominalism.
Note that the color red is an objective property of an object. The intrinsic property is that it reflects radiation (including light) in a certain way. A human perceives that as the color red in his or her brain. An object thus has two types of properties, intrinsic (physical) and objective (observer specific).
An apple sitting on a table is in a relation to the table it sits on. So we can say that there is a relation between the apple and the table: namely, the relation of sitting-on. So, some say, we can say that that relation has being. For another example, the Washington Monument is taller than the White House. Being-taller-than is a relation between the two structures. We can say that that relation has being as well. This, too, is a point of contention in the problem of universals.
Space and time
Space and time are what physical objects are extended into. There is debate as to whether time exists only in the present or whether far away times are just as real as far away spaces, and there is debate as to whether space is curved. Many contemporary thinkers suggest that time is the fourth dimension, thus reducing space and time to one distinct ontological entity, the space-time continuum.
Propositions are units of meaning. They should not be confused with declarative sentences, which are just sets of words in languages that refer to propositions. Declarative sentences, ontologically speaking, are thus ideas, a property of substances (minds), rather than a distinct ontological category. For instance, the English declarative sentence "snow is white" refers to the same proposition as the equivalent French declarative sentence "la neige est blanche"; two sentences, one proposition. Similarly, one declarative sentence can refer to many propositions; for instance, "I am hungry" changes meaning (i.e. refers to different propositions) depending on the person uttering it.
Events are that which can be said to occur. To illustrate, consider the claim "John went to a ballgame"; if true, then we must ontologically account for every entity in the sentence. "John" refers to a substance. But what does "went to a ballgame" refer to? It seems wrong to say that "went to a ballgame" is a property that instantiates John, because "went to a ballgame" does not seem to be the same ontological kind of thing as, for instance, redness. Thus, events arguably deserve their own ontological category.
Properties, relations, and classes are supposed to be abstract, rather than concrete. Many philosophers say that properties and relations have an abstract existence, and that physical objects have a concrete existence. That, perhaps, is the paradigm case of a difference in ways in which items can be said to be, or to have being.
Many philosophers have attempted to reduce the number of distinct ontological categories. For instance, David Hume famously regarded Space and Time as nothing more than psychological facts about human beings, which would effectively reduce Space and Time to ideas, which are properties of humans (substances). Nominalists and realists argue over the existence of properties and relations. Finally, events and propositions have been argued to be reducible to sets (classes) of substances and other such categories.
One of Aristotle’s early interests lay in the classification of the natural world, how for example the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is…" Aristotle stated in his work on the Categories that there were ten kinds of predicate, which are listed below.
He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation. For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic, Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism and these were terms which were to be developed through the Middle Ages and were to reappear in Kant’s system of categories.
- Substance, essence (ousia) – examples of primary substance: this man, this horse; secondary substance (species, genera): man, horse
- Quantity (poson, how much), discrete or continuous – examples: two cubits long, number, space, (length of) time.
- Quality (poion, of what kind or description) – examples: white, black, grammatical, hot, sweet, curved, straight.
- Relation (pros ti, toward something) – examples: double, half, large, master, knowledge.
- Place (pou, where) – examples: in a marketplace, in the Lyceum
- Time (pote, when) – examples: yesterday, last year
- Position, posture, attitude (keisthai, to lie) – examples: sitting, lying, standing
- State, condition (echein, to have or be) – examples: shod, armed
- Action (poiein, to make or do) – examples: to lance, to heat, to cool (something)
- Affection, passion (paschein, to suffer or undergo) – examples: to be lanced, to be heated, to be cooled
Plotinus in writing his Enneads around AD 250 recorded that "philosophy at a very early age investigated the number and character of the existents… some found ten, others less…. to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides, which comprised three coupled terms.
Plotinus called these "the hearth of reality" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process":
- First, there existed the "One", concerning which he wrote that "the origin of things is a contemplation"
- The Second "is certainly an activity… a secondary phase… life streaming from life… energy running through the universe"
- The Third is some kind of Intelligence concerning which he wrote "Activity is prior to Intellection… and self knowledge"
Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying "Therefore Unity, having from all eternity arrived by motion at duality, came to rest in trinity".
In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.
Table of Judgements
- Quantity: Universal, Particular, Singular
- Quality: Affirmative, Negative, Infinite
- Relation: Categorical, Hypothetical, Disjunctive
- Modality: Problematic, Assertoric, Apodictic
Table of Categories
- Quantity: Unity, Plurality, Totality
- Quality: Reality, Negation, Limitation
- Relation: Inherence and Subsistence, Causality and Dependence, Community
- Modality: Possibility, Existence, Necessity
Criticism of Kant’s system came, firstly, from Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners", and secondly, from W.T. Stace, who in his book The Philosophy of Hegel suggested that in order to make Kant’s structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of Notion.
G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed "the first principle of the world, the Absolute, is a system of categories… the categories must be the reason of which the world is a consequent". Using his own logical method of combination, later to be called the Hegelian dialectic, of arguing from thesis through antithesis to synthesis, he arrived, as shown in W.T. Stace's work cited, at a hierarchy of some 270 categories. The three very highest categories were Logic, Nature and Spirit. The three highest categories of Logic, however, he called Being, Essence and Notion which he explained as follows:
- Being was differentiated from Nothing by containing with it the concept of the "Other", an initial internal division that can be compared with Kant’s category of Disjunction. Stace called the category of Being the sphere of common sense containing concepts such as consciousness, sensation, quantity, quality and measure.
- Essence. The "Other" separates itself from the "One" by a kind of motion, reflected in Hegel’s first synthesis of "Becoming". For Stace this category represented the sphere of science containing within it firstly, the thing, its form and properties; secondly, cause, effect and reciprocity, and thirdly, the principles of classification, identity and difference.
- Notion. Having passed over into the "Other" there is an almost Neoplatonic return into a higher unity that in embracing the "One" and the "Other" enables them to be considered together through their inherent qualities. This according to Stace is the sphere of philosophy proper where we find not only the three types of logical proposition: Disjunctive, Hypothetical and Categorical but also the three transcendental concepts of Beauty, Goodness and Truth.
Schopenhauer’s category that corresponded with Notion was that of Idea, which in his "Four-Fold Root of Sufficient Reason" he complemented with the category of the Will. The title of his major work was "The World as Will and Idea". The two other complementary categories, reflecting one of Hegel’s initial divisions, were those of Being and Becoming. Interestingly, at around the same time, Goethe was developing his colour theories in the Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle.
Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce’s case the notion that in the first instance he could only be aware of his own ideas. "It seems that the true categories of consciousness are first, feeling… second, a sense of resistance… and third, synthetic consciousness, or thought". Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions":
- Firstness (Quality): "The first is predominant in feeling… we must think of a quality without parts, e.g. the colour of magenta… When I say it is a quality I do not mean that it "inheres" in a subject… The whole content of consciousness is made up of qualities of feeling, as truly as the whole of space is made up of points, or the whole of time by instants".
- Secondness (Reaction): "This is present even in such a rudimentary fragment of experience as a simple feeling… an action and reaction between our soul and the stimulus… The idea of second is predominant in the ideas of causation and of statical force… the real is active; we acknowledge it by calling it the actual".
- Thirdness (Meaning): "Thirdness is essentially of a general nature… ideas in which thirdness predominate [include] the idea of a sign or representation… Every genuine triadic relation involves meaning… the idea of meaning is irreducible to those of quality and reaction… synthetical consciousness is the consciousness of a third or medium".
Although Peirce’s three categories correspond to the three concepts of relation given in Kant’s tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the USA, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge".
Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (ontologist) (2003), and Jonathan Lowe (2006).
- Lewis C.I. Mind and the World Order 1929, pp.233-234
- Aristotle Categories in Aristotle’s Categories and De Interpretatione (tr. Ackrill J.L. Clarendon Press, Oxford, 1963) Ch.4
- Kant I. Critique of Pure Reason 1781 (tr. Smith N.K., Macmillan, London, 1968)
- Heidegger M. What is a Thing 1935 (tr. Barton W. & Deutsch V. Henry Regnery, Chicago, 1967) pp.62,187
- Peirce C.S. Collected Papers of Charles Sanders Peirce (Hartshorne C. & Weiss P. (eds) Harvard University Press, 1931) Vol.2, p.267
- cf Locke J. Essay concerning Human Understanding (J.F.Dove, London, 1828) p.371 on “coexistence of qualities”
- Reese W.L. Dictionary of Philosophy and Religion (Harvester Press, 1980)
- Ibid. cf Evangelou C. Aristotle’s Categories and Porphyry (E.J. Brill, Leiden, 1988)
- Plotinus Enneads (tr. Mackenna S. & Page B.S., The Medici Society, London, 1930) VI.3.3
- Ibid. VI.3.21
- Descartes R. The Philosophical Works of Descartes (tr. Haldane E. & Ross G., Dover, New York, 1911) Vol.1
- Op.cit.3 p.87
- Ibid. pp.107,113
- Op.cit.5 pp.148-179
- Stace W.T. The Philosophy of Hegel (Macmillan & Co, London, 1924)
- Op.cit.5 pp.148-179
- Op.cit.3 p.116
- Hegel G.W.F. Logic (tr. Wallace W., Clarendon Press, Oxford, 1975) pp.124ff
- Schopenhauer A. On the Four-Fold Root of the Principle of Sufficient Reason 1813 (tr. Payne E., La Salle, Illinois, 1974)
- Jaspers K. Philosophy 1932 (tr. Ashton E.B., University of Chicago Press, 1970) pp.117ff
- Roget P.M. Roget’s Thesaurus: The Everyman Edition 1952 (Pan Books, London, 1972)
- Russell B. The Analysis of Mind (George Allen & Unwin, London, 1921) pp.10,23
- Ryle G. The Concept of Mind (Penguin, Harmondsworth, 1949) pp.17ff
- Wittgenstein L. Philosophical Investigations 1953 (tr. Anscombe G., Blackwell, Oxford, 1978) pp.14,181
- Ryle G. Collected Papers (Hutchinson, London, 1971) Vol.II: Philosophical Arguments 1945, pp.201,202
- Op.cit.1 pp.52,82,106
- Op.cit.9 VI.5.5
- Op.cit.5 Vol I pp.159,176
- Op.cit.4 pp.62,187
- Kant I. Critique of Judgement 1790 (tr. Meredith J.C., Clarendon Press, Oxford 1952) p.94ff
- Op.cit.25 pp.36,152
- Aristotle Metaphysics 1075a
- Long A. & Sedley D. The Hellenistic Philosophers (Cambridge University Press, 1987) p.206
- Peter of Spain (alias John XXI) Summulae Logicales
- Categories, translated by E. M. Edghill. For the Greek terms, see The Complete Works of Aristotle in Greek (DjVu), Book 1 (Organon), Categories Section 4. Archived from the original on 2013-11-02; retrieved 2010-02-21.
- Op.cit.9 VI.1.1
- Ibid. VI.2.17
- Plato Parmenides (tr. Jowett B., The Dialogues of Plato, Clarendon Press, Oxford, 1875) p.162
- Op.cit.9 Op.cit.1.4
- Ibid. III.8.5
- Rawlinson A.E. (ed.) Essays on the Trinity and the Incarnation (Longmans, London, 1928) pp.241-244
- Op.cit.3 p.87
- Ibid. pp.107,113
- Schopenhauer A. The World as Will and Representation (tr. Payne A., Dover Publications, London, New York, 1966) p.430
- Op.cit.15 p.222
- Ibid. pp.63,65
- Op.cit.18 pp.124ff
- Goethe J.W. von, The Theory of Colours (tr. Eastlake C.L., MIT Press, Cambridge, Mass., 1970) p.350
- Op.cit.5 p.200, cf Locke
- Ibid. p.179
- Ibid. pp.148-179
- Ibid. p.176
- Saussure F. de, Course in General Linguistics 1916 (tr. Harris R., Duckworth, London, 1983) p.124
- Aristotle, 1953. Metaphysics. Ross, W. D., trans. Oxford University Press.
- --------, 2004. Categories, Edghill, E. M., trans. Uni. of Adelaide library.
- John G. Bennett, 1956–1965. The Dramatic Universe. London, Hodder & Stoughton.
- Gustav Bergmann, 1992. New Foundations of Ontology. Madison: Uni. of Wisconsin Press.
- Browning, Douglas, 1990. Ontology and the Practical Arena. Pennsylvania State Uni.
- Butchvarov, Panayot, 1979. Being qua Being: A Theory of Identity, Existence, and Predication. Indiana Uni. Press.
- Roderick Chisholm, 1996. A Realistic Theory of Categories. Cambridge Uni. Press.
- Feibleman, James Kern, 1951. Ontology. The Johns Hopkins Press (reprinted 1968, Greenwood Press, Publishers, New York).
- Grossmann, Reinhardt, 1983. The Categorial Structure of the World. Indiana Uni. Press.
- Grossmann, Reinhardt, 1992. The Existence of the World: An Introduction to Ontology. Routledge.
- Haaparanta, Leila and Koskinen, Heikki J., 2012. Categories of Being: Essays on Metaphysics and Logic. New York: Oxford University Press.
- Hoffman, J., and Rosenkrantz, G. S.,1994. Substance among other Categories. Cambridge Uni. Press.
- Edmund Husserl, 1962. Ideas: General Introduction to Pure Phenomenology. Boyce Gibson, W. R., trans. Collier.
- ------, 2000. Logical Investigations, 2nd ed. Findlay, J. N., trans. Routledge.
- Johansson, Ingvar, 1989. Ontological Investigations. Routledge, 2nd ed. Ontos Verlag 2004.
- Kahn, Charles H., 2009. Essays on Being, Oxford University Press.
- Immanuel Kant, 1998. Critique of Pure Reason. Guyer, Paul, and Wood, A. W., trans. Cambridge Uni. Press.
- Charles Sanders Peirce, 1992, 1998. The Essential Peirce, vols. 1,2. Houser, Nathan et al., eds. Indiana Uni. Press.
- Gilbert Ryle, 1949. The Concept of Mind. Uni. of Chicago Press.
- Wilfrid Sellars, 1974, "Toward a Theory of the Categories" in Essays in Philosophy and Its History. Reidel.
- Barry Smith, 2003. "Ontology" in Blackwell Guide to the Philosophy of Computing and Information. Blackwell.
Wikisource has the text of the 1911 Encyclopædia Britannica article Category.
- Aristotle's Categories at MIT.
- Thomasson, Amie. "Categories". Stanford Encyclopedia of Philosophy.
- "Ontological Categories and How to Use Them" – Amie Thomasson.
- "Recent Advances in Metaphysics" – E. J. Lowe.
- Theory and History of Ontology – Raul Corazzon. |
Grade Twelve Students Create Videos to Showcase Problem-Solving Skills
By Sang Lee, Linden Teacher
According to K. S. Sawyer of Western University’s Centre for Education Research and Innovation, “the best learning takes place when learners articulate their unformed and still developing understanding, and continue to articulate it throughout the process of learning … While thinking out loud, they learn more rapidly and deeply than studying quietly.”
Grade 12 students were challenged to create videos to explain and articulate their thought process in solving Characteristics of Polynomials problems. In the videos, you can see their confidence in their own ability. Watch out, Khan Academy!
References: Sawyer, K. S. (2008). Optimizing learning: Implications of learning sciences research. Directorate for Education, Centre for Educational Research and Innovation (CERI) Governing Board. |
Zika virus disease is caused by the Zika virus, which is spread to people primarily through the bite of an infected mosquito (Aedes aegypti and Aedes albopictus).
The illness is usually mild with symptoms lasting up to a week, and many people do not have symptoms or will have only mild symptoms. However, Zika virus infection during pregnancy can cause a serious birth defect called microcephaly and other severe brain defects.
How Zika spreads
Zika can be transmitted
Through mosquito bites
From a pregnant woman to her fetus
Through sex with an infected partner
Through blood transfusion (very likely but not confirmed)
Many people infected with Zika virus won’t have symptoms or will only have mild symptoms. The most common symptoms of Zika are fever, rash, headache, joint pain, red eyes (conjunctivitis), and muscle pain.
Symptoms can last for several days to a week. People usually don’t get sick enough to go to the hospital, and they very rarely die of Zika. Once a person has been infected with Zika, they are likely to be protected from future infections.
Why Zika is risky for some people
Zika infection during pregnancy can cause a birth defect of the brain called microcephaly and other severe brain defects. It is also linked to other problems, such as miscarriage, stillbirth, and other birth defects. There have also been increased reports of Guillain-Barré syndrome, an uncommon sickness of the nervous system, in areas affected by Zika.
How to prevent Zika
There is no vaccine to prevent Zika. The best way to prevent diseases spread by mosquitoes is to protect yourself and your family from mosquito bites.
Wear long-sleeved shirts and long pants.
Treat your clothing and gear with permethrin or buy pre-treated items.
Use Environmental Protection Agency (EPA)-registered insect repellents with one of the following active ingredients: DEET, picaridin, IR3535, oil of lemon eucalyptus or para-menthane-diol, or 2-undecanone. Always follow the product label instructions.
When used as directed, these insect repellents are proven safe and effective even for pregnant and breastfeeding women.
Do not use insect repellents on babies younger than 2 months old.
Do not use products containing oil of lemon eucalyptus or para-menthane-diol on children younger than 3 years old.
Stay in places with air conditioning and window and door screens to keep mosquitoes outside.
Take steps to control mosquitoes inside and outside your home.
Mosquito netting can be used to cover babies younger than 2 months old in carriers, strollers, or cribs.
Sleep under a mosquito bed net if air conditioned or screened rooms are not available or if sleeping outdoors.
Prevent sexual transmission of Zika by using condoms or not having sex.
How Zika is diagnosed
Diagnosis of Zika is based on a person’s recent travel history, symptoms, and test results.
A blood or urine test can confirm a Zika infection.
Symptoms of Zika are similar to other illnesses spread through mosquito bites, like dengue and chikungunya.
Your doctor or other healthcare provider may order tests to look for several types of infections.
What to do if you have Zika
There is no specific medicine or vaccine for Zika virus. Treat the symptoms:
Get plenty of rest.
Drink fluids to prevent dehydration.
Take medicine such as acetaminophen to reduce fever and pain.
Do not take aspirin or other non-steroidal anti-inflammatory drugs (NSAIDs).
If you are taking medicine for another medical condition, talk to your healthcare provider before taking additional medication.
What is biodiversity?
In 1992, Canada ratified the United Nations Convention on Biological Diversity, as did many other countries. This convention is necessary because the present rate of loss of biological diversity, or “biodiversity,” is a serious global environmental threat.
Biodiversity means the variety of life on Earth. It is measured as the variety within species—or genetic diversity—the variety between species, and the variety of ecosystems.
Clockwise from left:
- Dew bumblebee on a teasel
- Gray treefrog
- Bearded seal, Wood Bay, N.W.T.
- Alpine, grassland, and shoreline ecosystems at Castle River, Alberta
Diversity is a characteristic of life everywhere on Earth, from the ocean floor to inside the human gut, and at every geographical scale, from the global to the microscopic. We see this diversity around us every day. We see genetic traits in people, in our pets, and in plants. We know, firsthand if we are lucky, the rich variety of mammal, bird, fish, and plant species in the world. And some of us have lived in strikingly diverse ecosystems—in coastal forests and on arctic tundra, in cities and on farms. However, all this is only a small part of the diversity of life.
Remarkably little is known about biological diversity. One reason is that most species—and many ecosystems—are a lot smaller than humans. Furthermore, each species, whether a tiny virus or a huge humpback whale, has its own genetic diversity, which, in an average species, involves millions of different pieces of genetic material. And to map the genetic code of even one species is an enormous undertaking.
About 1.6 million of the world’s species have been described. Estimates of the total number of species range from 12 to 118 million. The numbers for known and total estimated species are continually being revised. Viruses are almost entirely unknown. About one million of the known 1.6 million species are insects, and millions of insect species are still unclassified. Some 360 000 algae, fungi, and vascular plants, or those with conducting tissues, such as flowers, grasses and trees, have already been described, but botanists estimate that there are at least another million left to be classified. Animals like nematodes (e.g., worms) and crustaceans (e.g., shrimplike animals) are not well known.
Our knowledge of mammals and birds is much more complete, but species previously unknown to science are occasionally discovered. For example, a new primate, the black-faced lion tamarin, was found in 1990, and a new whale, the pygmy beaked whale, in 1991. And as scientists probe unexplored areas of the planet, they continue to find new species, all with unique gene pools, belonging, in some cases, to hitherto unknown ecosystems. Thanks to deep-sea vessels and cameras, bacteria and higher life forms (e.g., worms) have recently been discovered in ultra-hot water vents, kilometres below the ocean surface, where no life was thought to be possible.
You do not need to visit exotic places, however, to see strange, unknown organisms and unfamiliar ecosystems. A look through a microscope at ordinary soil reveals countless (mostly nameless!) microorganisms, in addition to the worms and insects that can be seen with the naked eye. Figure 1 shows a few of the underappreciated life forms found in the soil of a deciduous forest in eastern Canada. The oribatid soil mite shown in the drawing is one of the most common soil mites in Canada, but it has not yet been scientifically studied or given a scientific name. Soils are, in fact, complex ecosystems. For example, fungi on tree roots help trees absorb nutrients. And without the insects, fungi, earthworms, and bacteria that transform dead plants and animal carcasses into soil, piled up dead matter would quickly smother all but a few strong-growing trees and bushes!
Wildlife, defined as all wild species, makes up most of the species and genetic diversity of life. Wildlife includes more than mammals and birds living in wilderness areas. Each form of virus, soil organism, plankton, and insect, no matter where it lives, is a wild species, as are the species of parasites and microorganisms that live in such places as under human fingernails and on the feather shafts of wild birds. The remainder of species diversity, apart from the human species, is life forms that we have domesticated: e.g., species and cultivars of crops and garden plants and species and breeds of pets and livestock. However, despite their importance to people and their sometimes huge populations, domesticates account for only a tiny fraction of the millions of existing species and of the genetic diversity within species.
Some people refer to regional ecosystems that seem to have developed without human dominance as “natural” or “evolved” ecosystems: Canadian examples are the ancient temperate rain forest ecosystem on Vancouver Island and an ecosystem found on the untrawled ocean floor. Today, natural ecosystems and “wild lands and waters” (i.e., ecosystems in which humans and wild populations coexist) have become small arks holding a large proportion of the variety of the world’s species and genes, often in small populations.
Why is it important?
Why is it important that life remain diverse?
For its own sake
The Convention on Biological Diversity recognizes the intrinsic value of biodiversity. Each life form and ecosystem has its own intrinsic value, apart from any actual or potential usefulness to people. When a species goes extinct, it never returns.
To sustain life as we know it
We, like all species on Earth, are utterly dependent on the planetary environment. Species and ecosystems provide life-sustaining services, such as maintaining adequate oxygen in the atmosphere, removing carbon dioxide from air, filtering and purifying water, pollinating plants, breaking down waste, and transferring nutrients. Most ecosystems that evolved to provide these services can cope with some loss of diversity: for example, when a species becomes extinct, a new species may step in to take over its role in the ecosystem.
But there are limits to the protection that this flexibility provides. When species that cannot be replaced are lost, the whole assemblage of species may change, and rare species and their genes may disappear. Nor can this flexibility protect ecosystems against excessive modifications to harness their productivity strictly for human purposes. For example, the dustbowl on the prairies in the 1930s resulted in part from the ploughing up of native prairie grasses with massive soil-anchoring roots. Ploughing transformed the prairie grassland ecosystem. The plants and animals that were adapted to periodic drought were displaced. The new ecosystem, based on crops planted by people, provided social and economic benefits in wet years. In dry years, the soil simply blew away. Erosion of valuable topsoil still occurs and is a serious problem.
Furthermore, most Canadians develop a great aesthetic appreciation of nature as it exists and do not want to be deprived of it. Canadians of many backgrounds place spiritual value on animals, plants, and ecosystems. Canadians do not wish to leave a biologically impoverished Earth to their children and grandchildren.
"If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering." — A. Leopold. 1966. A Sand County Almanac, Oxford University Press. New York. p. 177.
Insurance for the future
Maintaining the full range of the Earth’s biodiversity means maintaining the flexibility to respond to unforeseen environmental conditions. For example, many of Canada’s native plant species must endure both hot summers and cold winters. These plants may, therefore, have genetic material that could be used to develop agricultural crops that can withstand greater than normal temperature ranges.
Because natural ecosystems have stood the test of time, we can use them as models of sustainability. As long as we conserve them, we can return to them to learn how to refine or reengineer the croplands, managed forests, and industrial fishing areas that we have created, or to find the genes, species, or microecosystems that were left out of the human-designed system because we were ignorant of their importance.
Long-term human health and prosperity
Preserving biodiversity will also maintain our potential as a country to be creative and productive and will provide opportunities for discovering and developing new foods, medicines, and industrial products. Because other species face some of the same biological problems as we do and share the same “genetic alphabet,” the biochemical evolution that has been occurring in their populations through millions of generations has produced substances of great usefulness to people. For example, doctors use hirudin, a substance discovered in the saliva of leeches, to dissolve dangerous blood clots. Canada’s 138 native tree species have at least 40 recorded pharmaceutical or medical uses, and they are currently used for rayon, cellophane, methyl hydrates, glue, and turpentine.
How is it changing?
This map was prepared by the State of the Environment Directorate, based on a system used by Environment Canada to classify Canada’s ecosystems. The country’s land and ocean ecosystems have been summarized at various levels. The largest category is ecozones; these are subdivided into ecoregions; ecoregions are made up of ecodistricts, and so on.
Species go extinct in the normal course of evolution. But the rate of extinctions in the world has greatly increased in recent centuries because of the activities of the huge and growing number of people. The human population now appropriates 20 to 40 percent of the solar energy captured by land plants, leaving less for all other species. E.O. Wilson, a world expert on biodiversity, has calculated that, due to the reduction in area of the tropical rain forests alone, making no allowance for overhunting or invasion by alien organisms, today’s rate of extinction of species is 1 000 to 10 000 times the rate suggested by the fossil record, which was about one species per million species a year.
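Wilson's multipliers are easy to turn into rough numbers. The back-of-envelope Python sketch below assumes, purely for illustration, a total of 10 million living species (a figure within the 12 to 118 million range quoted earlier); the background rate of one extinction per million species per year is the fossil-record estimate above.

    background_rate = 1e-6        # extinctions per species per year (fossil record)
    assumed_species = 10_000_000  # illustrative total, within the quoted range

    for multiplier in (1_000, 10_000):
        per_year = background_rate * multiplier * assumed_species
        print(f"At {multiplier:>6,}x background: ~{per_year:,.0f} extinctions per year")

On these assumptions, today's rates would mean roughly 10,000 to 100,000 extinctions a year.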
Many whales and dolphins are threatened. Roughly 116 of the world’s 200 species of apes and monkeys are threatened with extinction. And when larger animals go extinct, so, too, does a universe of microscopic organisms that lived on their bodies and waste materials.
The world’s grassland ecosystems have, for the most part, already been converted to crops or pasture. Temperate forests have been logged over and fragmented by roads, railways, and power corridors, which improve access for wild and domestic predators, people, and invasive species, resulting in disruption of ecological processes. More recently, species-rich tropical rain forest ecosystems have been greatly reduced in area. Wetlands continue to be drained for agriculture and urban expansion. Coral reefs may be in worse condition than either forests or wetlands. Wild lands and waters, and even stringently protected wilderness areas, are vulnerable to oil spills, acid rain, sedimentation, radioactive dust, long-lasting toxic chemicals, and invasive plants and nonnative animals.
The greatest threat to biodiversity in Canada is the extensive alteration by people of a number of ecological regions, largely because of competing land uses such as agriculture and urbanization. The map shows which areas of Canada people have altered the most. The prairies and southern Ontario have been greatly transformed. Only a few hectares of the tallgrass prairie remain intact, and southern Ontario’s Carolinian forest survives only in scattered woodlands. Old-growth forests exist only in patches in the three Maritime provinces, only small stands of old red and white pines remain in central Canada, and the unlogged temperate west coast rain forest keeps shrinking.
In settled parts of Canada, wetlands, which are among the habitats richest in species, have been reduced by as much as 90 percent, and drainage, at least on private lands, shows little sign of abatement. Despite legislation to curb acid precipitation, it is predicted that we will continue to lose the fish, shellfish, and amphibian communities of thousands of small lakes in eastern Canada. The Great Lakes ecosystem has been greatly altered by intensive fishing and successive invasions of species, some deliberately introduced to create recreational fisheries, combined with other stresses, such as pollution and alteration of habitat. In Atlantic coastal waters, there has been a considerable reduction of genetic diversity in populations of northern cod, as well as the depletion of stocks of most food fishes.
Most of the species native to these regions at risk still exist in Canada, but their populations have been greatly reduced or fragmented. In some cases, this has already reduced the genetic diversity within species, which gives species the best chance to adapt to future stresses through selection. For example, the 12 or 13 forms of lake trout in Lake Superior have been reduced to only two or three.
Because most Canadian species are widely distributed, we have lost relatively few known species compared with tropical regions. Since about 1750, Canada has lost the Great Auk, Passenger Pigeon, Labrador Duck, Dawson caribou, sea mink, Banff longnose dace, deepwater cisco, longjaw cisco, and blue walleye. We do not know exactly how many more wild species are in danger of a similar fate, because we have not studied all Canadian wildlife.
While these losses have been going on, Canada has also been gaining species. A number of wild species that now form an integral part of our fauna or flora were deliberately introduced, such as the European Starling and several ornamental plants. Others were accidentally imported and in some cases have proved exceedingly difficult and costly to control, such as wild oats, zebra mussels, and the fungus responsible for Dutch elm disease. It is prudent not to create conditions that will result in our own native species being displaced.
What can we do?
What can we do to protect biodiversity?
It is important to prevent further losses. The geological evidence suggests that biodiversity does not recover quickly. Fossils indicate five previous major periods of extinction, the most recent being the extinction of the dinosaurs. After each previous extinction event it appears to have taken millions of years for life to regain former levels of diversity.
A global effort
The United Nations Convention on Biological Diversity, which Canada played a leading role in developing, provides an opportunity for countries to work in partnership on this complex global issue. It supports those working for sustainable development in all signatory countries.
The convention addresses daunting global challenges: protection of wilderness, management of other areas for diversity, sustainable use of the components of biodiversity, and equity between rich and poor countries in sharing the costs and benefits of conserving the Earth’s biological wealth.
How can we conserve biodiversity in Canada?
We must continue to survey our country’s flora and fauna to learn what exists and what needs protection.
We can continue to set aside areas, such as parks and ecological preserves, where activities disruptive to ecosystems or harmful to wildlife are not allowed. The federal, provincial, and territorial governments are committed to protecting areas representative of Canada’s terrestrial and marine natural regions and have instituted policies to try to protect and restore critical wildlife habitats.
Canada has a program to identify and rehabilitate species known to be critically endangered. We should continue to regulate hunting, fishing, and logging and to control the use of toxic chemicals. Canadian laws require the planners of large projects, like dams, to assess the likely consequences of their proposed projects on the environment before the final decisions to proceed are made.
We need to determine the scale at which biodiversity should be conserved and carry out broad-scale landscape planning. We can’t have a moose in every backyard, or Bald Eagles nesting near every pond.
One of the most crucial and potentially rewarding tasks is the study of wild areas and species. One of the things we must learn is how to extract income from wild lands and waters without damaging them. In Canada, trapping fur-bearing animals, hunting, collecting seaweed, fishing, tapping maple trees for syrup, logging in ecologically sound ways, and gathering (e.g., wild foods, medicines, craft materials) can all generate incomes from wild lands and waters in a sustainable fashion.
All Canadians have a role to play in maintaining biological diversity at present levels. We do not have to stop fishing, farming, logging, and building cities in order to preserve biodiversity, but we do have to limit these activities or at least do them in ways that are compatible with native ecosystems. This often means reintroducing native species to increase biodiversity on farmland, in forest plantations, in rivers, and even in cities.
In 1995, to meet Canada’s obligations under the Convention on Biological Diversity, Canada published the Canadian Biodiversity Strategy, and in 1996, all provinces and territories and the federal government signed a National Statement of Commitment to conserve biodiversity and use biological resources in a sustainable manner. Each jurisdiction is responsible for its own biodiversity conservation plan. Information on current activities is available from:
Figure 1: A few of the small organisms in the soil ecosystem of a deciduous forest in eastern Canada
Drawing by Roelof Idema. Specimens and research courtesy of staff at the Canadian National Collection of Insects and Arachnids, Agriculture and Agri-Food Canada.
- Oribatid soil mite. The mite is magnified about 165 times (165x).
- Symbiotic union between the fungal hyphae of the fleecy milk cap and the roots of an oak tree (13x).
- Root aphid feeding on pineapple weed (22x).
- Oak forest ant tending eggs (7x).
- Nematodes (roundworms) feeding on roots (22x).
- Leaf decomposition by bacteria and fungal hyphae, with a protozoan ciliate feeding on the bacteria (1500x).
- Black widow spider tending an egg sac (2x).
- Predacious horsefly larva and pupa (1.5x)
Biodiversity: journal of life on Earth. Quarterly journal. Published by Tropical Conservancy, 94 Four Seasons Drive, Nepean, Ontario, K2E 7S1.
Bocking, S., editor. 2000. Biodiversity in Canada: ecology, ideas, and action. Broadview Press, Peterborough, Ontario.
Canadian biodiversity strategy: Canada’s response to the Convention on Biological Diversity. 1995. Available from the Biodiversity Convention Office, Environment Canada, Ottawa.
Mosquin, T., and P.G. Whiting. 1992. Canada country study of biodiversity: taxonomic and ecological census, economic benefits, conservation costs, and unmet needs. Canadian Centre for Biodiversity, Canadian Museum of Nature, Ottawa.
Tuxill, J. 1998. Losing strands in the web of life: vertebrate declines and the conservation of biological diversity. Worldwatch Paper 141. Worldwatch Institute, Washington, D.C.
Tuxill, J. 1999. Nature’s cornucopia: our stake in plant diversity. Worldwatch Paper 148, Worldwatch Institute, Washington, D.C.
Wilson, E.O. 1992. The diversity of life. The Belknap Press of Harvard University Press, Cambridge, Massachusetts.
© Her Majesty the Queen in Right of Canada, represented by the Minister of the Environment, 1995, 2002. All rights reserved.
Catalogue number CW69-4/92-2002E
Text: S.P. Burns and J.A. Keith
Revision: Susan Burns, 2001 |
Consider the following "is-a" or inheritance relationships between "concrete" entities Coca Cola, sulfuric acid, milk and the "abstract" liquid (named "ALiquid" by convention since it is an abstract entity):
The UML diagram shows us that Coca Cola, sulfuric acid and milk are all liquids. Conversely, the diagram tells us that a liquid could be either Coca Cola, sulfuric acid or milk. Note of course, that liquids are not constrained to these 3 entities but that doesn't affect the discussion here—in fact, this will be an important feature later on.
Another way that we can express the notions depicted by the diagram is to say that the abstract ALiquid superclass represents the union of Coca Cola, sulfuric acid and milk. That is,
a superclass represents the union of all of its subclasses.
or in other words
a superclass represents all that is abstractly equivalent about its subclasses.
For instance, the notion of an abstract liquid embodies all the behaviors and attributes such as having no definite shape, randomized atomic positions, freezing and boiling points that are common to Coca Cola, sulphuric acid and milk. Note the fine distinction between having a value and having the same value.
The above diagram illustrating the relationship between a superclass and its subclasses is called the Union Design Pattern. The union pattern shows us inheritance in how Coca Cola, sulfuric acid and milk all inherit the abstract behaviors of liquids, such as the lack of a definite shape and the possession of freezing/boiling points. Conversely, it also shows that if a situation utilizes a liquid, either Coca Cola, milk or sulphuric acid can be used, as they are all abstractly equivalent as liquids. Note that this does not imply that all three will act identically! For instance, the human throat can swallow any liquid because it is made to work with fluids that can flow. However, the reaction of the throat to sulphuric acid is markedly different from its reaction to milk! This ability to substitute any subclass for its superclass and get different behavior is called polymorphism.
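Here is a minimal sketch of the union pattern, written in Python for concreteness. The method name effect_on_throat and the Python rendering are illustrative assumptions, not part of the original UML diagram:

    from abc import ABC, abstractmethod

    class ALiquid(ABC):
        """Abstract superclass: the union of all of its concrete subclasses."""

        def has_definite_shape(self) -> bool:
            # Behavior shared by every liquid, inherited unchanged by subclasses.
            return False

        @abstractmethod
        def effect_on_throat(self) -> str:
            """Each concrete subclass supplies its own variant behavior."""

    class CocaCola(ALiquid):
        def effect_on_throat(self) -> str:
            return "sweet and fizzy"

    class Milk(ALiquid):
        def effect_on_throat(self) -> str:
            return "smooth and mild"

    class SulfuricAcid(ALiquid):
        def effect_on_throat(self) -> str:
            return "severe chemical burns"

    def swallow(liquid: ALiquid) -> None:
        # The throat is written against the abstract union, so any subclass
        # may be substituted; the observed result differs -- polymorphism.
        print(f"Swallowed; result: {liquid.effect_on_throat()}")

    for drink in (CocaCola(), Milk(), SulfuricAcid()):
        swallow(drink)

Because liquids are not constrained to these three subclasses, a new liquid (say, Water) can later be added to the union without changing swallow at all.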
Just as the basic unit of all non-living things is the ‘Atom’, the basic unit of all living organisms is the ‘Cell’.
P.S. The atom is, of course, far smaller than the cell.
Now, the standard definition of the Cell says that “cell is the basic structural and functional unit of living organisms”.
By the word “structural” we mean that cells give shape or form to organs and consequently to the body parts.
By “functional” we mean that a cell contains the whole machinery (organelles) for performing the various life processes required to sustain life. A cell processes nutrients to release energy and undergoes replication/division to give rise to more cells. In eukaryotes, there are specialised cells that perform specific functions.
A typical Eukaryotic animal cell contains membrane- bound organelles like nucleus, mitochondria, ribosomes, endoplasmic reticulum(ER), Golgi apparatus which have specific functions to carry on various life processes.
There can be variations in number or presence/absence of a few organelles in different Specialized Cells, depending on the job they do.
1. Mature RBCs get rid of nuclei and all other cell organelles like mitochondria, Golgi apparatus and ER to free up space for more haemoglobin, which is an oxygen carrier.
2. Mature neurons lack centrioles which are responsible for mitotic cell division.
3. Muscle cells and liver cells have more number of mitochondria.
Some more interesting facts about the Cell:
– Human body is composed of about 200 different types of specialized cells.
– All cells are capable of carrying out certain basic functions like nutrition, respiration, growth and replication which are essential for the survival of cells.
– Since the cell carries out so many functions, there is “division of labour” within the cell, which means that different organelles perform different specific functions. For example: making new material, such as protein synthesis, by the ribosomes; cellular metabolism/respiration to release energy by the mitochondria; DNA replication in the nucleus of eukaryotes; and clearing of waste substances from the cell by the lysosomes.
Let us know about the Cell size, Cell shape, Cell volume, Cell number and Cell structure:
– The cells are microscopic. In humans, cell size varies from 3–4 microns (leucocytes) to over 90 cm (nerve cells).
– The cell size is correlated with its function and not to the size of organism.
– In a large cell, the cytoplasm requires more proteins and consequently more RNA.
– The DNA content of the cell in a given organism remains constant.
– Small cells have more surface area per unit volume; as cell size increases, this ratio decreases (see the short calculation after this list).
– Small cells are metabolically more active because their greater relative surface area allows more exchange of materials with the outside environment.
– The shapes of the cells are also related to their functions.
– The shapes also depend on the surface tension and viscosity of protoplasm, mutual pressure of the adjoining cells and rigidity of the cell membrane.
– The Cell volume is almost constant for a particular cell type and is independent of the size of the organism.
– The total mass of an organism depends on the number of cells present in its body and not on the volume of the cells. So, the cells of an elephant are not larger than any other tiny animal.
– The number of cells is correlated with the size of the organism, so small organisms have fewer cells than large organisms.
– In some animals, such as nematodes and rotifers, the entire adult body consists of a fixed number of cells that remains the same in all members of the species. This phenomenon of cell or nuclear constancy is called “Eutely”.
– In human beings, the number of cells is around 100 trillion.
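Here is the short calculation promised in the list above: a minimal Python sketch of the surface-area-to-volume ratio for an idealized spherical cell (the radii chosen are arbitrary illustrative values):

    import math

    def surface_to_volume(radius_um: float) -> float:
        """SA:V ratio of a sphere; algebraically this reduces to 3 / radius."""
        surface = 4 * math.pi * radius_um ** 2
        volume = (4 / 3) * math.pi * radius_um ** 3
        return surface / volume

    for r in (1, 5, 25):  # radii in micrometres
        print(f"radius {r:>2} um -> SA:V = {surface_to_volume(r):.2f} per um")

The ratio falls as 3/r, which is why a smaller cell can exchange materials with its surroundings more readily than a larger one.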
All cells have three major functional regions:
1. Cell membrane or Plasma membrane (cell wall also, in case of plants only)
2. Cytoplasm, with its organelles
3. Nucleus
What is the physical significance of Plasma Membrane?
– Plasma membrane is the outer boundary of the cell. It is present in all types of cells- both eukaryotic and prokaryotic cells, cells of plants, animals and microorganisms.
– It physically separates the cytoplasm from the surrounding cellular environment.
– Most cellular organelles like mitochondria, lysosomes, Golgi apparatus, nucleus, endoplasmic reticulum, and chloroplast (in plant cells only) are themselves enclosed by membranes of similar construction.
Is plasma membrane living or dead?
The plasma membrane is a thin, elastic, living and selectively permeable membrane, ranging from 6 to 10 nm in thickness. Chemically, the membrane is about 75% phospholipids and also contains proteins, cholesterol and polysaccharides.
Let’s know the ultrastructure of plasma membrane to know more about the living nature of plasma membrane.
Various models, to know the structure of plasma membrane, have been proposed by different scientists.
“Fluid Mosaic Model” of plasma membrane is well accepted which we take up as follows:
– The Fluid Mosaic Model was propounded by Singer and Nicolson.
– According to this, the plasma membrane is made up of a bilayer of phospholipids.
– Two types of proteins- Intrinsic or Integral and Extrinsic or Peripheral float about in the fluid phospholipid bilayer.
– Intrinsic proteins penetrate lipid bilayer partially or wholly.
– Extrinsic proteins are present either on the outer or inner surface of the lipid bilayer.
– The Lipids and Intrinsic proteins are amphipathic in nature, i.e. these molecules have both hydrophobic (non-polar) and hydrophilic (polar) groups.
– The proteins are present to serve as (a)enzymes (b)transport proteins or permeases (c)pumps (d)receptor proteins
– Lipids and proteins provide flexibility to the plasma membrane which helps in processes like Endocytosis.
– Plasma membrane is selectively permeable, i.e. it permits the entry and exit of only some materials into and out of the cell.
– Substances allowed inside the cell include food, water, salts, oxygen, vitamins and hormones.
– Substances thrown out of the cell include nitrogenous waste and carbon dioxide, secretions like proteins, proenzymes, hormones, milk, tear, mucus, immunoglobulins (antibodies) etc.
– Since plasma membrane regulates the transport of various substances in and out of the cell, so it is living in nature. This is done to maintain the concentration of various substances and ions inside the cell.
Transport can be Passive or Active.
Passive Transport
– Here, the particles or molecules move from a region of higher concentration to one of lower concentration through the plasma membrane by DIFFUSION. This is also called “downhill transport”. The movement occurs only due to the concentration gradient, without consuming energy. Hydrophobic substances are readily transported by this method because they are soluble in lipids.
– Sometimes a carrier molecule called a “Carrier Protein or Permease” assists the transport (without the use of energy); this is called “Facilitated transport”. It is helpful in the transport of hydrophilic nutrients like glucose and amino acids.
– When water molecules pass through the plasma membrane along the concentration gradient without the use of energy, the process is called OSMOSIS.
Active Transport
– This is the movement of molecules or ions against the concentration gradient (“uphill” movement), using energy (ATP) to work against the gradient.
– The most important active transport in all animals is the sodium-potassium transport between cells and the surrounding extra cellular fluid. This transport is called “Sodium pump”.
– The animal cell requires a high concentration of potassium ions inside the cells for protein synthesis by ribosomes and for certain enzymatic functions.
– The desirable potassium ion concentration is 20 to 50 times greater inside the cell than outside, and the sodium ion concentration may be 10 times greater outside the cell than inside.
How Sodium- Potassium ionic gradients are maintained or how the sodium pump works?
– There is a higher concentration of sodium ions outside the plasma membrane of the animal cell. The sodium ions are transported outside with the help of a carrier molecule: a carrier–transport complex is formed which utilises ATP and transports sodium ions out of the cell. Simultaneously, potassium ions are transported into the cell in a similar way.
– This unbalanced charge transfer leads to a separation of charges across the plasma membrane. This charge difference underlies the action potential produced by nerve cells.
How are the macromolecules, solid food particles, etc transported inside the cell?
These are transported by the mechanism called ENDOCYTOSIS.
Depending on the nature of the substance, Endocytosis can be of the following types:
1. Pinocytosis (cell drinking)
– Pinocytosis is the process of “ingestion of fluid droplets & small solute particles” by the cell. The substances like protein, amino acids, which cannot enter by simple osmosis are ingested by pinocytosis.
– Here, the plasma membrane invaginates to form small vesicles or “Pinosomes”. The vesicles pinch off from the plasma membrane, move through the cytoplasm and fuse with the plasma membrane of the other side, thereby discharging the contents.
– Pinocytosis is seen in microvilli of small intestine and in kidney cells.
2. Phagocytosis (cell eating)
– Phagocytosis is the process of “engulfing solid food particles” by cells through the plasma membrane (as seen in protozoa also). Vesicles formed here are called “Phagosomes” (1 to 2 µm).
– Phagosomes move through the cytoplasm and are dissolved and digested by enzymes of the lysosomes. The residues are ejected out of the plasma membrane by a process called EPHAGY.
– In Phagocytosis, bacteria etc are engulfed.
This is the transfer of small quantities of cytoplasm, together with their inclusions, from one cell to the other. This was demonstrated in bone marrow tissue.
P.S. All types of endocytosis occur by “Active Transport”.
Q1. Can we call the plasma membrane as unit membrane?
Ans. A unit membrane means the limiting membrane of the cell and its organelles, formerly viewed as a three-layered membrane composed of an inner lipid layer and two outer protein layers. This concept has been rejected, as the Fluid Mosaic model is the currently accepted one.
Asthma is a chronic disease in which sufferers have repeated attacks of difficulty in breathing and coughing. There seems to be an increase in the amount of asthma all over the world, especially in children. To understand what happens in asthmatic attacks, it’s helpful to visualise the basic structure of the airway tubes of the lung.
The main airway (windpipe, trachea) of the body is about 2 to 3cm across. It divides into its main branches (bronchi), which lead to the right and left lung, which divide further, like the branches of a tree, to supply air to all parts of the lungs.
The smallest tubes (bronchioles) are only millimetres wide and they are made up of ring-shaped muscles that are capable of contracting or relaxing. Anything that makes them contract will narrow the passages, which makes it more difficult for the air to pass through (so making it harder to breathe) and also gives rise to the characteristic wheezy noise that a person makes when they have an asthma attack.
Asthmatics tend to be sensitive to various types of irritants in the atmosphere that can trigger this contraction response from the bronchial muscles. The bronchioles also have an inner lining that becomes inflamed in asthma, which makes the lining swell and produce an excess amount of the mucus (phlegm) it normally makes, clogging up the tubes.
All of these processes contribute to the airway narrowing and the treatment for asthma is aimed at reversing them as much as possible. People of all ages get asthma but 50 per cent of sufferers are children under 10. Asthma is slightly more common among boys than girls. But after puberty the pattern reverses and among adults, women are more likely to develop asthma than men.
How do you get asthma?
Asthma can be triggered by external agents, such as irritants in the atmosphere which are breathed in, or by internal reactions within the body that have been caused by an external influence.
The kinds of provoking factors can be divided into two groups.
- Non-specific factors: all asthma patients are affected by a number of general things that are referred to as irritants. They include exertion (ie exercise), cold, smoke, scents and pollution.
- Specific factors: these are irritants or allergens in the form of pollen, dust, animal fur, mould and some kinds of food. An infection with a virus or bacteria, chemical fumes or other substances at the workplace and certain medicines, eg aspirin and other non-steroidal anti-inflammatory drugs (NSAIDs), may also cause an asthma attack.
To acquire asthma, people seem to need to have been born with a predisposition to the disease. It may not reveal itself until they have been exposed to some asthma irritants.
A mother who smokes, low birth weight, a lack of exposure to infection in early life and traffic fumes have all been associated with the increase in asthma. Less draughty houses resulting in an accumulation of house dust mites and cooking gases may also be part of the problem.
Currently, a great deal of research is being carried out to look for the genes that allow asthma to develop. But until we can prevent asthma, the aim of treatment is to suppress the symptoms and try to avoid the triggers where possible.
What might trigger acute asthma attacks?
- Air pollution including exposure to certain chemicals. An example is isocyanates, which are used in some painting and plastics industries.
- Airway infection.
- Allergies, eg to pollens, house dust mites, domestic animals (especially cats and horses), aspirin and non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen. |
A mood disorder is an umbrella term mental healthcare professionals use to describe all types of depression, including bipolar disorders.
Mood disorders occur among children, teenagers, and adults. Children and teens often present with different symptoms of mood disorders. Additionally, it can be challenging to diagnose mood disorders in children as they are not always capable of accurately expressing their feelings.
Fortunately, most mood disorders will respond favorably to a combination of medications, psychotherapy, and proper self-care.
Therapy, antidepressants, and support and self-care can help treat mood disorders.
What Are Mood Disorders?
Mood disorders are also known as affective disorders. This broad term encompasses all depressive disorders, as well as bipolar disorder. Both of these conditions impact your mood and functioning.
Among those with mood disorders, moods may range from low (depressed) to high or irritable (manic), characterized by dramatic swings between these extremes in the case of bipolar disorder.
What is an Example of a Mood Disorder?
If you have a clinical mood disorder, you will find your emotional state (mood) is either distorted or perhaps not consistent with your circumstances – being promoted at work and yet feeling deeply depressed, for instance.
For those with mood disorders, these shifts in emotional state can cause serious disruption in day-to-day activities.
Example of common mood disorders include:
- Major depressive disorder: Major depressive disorder impacts up to 6% of the US population, according to NIMH (the National Institute of Mental Health), and is characterized by prolonged, persistent spells of extreme sadness.
- Dysthymia (persistent depressive disorder): This is a chronic form of depression affecting 1.5% of the US population, according to the same data.
- Substance-induced depression: Substance-induced depression involves depression symptoms developing during or shortly after using substances, withdrawing from substances, or after being exposed to a medication.
- Depression associated with physical illness: This form of depression occurs in direct response to the physical symptoms of an underlying medical condition.
- Bipolar disorder: Bipolar disorder, previously known as manic depression, involves alternating periods of mania and depression. Bipolar is surprisingly common, affecting almost 1% of the US population.
- Cyclothymia: Cyclothymia is a disorder with emotional ups and downs, although less intense than those experienced in bipolar disorders.
- Seasonal affective disorder: Seasonal affective disorder (SAD) is a form of depression associated with fewer daylight hours from fall to spring.
- Disruptive mood dysregulation disorder: Disruptive mood dysregulation disorder triggers a chronic irritability in children often manifesting in temper outbursts inconsistent with the age of the child.
- Premenstrual dysphoric disorder: Mood changes associated with a woman’s premenstrual cycle.
Types of Mood Disorder
Mental health disorders and substance use disorders are diagnosed using the criteria set out in the APA’s Diagnostic and Statistical Manual of Mental Disorders.
When DSM-5 replaced the outgoing fourth edition of this diagnostic tool (DSM-IV), one of the changes involved the classification of mood disorders. Per DSM-5, mood disorders are now separated into two distinct groups:
- Depressive disorders
- Bipolar disorder/related disorders
As such, the main subtypes of mood disorders include:
- Major depressive disorder: Major depressive disorder is the clinical descriptor for depression.
- Bipolar I disorder: The more severe form of bipolar disorder; during manic episodes, many people with bipolar I engage in harmful or damaging behaviors.
- Bipolar II disorder: For a bipolar II diagnosis, you must experience at least one episode of hypomania – a less intense form of mania – in addition to an episode of major depression.
Disruptive Mood Dysregulation Disorder
In the newest fifth edition of DSM, there are three new depressive disorders, including disruptive mood dysregulation disorder:
- Disruptive mood dysregulation disorder: This disorder was added to DSM-5 for under-18s exhibiting persistent anger and irritability, often accompanied by extreme outbursts of temper with no significant provocation.
- Persistent depressive disorder: This diagnosis incorporates chronic major depressive disorder as well as depression lasting for two years or more (previously known as dysthymia).
- Premenstrual dysphoric disorder: Symptoms of this disorder present in the week before menstruation begins and resolve at the end of the menstrual cycle.
Is Anxiety a Mood Disorder?
As outlined above, mood disorders include all types of depression as well as bipolar disorder, but not anxiety.
Many people diagnosed with depression also suffer from anxiety, but anxiety is not considered a mood disorder.
Is Depression a Mood Disorder?
Depression in all its forms is considered a mood disorder, per DSM-5.
Mood Disorder Questionnaire
A team of researchers and psychiatrists developed the MDQ (mood disorder questionnaire) as a diagnostic tool for bipolar disorder.
The screening instrument contains 13 yes/no questions concerning the symptoms of bipolar, and additional questions concerning the co-occurrence of symptoms and the impairment of functioning. The questionnaire takes no more than five minutes to complete.
Any positive screening using the mood disorder questionnaire should always be followed up with a complete clinical assessment.
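The commonly cited scoring rule for the MDQ is: at least 7 of the 13 symptom questions answered “yes”, symptoms that co-occurred, and at least moderate impairment. The Python sketch below illustrates that rule; it is a simplified illustration under those assumptions, not a clinical tool.

    def mdq_screen_positive(yes_answers, symptoms_co_occurred, impairment):
        """Simplified MDQ screen: all three criteria must be met."""
        assert len(yes_answers) == 13, "the MDQ has 13 yes/no symptom questions"
        return (sum(yes_answers) >= 7
                and symptoms_co_occurred
                and impairment in ("moderate", "serious"))

    # Example: 8 'yes' answers, co-occurring symptoms, moderate impairment
    print(mdq_screen_positive([True] * 8 + [False] * 5, True, "moderate"))  # True

As noted above, a positive screen is only a prompt for a complete clinical assessment.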
Treatment for Mood Disorders
Mood disorders often respond favorably to treatment, typically a combination of antidepressants and mood-stabilizing medications.
The following medications are often used to treat depression and bipolar disorders:
- Antidepressants: SSRIs (selective serotonin reuptake inhibitors) and SNRIs (serotonin and norepinephrine reuptake inhibitors) have similar mechanisms of action. Both can be effective for the treatment of depression. Some older types of antidepressants can also be effective, although these typically trigger more adverse side effects. Most antidepressants will take a few weeks before taking effect. You may need to try more than one type of antidepressant before your symptoms are alleviated.
- Antipsychotics: The manic and mixed episodes associated with bipolar disorder respond favorably to atypical antipsychotic medications. These medications can also soothe some symptoms of depression in some people.
- Mood stabilizers: Mood stabilizers help to regulate the mood swings prevalent among those with bipolar disorder.
Many forms of psychotherapy can effectively treat the symptoms of depression and bipolar. These include:
- CBT (cognitive-behavioral therapy)
- DBT (dialectical behavior therapy)
- Interpersonal therapy
- Problem-solving therapy
- Family therapy
In some treatment-resistant individuals, alternative therapies such as ECT (electroconvulsive therapy) and transcranial magnetic stimulation can be effective.
With an accurate diagnosis, most mood disorders are treatable.
Get Help at Renaissance Recovery
One of the key issues faced by those with mood disorders, especially when undiagnosed, is the tendency to self-medicate symptoms with alcohol or drugs. While this may provide fleeting relief, self-medication will inflame symptoms over time, possibly leading to a co-occurring disorder. If you have a mood disorder co-occurring with addiction, our dual diagnosis treatment program will help you unpack both conditions simultaneously.
If you need help with a mood disorder in isolation, we have specialist treatment programs for both depression and bipolar disorder.
All of our outpatient programs offer you access to the following therapies:
- MAT (medication-assisted treatment)
- Psychotherapy like CBT or DBT
- Counseling (individual and group)
If a traditional outpatient program doesn’t provide enough structure and support, we also offer IOPs (intensive outpatient programs) and PHPs (partial hospitalization programs), ideal for those with more severe mood disorders and co-occurring disorders. |
How Big Is Our Galaxy
What if someone could travel from one end of the galaxy to the other?
Our Earth floats amid a vast whirlpool of stars, and just how far those stars extend is almost beyond imagination. Looking up at a clear, cloudless village sky at night, you can see a faint band of light stretching from one end of the sky to the other, like a bright white cloud. This is actually part of our galaxy, the Milky Way. To ancient people the band looked like milk spilled across the sky, which is how the Milky Way got its name. There is, of course, an interesting story behind this name in Greek mythology, which anyone who is interested can look up.
The shape of our galaxy
We can see only a small part of the Milky Way galaxy with the naked eye. No matter how hard you try on a dark night, you will never see the whole of our galaxy, because we live inside it. There is an old proverb: the insect inside the fruit never sees the whole fruit. We are in the same situation. For the same reason, you will never see the whole city from the front door of your house; only a little part of the town can be seen from your doorstep. But if the Milky Way could somehow be seen from above, in a bird's-eye view, it would be a great sight. From there, the bright whirlpool of countless stars would be visible: four spiral arms curling around the center like a pinwheel. Our Earth, along with the rest of the Solar System, sits on the inner edge of one of those arms.
One of the four arms of the Milky Way is called Orion, and this is where our Sun is located. Measured from the center of the galaxy to its outer edge, the Sun sits about two-thirds of the way out, roughly 30 thousand light-years from the center. The Sun is one of the hundreds of billions of stars that orbit within the Milky Way.
Traveling at the speed of light
If you want to explore the whole galaxy in the kind of spaceship shown in science fiction, you will need a serious upgrade. The speed of today's conventional rockets will not do; at the very least, the spaceship must be able to move at the speed of light. However, no one knows whether such a speed can ever be achieved.
If you travel at the speed of light, you cross about three hundred thousand kilometers in just one second. At that speed you could circle the Earth about seven times in a single second, reach the Moon in about 1.3 seconds, and visit Mars in roughly 3 to 22 minutes, depending on where the two planets are in their orbits.
How much time would it take to travel to the nearest star?
The closest star to the Sun is Proxima Centauri. Moving at the speed of light, it would take a little over 4 years to reach this neighboring star; in the fastest spacecraft we can build today, the same trip would take tens of thousands of years. And even if some incredible light-speed spaceship ever came into our hands, think twice before setting out from Earth for the center of our galaxy, because that journey would take 30 thousand years. Going from one end of the Milky Way to the other would take a hundred thousand years, since the distance across our galaxy is one hundred thousand light-years (the distance light travels in a year is called one light-year). In other words, the diameter of the Milky Way galaxy is one hundred thousand light-years. Our Sun is itself circling the center of the galaxy, carrying the whole Solar System with it at about 230 kilometers per second. The Sun takes about 220 to 250 million years to complete one circuit of the Milky Way; this period is called a cosmic year.
The Milky Way's neighboring galaxy is called Andromeda. Its distance from Earth is about 2.5 million light-years, which means the journey would take 2.5 million years even at the speed of light. Now you can guess how far apart galaxies can be.
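The arithmetic behind all of these travel times is simple enough to check. The Python sketch below uses rounded distances (the Mars figure is an average; the real distance varies with the two planets' positions):

    import math

    C_KM_S = 299_792  # speed of light in km/s

    # Nearby hops, measured in kilometres
    print(f"Moon (384,400 km): {384_400 / C_KM_S:.1f} seconds one way")
    print(f"Mars (~225 million km on average): {2.25e8 / C_KM_S / 60:.1f} minutes one way")

    # For interstellar trips, travel time at light speed (in years)
    # equals the distance in light-years, by definition.
    for name, ly in [("Proxima Centauri", 4.2),
                     ("Centre of the Milky Way", 30_000),
                     ("Across the Milky Way", 100_000),
                     ("Andromeda", 2_500_000)]:
        print(f"{name}: {ly:,} years at light speed")

    # Cosmic year: one solar orbit of the galactic centre, assuming a
    # circular orbit of radius 30,000 light-years at 230 km/s.
    LY_KM = 9.46e12                      # kilometres in one light-year
    orbit_s = 2 * math.pi * 30_000 * LY_KM / 230
    print(f"Cosmic year: ~{orbit_s / 3.156e7 / 1e6:.0f} million years")

The last line prints roughly 246 million years, which agrees with the 220 to 250 million years quoted above.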
The total number of stars in the Milky Way galaxy
Scientists estimate our Milky Way galaxy has around 30 thousand crores (300,000,000,000) stars. Can you imagine how huge that number is? Let me help you out a little by comparing it to the sand on a beach. Suppose every grain of sand is a star. A handful of sand picked up from the beach holds perhaps tens of thousands of grains; that is roughly the number of stars you can see with the naked eye at night, and it is only a very small part of the total number of stars in this galaxy. You must have seen a dump truck carrying sand on the road. The amount of sand in a good-sized dump truck could stand for the total number of stars in the Milky Way galaxy: 30 thousand crores. Surprisingly, the Milky Way is not the only galaxy in the universe. There are billions more like it, most of them still unknown to us.
The size of the observable universe
Scientists put the diameter of the observable universe at about 9,300 crores (93 billion) light-years. So how many stars are there in the whole universe? No one can say for certain, but there is no harm in estimating. Scientists guess there are about 200,000,000,000,000,000,000,000 stars in our universe (2 followed by 23 zeros). That is a huge number, so let's go back to the sand to grasp how big it is. Take all the stars of the universe as grains of sand: all the sand on all the beaches of our world, taken together, is roughly equal to the number of stars in the universe.
If your head is not spinning yet, pick up a single grain of sand on your fingertip. That grain, too, is a star: our Sun. And if our Sun were really the size of that grain, you would not be able to see our beloved Earth with the naked eye.
By now you must have a sense of how big our galaxy and the universe are; their size, and the number of stars they contain, is unimaginable. Knowing this truth, you may feel small and trivial. But until we find any aliens, remember one thing: we may be nothing in the universe, yet of all living beings, only we understand and know where and how we exist within it.
Food preservation
Preservation usually involves preventing the growth of bacteria, yeasts, fungi, and other micro-organisms (although some methods work by introducing benign bacteria or fungi to the food), as well as retarding the oxidation of fats, which causes rancidity. Food preservation can also include processes which inhibit visual deterioration that can occur during food preparation, such as the enzymatic browning reaction in apples after they are cut.
Many processes designed to preserve food will involve a number of food preservation methods. Preserving fruit, by turning it into jam, for example, involves boiling (to reduce the fruit’s moisture content and to kill bacteria, yeasts, etc.), sugaring (to prevent their re-growth) and sealing within an airtight jar (to prevent recontamination). There are many traditional methods of preserving food that limit the energy inputs and reduce carbon footprint.
Maintaining or creating nutritional value, texture and flavour is an important aspect of food preservation, although, historically, some methods drastically altered the character of the food being preserved. In many cases these changes have now come to be seen as desirable qualities – cheese, yoghurt and pickled onions being common examples.
Preservation processes include:
- Heating to kill or denature micro-organisms (e.g., boiling)
- Oxidation (e.g., use of sulfur dioxide)
- Ozonation (e.g., use of ozone [O3] or ozonated water to kill undesired microbes)
- Toxic inhibition (e.g., smoking, use of carbon dioxide, vinegar, alcohol etc.)
- Dehydration (drying)
- Osmotic inhibition (e.g., use of syrups)
- Low temperature inactivation (e.g., freezing)
- Ultra high water pressure (e.g. a type of “cold” pasteurization; intense water pressure kills microbes which cause food deterioration and affect food safety)
- Combinations of these methods
Refrigeration preserves food by slowing down the growth and reproduction of micro-organisms and the action of enzymes which cause food to rot. The introduction of commercial and domestic refrigerators drastically improved the diets of many in the Western world by allowing foods such as fresh fruit, salads and dairy products to be stored safely for longer periods, particularly during warm weather.
Freezing is also one of the most commonly used processes commercially and domestically for preserving a very wide range of food including prepared food stuffs which would not have required freezing in their unprepared state. For example, potato waffles are stored in the freezer, but potatoes themselves require only a cool dark place to ensure many months' storage. Cold stores provide large volume, long-term storage for strategic food stocks held in case of national emergency in many countries.
Vacuum-packing stores food in a vacuum environment, usually in an air-tight bag or bottle. The vacuum environment strips bacteria of oxygen needed for survival, slowing spoiling. Vacuum-packing is commonly used for storing nuts to reduce loss of flavor from oxidation.
Salting or curing draws moisture from the meat through a process of osmosis. Meat is cured with salt or sugar, or a combination of the two. Nitrates and nitrites are also often used to cure meat and contribute the characteristic pink color, as well as inhibition of Clostridium botulinum.
Sugar is used to preserve fruits, either in syrup with fruit such as apples, pears, peaches, apricots, plums or in crystallized form where the preserved material is cooked in sugar to the point of crystallisation and the resultant product is then stored dry. This method is used for the skins of citrus fruit (candied peel), angelica and ginger. A modification of this process produces glacé fruit such as glacé cherries where the fruit is preserved in sugar but is then extracted from the syrup and sold, the preservation being maintained by the sugar content of the fruit and the superficial coating of syrup. The use of sugar is often combined with alcohol for preservation of luxury products such as fruit in brandy or other spirits. These should not be confused with fruit flavored spirits such as cherry brandy or Sloe gin.
Artificial food additives
Preservative food additives can be antimicrobial, inhibiting the growth of bacteria or fungi (including mold), or antioxidant, such as oxygen absorbers, which inhibit the oxidation of food constituents. Common antimicrobial preservatives include calcium propionate, sodium nitrate, sodium nitrite, sulfites (sulfur dioxide, sodium bisulfite, potassium hydrogen sulfite, etc.) and disodium EDTA. Antioxidants include BHA and BHT. Other preservatives include formaldehyde (usually in solution), glutaraldehyde (kills insects), ethanol and methylchloroisothiazolinone.
Pickling is a method of preserving food in an edible anti-microbial liquid. Pickling can be broadly categorized as chemical pickling or fermentation pickling. In chemical pickling, the food is placed in an edible liquid that inhibits or kills bacteria and other micro-organisms. Typical pickling agents include brine (high in salt), vinegar, alcohol, and vegetable oil, especially olive oil but also many other oils. Many chemical pickling processes also involve heating or boiling so that the food being preserved becomes saturated with the pickling agent. Common chemically pickled foods include cucumbers, peppers, corned beef, herring, and eggs, as well as mixed vegetables such as piccalilli.
In fermentation pickling, the food itself produces the preservation agent, typically by a process that produces lactic acid. Fermented pickles include sauerkraut, nukazuke, kimchi, surströmming, and curtido. Some pickled cucumbers are also fermented.
Sodium hydroxide (lye) makes food too alkaline for bacterial growth. Lye will saponify fats in the food, which will change its flavor and texture. Lutefisk uses lye in its preparation, as do some olive recipes. Modern recipes for century eggs also call for lye. Masa harina and hominy use agricultural lime in their preparation and this is often misheard as 'lye'.
Canning and bottling
Canning involves cooking food, sealing it in sterile cans or jars, and boiling the containers to kill or weaken any remaining bacteria as a form of sterilization. It was invented by Nicolas Appert. Foods have varying degrees of natural protection against spoilage and may require that the final step occur in a pressure cooker. High-acid fruits like strawberries require no preservatives to can and only a short boiling cycle, whereas marginal fruits such as tomatoes require longer boiling and the addition of other acidic elements. Low-acid foods, such as vegetables and meats, require pressure canning. Food preserved by canning or bottling is at immediate risk of spoilage once the can or bottle has been opened.
Lack of quality control in the canning process may allow ingress of water or micro-organisms. Most such failures are rapidly detected as decomposition within the can causes gas production and the can will swell or burst. However, there have been examples of poor manufacture (underprocessing) and poor hygiene allowing contamination of canned food by the obligate anaerobe Clostridium botulinum, which produces an acute toxin within the food, leading to severe illness or death. This organism produces no gas or obvious taste and remains undetected by taste or smell. Its toxin is denatured by cooking, though. Cooked mushrooms, handled poorly and then canned, can support the growth of Staphylococcus aureus, which produces a toxin that is not destroyed by canning or subsequent reheating.
Food may be preserved by cooking in a material that solidifies to form a gel. Such materials include gelatine, agar, maize flour and arrowroot flour. Some foods naturally form a protein gel when cooked such as eels and elvers, and sipunculid worms which are a delicacy in the town of Xiamen in Fujian province of the People's Republic of China. Jellied eels are a delicacy in the East End of London where they are eaten with mashed potatoes. Potted meats in aspic, (a gel made from gelatine and clarified meat broth) were a common way of serving meat off-cuts in the UK until the 1950s. Many jugged meats are also jellied.
Meat can be preserved by jugging, the process of stewing the meat (commonly game or fish) in a covered earthenware jug or casserole. The animal to be jugged is usually cut into pieces, placed into a tightly-sealed jug with brine or gravy, and stewed. Red wine and/or the animal's own blood is sometimes added to the cooking liquid. Jugging was a popular method of preserving meat up until the middle of the 20th century.
Irradiation of food is the exposure of food to ionizing radiation; either high-energy electrons or X-rays from accelerators, or by gamma rays (emitted from radioactive sources as Cobalt-60 or Caesium-137). The treatment has a range of effects, including killing bacteria, molds and insect pests, reducing the ripening and spoiling of fruits, and at higher doses inducing sterility. The technology may be compared to pasteurization; it is sometimes called 'cold pasteurization', as the product is not heated. Irradiation is not effective against viruses or prions, it cannot eliminate toxins already formed by microorganisms, and is only useful for food of high initial quality.
The radiation process is unrelated to nuclear energy, but it may use the radiation emitted from radioactive nuclides produced in nuclear reactors. Ionizing radiation is hazardous to life (hence its usefulness in sterilisation); for this reason irradiation facilities have a heavily shielded irradiation room where the process takes place. Radiation safety procedures ensure that neither the workers in such a facility nor the environment receive any radiation dose from the facility. Irradiated food does not become radioactive, and national and international expert bodies have declared food irradiation wholesome; UN organizations such as the WHO and FAO endorse its use. However, the wholesomeness of consuming such food is disputed by opponents and consumer organizations. International legislation on whether food may be irradiated varies worldwide from no regulation to full banning. Irradiation may allow lower quality or contaminated foodstuffs to be rendered marketable.
It is estimated that about 500,000 tons of food items are irradiated per year worldwide in over 40 countries. These are mainly spices and condiments with an increasing segment of fresh fruit irradiated for fruit fly quarantine.
Pulsed electric field processing
Pulsed electric field (PEF) processing is a method for processing cells by means of brief pulses of a strong electric field. PEF holds potential as a type of low temperature alternative pasteurization process for sterilizing food products. In PEF processing, a substance is placed between two electrodes, then the pulsed electric field is applied. The electric field enlarges the pores of the cell membranes which kills the cells and releases their contents. PEF for food processing is a developing technology still being researched. There have been limited industrial applications of PEF processing for the pasteurization of fruit juices.
Modifying the atmosphere is a way to preserve food by operating on the atmosphere around it. Salad crops, which are notoriously difficult to preserve, are now being packaged in sealed bags with an atmosphere modified to reduce the oxygen (O2) concentration and increase the carbon dioxide (CO2) concentration. There is concern that although salad vegetables retain their appearance and texture in such conditions, this method of preservation may not retain nutrients, especially vitamins. Grains may be preserved using carbon dioxide by one of two methods: either a block of dry ice is placed in the bottom and the can is filled with grain, or the container is purged from the bottom with gaseous carbon dioxide from a cylinder or bulk supply vessel.
Nitrogen gas (N2) at concentrations of 98% or higher is also used effectively to kill insects in grain through hypoxia. However, carbon dioxide has an advantage in this respect, as it kills organisms through hypercarbia and, depending on the concentration, hypoxia, though it requires concentrations above roughly 35%. This makes carbon dioxide preferable for fumigation in situations where a hermetic seal cannot be maintained.
High pressure food preservation
High pressure food preservation refers to the use of high pressure for food preservation. "Pressed inside a vessel exerting 70,000 pounds per square inch (480 MPa) or more, food can be processed so that it retains its fresh appearance, flavour, texture and nutrients while disabling harmful microorganisms and slowing spoilage." By 2001, adequate commercial equipment had been developed, and by 2005 the process was being used for widely sold products ranging from orange juice to guacamole to deli meats.
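As a quick sanity check on the quoted figures, the psi-to-MPa conversion can be done in a line of Python; the conversion factor below is the standard pascals-per-psi value, not something given in the text.

```python
# Verify that 70,000 psi matches the "480 MPa" quoted above.
PSI_TO_PA = 6894.757                      # pascals per psi (standard factor)
print(70_000 * PSI_TO_PA / 1e6)           # ~482.6 MPa, i.e. roughly 480 MPa
```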
Burial in the ground
Burial of food can preserve it due to a variety of factors: lack of light, lack of oxygen, cool temperatures, pH level, or desiccants in the soil. Burial may be combined with other methods such as salting or fermentation. Most foods can be preserved in soil that is very dry and salty (thus a desiccant), or soil that is frozen.
Many root vegetables are very resistant to spoilage and require no other preservation than storage in cool dark conditions, for example by burial in the ground, such as in a storage clamp. Century eggs are created by placing eggs in alkaline mud (or other alkaline substance) resulting in their "inorganic" fermentation through raised pH instead of spoiling. The fermentation preserves them and breaks down some of the complex, less flavorful proteins and fats into simpler more flavorful ones. Cabbage was traditionally buried in the fall in northern farms in the USA for preservation. Some methods keep it crispy while other methods produce sauerkraut. A similar process is used in the traditional production of kimchi. Sometimes meat is buried under conditions which cause preservation. If buried on hot coals or ashes, the heat can kill pathogens, the dry ash can desiccate, and the earth can block oxygen and further contamination. If buried where the earth is very cold, the earth acts like a refrigerator.
Controlled use of micro-organisms
Some foods, such as many cheeses, wines, and beers will keep for a long time because their production uses specific micro-organisms that combat spoilage from other less benign organisms. These micro-organisms keep pathogens in check by creating an environment toxic for themselves and other micro-organisms by producing acid or alcohol. Starter micro-organisms, salt, hops, controlled (usually cool) temperatures, controlled (usually low) levels of oxygen and/or other methods are used to create the specific controlled conditions that will support the desirable organisms that produce food fit for human consumption.
Biopreservation is the use of natural or controlled microbiota or antimicrobials as a way of preserving food and extending its shelf life. Beneficial bacteria or the fermentation products produced by these bacteria are used in biopreservation to control spoilage and render pathogens inactive in food. It is a benign ecological approach which is gaining increasing attention.
Of special interest are lactic acid bacteria (LAB). Lactic acid bacteria have antagonistic properties which make them particularly useful as biopreservatives. When LABs compete for nutrients, their metabolites often include active antimicrobials such as lactic and acetic acid, hydrogen peroxide, and peptide bacteriocins. Some LABs produce the antimicrobial nisin which is a particularly effective preservative.
These days LAB bacteriocins are used as an integral part of hurdle technology. Using them in combination with other preservative techniques can effectively control spoilage bacteria and other pathogens, and can inhibit the activities of a wide spectrum of organisms, including inherently resistant Gram-negative bacteria.
Hurdle technology is a method of ensuring that pathogens in food products can be eliminated or controlled by combining more than one approach. These approaches can be thought of as "hurdles" the pathogen has to overcome if it is to remain active in the food. The right combination of hurdles can ensure all pathogens are eliminated or rendered harmless in the final product.
Hurdle technology has been defined by Leistner (2000) as an intelligent combination of hurdles which secures the microbial safety and stability as well as the organoleptic and nutritional quality and the economic viability of food products. The organoleptic quality of the food refers to its sensory properties, that is its look, taste, smell and texture.
Examples of hurdles in a food system are high temperature during processing, low temperature during storage, increasing the acidity, lowering the water activity or redox potential, or the presence of preservatives or biopreservatives. According to the type of pathogens and how risky they are, the intensity of the hurdles can be adjusted individually to meet consumer preferences in an economical way, without sacrificing the safety of the product.
Principal hurdles used for food preservation (after Leistner, 1995):
- High temperature (F): heating
- Low temperature (T): chilling, freezing
- Reduced water activity (aw): drying, curing, conserving
- Increased acidity (pH): acid addition or formation
- Reduced redox potential (Eh): removal of oxygen or addition of ascorbate
- Biopreservatives: competitive flora such as microbial fermentation
- Other preservatives: sorbates, sulfites, nitrites
- Blast chilling
- Dietary supplement
- Food and Bioprocess Technology
- Food chemistry
- Food engineering
- Food fortification
- Food manufacturing
- Food microbiology
- Food packaging
- Food processing
- Food rheology
- Food science
- Food spoilage
- Food technology
- Gourmet Library and museum
- Refrigerate after opening
- ^ "Preserving Food without Freezing or Canning, Chelsea Green Publishing, 1999"
- ^ "Historical Origins of Food Preservation." University of Georgia, National Center for Home Food Preservation. Accessed June 2011.
- ^ Nicolas Appert inventeur et humaniste by Jean-Paul Barbier, Paris, 1994 and http://www.appert-aina.com
- ^ anon., Food Irradation - A technique for preserving and improving the safety of food, WHO, Geneva, 1991
- ^ Hauther,W. & Worth, M., Zapped! Irradiation and the Death of Food, Food & Water Watch Press, Washington, DC, 2008
- ^ Consumers International - Home
- ^ NUCLEUS - Food Irradiation Clearances
- ^ Food irradiation - Position of ADA J Am Diet Assoc. 2000;100:246-253
- ^ C.M. Deeley, M. Gao, R. Hunter, D.A.E. Ehlermann, The development of food irradiation in the Asia Pacific, the Americas and Europe; tutorial presented to the International Meeting on Radiation Processing, Kuala Lumpur, 2006. http://www.doubleia.org/index.php?sectionid=43&parentid=13&contentid=494
- ^ Annis, P.C. and Dowsett, H.A. 1993. Low oxygen disinfestation of grain: exposure periods needed for high mortality. Proc. International Conference on Controlled Atmosphere and Fumigation. Winnipeg, June 1992, Caspit Press, Jerusalem, pp 71-83.
- ^ Annis, P.C. and Morton, R. 1997. The acute mortality effects of carbon dioxide on various life stages of Sitophilus oryzae. J. Stored Prod.Res. 33. 115-124
- ^ "High-Pressure Processing Keeps Food Safe". Military.com. Archived from the original on 2008-02-02. http://web.archive.org/web/20080202232043/http://www.military.com/soldiertech/0,14632,Soldiertech_Squeeze,,00.html. Retrieved 2008-12-16. "Pressed inside a vessel exerting 70,000 pounds per square inch or more, food can be processed so that it retains its fresh appearance, flavor, texture and nutrients while disabling harmful microorganisms and slowing spoilage."
- ^ a b c Ananou S, Maqueda M, Martínez-Bueno M and Valdivia E (2007) "Biopreservation, an ecological approach to improve the safety and shelf-life of foods" In: A. Méndez-Vilas (Ed.) Communicating Current Research and Educational Topics and Trends in Applied Microbiology, Formatex. ISBN 9788461194230.
- ^ Yousef AE and Carolyn Carlstrom C (2003) Food microbiology: a laboratory manual Wiley, Page 226. ISBN 9780471391050.
- ^ FAO: Preservation techniques Fisheries and aquaculture department, Rome. Updated 27 May 2005. Retrieved 14 March 2011.
- ^ Alzamora SM, Tapia MS and López-Malo A (2000) Minimally processed fruits and vegetables: fundamental aspects and applications Springer, Page 266. ISBN 9780834216723.
- ^ a b Alasalvar C (2010) Seafood Quality, Safety and Health Applications John Wiley and Sons, Page 203. ISBN 9781405180702.
- ^ Leistner I (2000) "Basic aspects of food preservation by hurdle technology" International Journal of Food Microbiology, 55:181–186.
- ^ Leistner L (1995) "Principles and applications of hurdle technology" In Gould GW (Ed.) New Methods of Food Preservation, Springer, pp. 1-21. ISBN 9780834213418.
- ^ Lee S (2004) "Microbial Safety of Pickled Fruits and Vegetables and Hurdle Technology" Internet Journal of Food Safety, 4: 21-32.
- Riddervold, Astri. Food Conservation. ISBN 9780907325406.
- Dehydrating Food
- Preserving foods ~ from the Clemson Extension Home and Garden Information Center
- National Center for Home Food Preservation
- BBC News Online - US army food... just add urine
- Home Economics Archive: Tradition, Research, History (HEARTH), an e-book collection of over 1,000 classic books on home economics spanning 1850 to 1950, created by Cornell University's Mann Library.
Madagascar is a large island in the Indian Ocean located 250 miles off the southeast coast of Africa. Now home to the Republic of Madagascar, the island was first settled by natives of Borneo, who arrived in waves by outrigger canoe between 350 BC and 550 AD. Arab traders arrived in about 700 AD, introducing Islam and the Arabic script. Bantu tribesmen crossed over from modern-day Mozambique in large numbers in about 1000 AD. On August 10, 1500, the Portuguese explorer Diogo Dias (brother of Bartolomeu Dias) sighted the island during his second voyage to India. The French established trading posts along the east coast of Madagascar during the late 1600s, mostly to support their ventures in India. From the last quarter of the seventeenth century through the first quarter of the eighteenth century, a series of pirate strongholds were established on Madagascar. By that time, numerous merchant ships, flying the flags of the British, French, Dutch, and others, were making regular voyages from India, the Spice Islands, China, and Japan to Europe. The merchant ships were richly laden and largely unprotected. Just as important, naval patrols were few in number and there was no effective government on Madagascar, a precursor to modern-day Somalia. Pirates including Thomas Tew and Captain (William) Kidd operated out of Ranter Bay, Île Ste Marie, and the Bay of Saint-Augustin, as well as other locations. The legendary pirate stronghold of Libertalia (if it existed at all) was on the small island of Nosy Boroha off the northeast coast of Madagascar. The pirates were succeeded by slave-traders. In 1883, France invaded the island. After some years of resistance, opposition was quelled and France formally annexed Madagascar in 1896. The Malagasy Republic was proclaimed in 1958 and the island obtained full independence on June 26, 1960.
Dentists and dental hygienists use a lot of words to describe parts of your mouth, problems, and procedures. In fact, it often seems as though dental terminology is a language of its own.
Here is a list of some of the more common terms you may hear at the dentist and what they mean.
- Abscess: Acute or chronic localized inflammation, probably with a collection of pus, associated with tissue destruction and, frequently, swelling; usually secondary to infection.
- Adult Dentition: The permanent teeth of adulthood that either replace the primary dentition or erupt distally to the primary molars.
- Alveolar: Referring to the bone to which a tooth is attached.
- Apex: The tip or end of the root end of the tooth.
- Artificial Crown: Restoration covering or replacing the major part, or the whole, of the clinical crown of a tooth or implant.
- Bicuspid: A premolar tooth; a tooth with two cusps.
- Bruxism: The parafunctional grinding of the teeth.
- Cavity: Missing tooth structure. A cavity may be due to decay, erosion or abrasion. If caused by caries; also referred to as carious lesion.
- Cementum: Hard connective tissue covering the outer surface of a tooth root.
- Complete Denture: A prosthetic for the edentulous maxillary or mandibular arch, replacing the full dentition. Usually includes six anterior teeth and eight posterior teeth.
- Crown: An artificial replacement that restores missing tooth structure by surrounding the remaining coronal tooth structure, or is placed on a dental implant. It is made of metal, ceramic or polymer materials or a combination of such materials.
- Cuspid: Single cusped tooth located between the incisors and bicuspids.
- Decay: The lay term for carious lesions in a tooth; decomposition of tooth structure.
- Dentin: Hard tissue which forms the bulk of the tooth and develops from the dental papilla and dental pulp, and in the mature state is mineralized.
- Denture: An artificial substitute for some or all of the natural teeth and adjacent tissues.
- Dry Socket: Localized inflammation of the tooth socket following extraction due to infection or loss of blood clot; osteitis.
- Enamel: Hard calcified tissue covering dentin of the crown of tooth.
- Filling: A lay term used for the restoring of lost tooth structure by using materials such as metal, alloy, plastic or porcelain.
- Gingivitis: Inflammation of gingival tissue without loss of connective tissue.
- Impacted Tooth: An unerupted or partially erupted tooth that is positioned against another tooth, bone, or soft tissue so that complete eruption is unlikely.
- Incisor: A tooth for cutting or gnawing; located in the front of the mouth in both jaws.
- Molar: Teeth posterior to the premolars (bicuspids) on either side of the jaw; grinding teeth, having large crowns and broad chewing surfaces.
- Occlusal: Pertaining to the biting surfaces of the premolar and molar teeth or contacting surfaces of opposing teeth or opposing occlusion rims.
- Plaque: A soft sticky substance that accumulates on teeth composed largely of bacteria and bacterial derivatives.
- Posterior: Refers to teeth and tissues towards the back of the mouth (distal to the canines); maxillary and mandibular premolars and molars.
- Pulp: Connective tissue that contains blood vessels and nerve tissue which occupies the pulp cavity of a tooth.
- Root: The anatomic portion of the tooth that is covered by cementum and is in the alveolus (socket) where it is attached by the periodontal apparatus; radicular portion of tooth.
- Root Canal: The portion of the pulp cavity inside the root of a tooth; the chamber within the root of the tooth that contains the pulp.
- Root Planing: A definitive treatment procedure designed to remove cementum and/or dentin that is rough, may be permeated by calculus, or contaminated with toxins or microorganisms.
- Scaling: Removal of plaque, calculus, and stain from teeth.
These definitions were provided by the American Dental Association. For more dental terms and meanings, visit their glossary page here.
Do you need to schedule your next dentist appointment? We make it easy! Visit our online portal today! |
Protect Yourself with Yellow Fever Vaccine!
Yellow Fever is a viral disease that is transmitted to humans through the bite of infected mosquitoes. Yellow Fever occurs in tropical regions of Africa and in parts of South America. Yellow Fever is a very rare cause of illness in U.S. travelers, but Yellow Fever vaccination is strongly recommended and oftentimes required for entrance into certain countries. The last epidemic of Yellow Fever in North America occurred in New Orleans in 1905.
What can people do to prevent becoming infected with Yellow Fever virus?
Yellow Fever can be prevented by vaccination with Yellow Fever vaccine, a live virus vaccine which has been used for several decades. Because Yellow Fever no longer occurs in the United States or most of the world, only travelers to certain parts of the world need to take the Yellow Fever vaccine. A traveler's risk for acquiring Yellow Fever is determined by various factors, including their immunization status, travel destination and itinerary, time of year, length of exposure and level of local occurrence.
Travelers should get vaccinated for Yellow Fever before visiting areas where Yellow Fever occurs. In the United States, the vaccine is given only at designated, government approved Yellow Fever vaccination centers. International regulations require proof of Yellow Fever immunization for travel to and from certain countries in Africa and South America. Yellow Fever vaccine must be given at least 10 days before entry into Yellow Fever endemic areas in order to provide effective protection and to comply with entrance requirements of certain countries. A single dose confers immunity lasting 10 years or more. If a person is at continued risk of Yellow Fever infection, a booster dose is needed every 10 years. Adults and children over 9 months can take this vaccine.
Travelers should also avoid mosquito bites when traveling in tropical areas by wearing protective clothing, using mosquito repellents and sleeping under mosquito bed nets when in areas with Yellow Fever transmission. Mosquitoes that spread Yellow Fever usually bite during the day, especially at dusk and dawn.
If you are traveling outside of the U.S., visit http://wwwnc.cdc.gov/travel/content/vaccinations.aspx for more specific information related to Yellow Fever vaccine and other information you need before you travel.
Most of the above information was taken from http://wwwnc.cdc.gov/travel/yellowbook/2010/chapter-2/yellow-fever.aspx
For more specific information, refer to the Yellow Fever Vaccine Information Statement (VIS) found at http://www.cdc.gov/vaccines/pubs/vis/downloads/vis-yf.pdf
SHOTS, etc. is a government approved provider of Yellow Fever vaccine. This vaccine can only be administered in the office of an approved provider. An appointment is necessary. Visit www.SHOTSetc.com for more information. |
Eclipse Learning Activities & Demos
There are a plethora of high-quality activities, lessons, and demos concerning the upcoming eclipse; here you will find a small subset of these that are especially helpful.
This hands-on activity from the National Informal STEM Education Network demonstrates how the positions of the Earth, Moon, and Sun can align just right to produce a solar eclipse.
Another example from the National Informal STEM Education Network, this activity uses a beach ball and a tennis ball to demonstrate how the Moon, though much, much smaller than the Sun, can completely obscure the solar disk during an eclipse thanks to its comparatively short distance from the Earth relative to the Sun.
In this activity from the Night Sky Network of the Jet Propulsion Laboratory, participants use simple materials to construct a 3-D model of the Earth-Sun-Moon system to demonstrate how a solar eclipse occurs.
This demo, again from Night Sky Network of the Jet Propulsion Laboratory, constructs a different 3-D model of the Earth-Sun-Moon system to illustrate why solar eclipses don’t occur every time there is a new moon. |
Brain and Neuropsychology
This section provides revision resources for AQA GCSE Psychology and the Brain and Neuropsychology chapter.
The revision notes cover the AQA exam board and unit 8182 (new specification).
First exams for this course are in 2019 onwards.
As part of your GCSE Psychology course, you need to know the following topics within this chapter:
Brain and Neuropsychology
The structure of the nervous system
The nervous system is an extremely complex network of nerve fibres and nerve cells that pass information around the body. As the nervous system is very complicated, with many different functions, it is practical to divide it into sections to better understand how it works. The first division is between the central nervous system (CNS) and the peripheral nervous system (PNS).
The central nervous system coordinates incoming information and makes decisions about movement or other activities. It consists of the brain and the spinal cord.
The peripheral nervous system (PNS) collects information from, and sends information to, different parts of the human body. The peripheral nervous system consists of two sections which are the somatic nervous system (SNS) and the autonomic nervous system (ANS).
The somatic nervous system is a network of nerve fibres running throughout the body, and sense receptors such as those in our skin, muscles and internal organs. The nerve fibres pass information to and from the CNS using sensory and motor neurons that are myelinated (covered with a myelin sheath which is a fatty wrapping), which helps the messages travel faster.
The autonomic nervous system (ANS) is a network of special nerves which also take information to and from the CNS but does so more slowly as the nerve fibres are not myelinated. The ANS uses information from our internal organs to coordinate our general physiological functioning while also responding directly to information such as stressful or emotional events.
The functions of the nervous system
The different divisions of the nervous system all have different functions. The central nervous system coordinates incoming sensory information and responds to it by sending appropriate instructions to other parts of the nervous system. Thinking, memory, decision-making and language are all part of the central nervous system as it also contains our store of knowledge, habits and other forms of learning allowing us to combine past experience with current situations to make relevant decisions.
The two sections of the peripheral nervous system are the somatic nervous system (SNS) and the autonomic nervous system (ANS). The somatic nervous system collects information from both the outside world and our internal organs and passes this on to the central nervous system. It also receives instructions from the central nervous system for big movements or small reactions to stimuli. In short, this is what allows us to feel and move.
The autonomic nervous system reacts more slowly because it is concerned with moods and feelings. It deals with many different emotions we feel, responds to threats and is also involved in major changes to the body such as during puberty or pregnancy.
The autonomic nervous system
The autonomic nervous system (ANS) is split into two divisions: the sympathetic and the parasympathetic divisions.
- The sympathetic division sets off arousal, which can be mild like a feeling of anxiety, or extreme such as the fight or flight response. This is activated when an individual feels “under threat”.
- The parasympathetic division allows the body to store up energy when we are not “under threat”.
Therefore the autonomic nervous system (ANS) is the part of the nervous system which helps us react quickly and strongly to emergency situations. Its other functions include breathing and digestion, and it is the main link between the brain and the endocrine system, a set of glands that release hormones into the blood stream.
Hormones change the state of the body. When adrenaline is released, it activates the heart, making it beat faster ready for action. The release of adrenaline is part of the fight or flight response.
What is the fight or flight response?
The fight or flight response has evolved to help us survive threatening situations. Imagine you are in a forest and you come across a fierce animal which is ready to attack you. Your body activates the fight or flight response so you can either fight the animal or run away (flight).
The fight or flight response allows you to call on energy and strength to deal with the situation, regardless of whether you choose to run away or stay and fight. It does this because there is no point holding back energy reserves if the encounter could leave you dead. The autonomic nervous system (ANS) therefore steps in when a threat is detected and sends messages to your body, making it ready for action; this is what we know as the fight or flight response.
The autonomic nervous system switches from parasympathetic activity to sympathetic activity during the fight or flight response. As a result, we breathe more deeply, our heart rate increases and the blood carries more oxygen. Our pupils dilate and we begin to sweat more to cool our muscles. The digestive system also changes so we metabolise sugar quickly, enabling instant energy. The blood also thickens in preparation for possible injury so it clots more easily. The brain also produces natural painkillers known as endorphins. This state is maintained by the endocrine system, which continues to release adrenaline to keep the body aroused.
The parasympathetic division is in control of the body under normal conditions, storing energy. However, if a threat is detected, the sympathetic division activates and the body begins to prepare for action with the fight or flight response. Once the threat has gone, the ANS switches back to having the parasympathetic division in control.
The James-Lange theory of emotion
William James was one of the first to investigate the fight or flight reaction and how the body reacted to stressful events with increased heart rate, deep breathing and sweating. James concluded that it is then that a person experiences fear. In short, he believed the bodily changes are interpreted as an emotion.
James believed that emotions were simply us perceiving physical changes in the body which the brain interprets and concludes which emotions are being felt. James described this in his own words as: “We do not weep because we feel sorry: we feel sorry because we weep”.
Evaluating James-Lange theory of emotion
- Not all researchers have been convinced that the theory is an accurate explanation of how we experience emotional arousal. This is especially the case because, for the theory to be correct, there would need to be separate and distinctive patterns of physiological arousal, with a different pattern for every emotion we experience. Researchers have found this is not the case, which undermines the James-Lange theory of emotion.
- Schachter and Singer suggested it is not only physiological changes that occur when we perceive a threatening situation, but there was also a cognitive component. The argument is that when we experience stimulation in the ANS, we also interpret the situation we are in. It is these two things that lead to the emotion we then experience. This idea is supported by research evidence which shows physiological change and cognitive interpretations both lead to emotional experiences.
- The James-Lange theory did promote research and recognised the importance of the ANS in emotional experiences.
Neuron Structure and Function
Before we look into neuron structures and their functions, we need to establish what neurons are. The brain works by electricity, believe it or not. The nervous system is made up of special cells which exchange chemicals to generate small electrical impulses and this is how information is passed around. These special cells are called neurons and there are three types in the nervous system, each with a different function:
Sensory neurons carry information from the sense organs to the central nervous system (CNS). They have a cell body, with two stems on either side. One end receives information from the sense organs, and the other passes this on. Each stem ends in small branches called dendrites, which spread out and connect with other cells.
Motor neurons stimulate muscles for movement. Motor neurons send signals from the brain to the muscles. They begin at the spinal cord; a long axon (or stem) leads to the muscle, where it divides into a spread-out set of dendrites called the motor end plate, which connects with the muscle.
Relay neurons have a cell body surrounded entirely by dendrites. Relay neurons pass messages to other neurons within the central nervous system (CNS) and make millions of connections between each other, sensory neurons and motor neurons.
Synaptic transmission is the process where neurons pass messages to other neurons or muscles by releasing special chemicals known as neurotransmitters into tiny gaps between dendrites.
These tiny gaps are called synapses. The chemical is released from swellings at the end of each dendrite, called synaptic knobs. These contain vesicles of neurotransmitter, and when an electrical impulse reaches them, the vesicles open and release the chemicals into the synapse. These chemicals are then picked up at receptor sites on the next neuron, which are sensitive to that particular neurotransmitter. This is the process of synaptic transmission.
When a neurotransmitter is picked up at a receptor site, it alters the neuron's chemistry slightly. Some synapses will make the receiving neuron more likely to generate an electrical impulse; this is called excitation. Other synapses will make the neuron less likely to fire; this is called inhibition. Once the neuron has fired (or not), the neurotransmitter in the receptor sites is released back into the synapse. The re-uptake process then happens so the neurotransmitter can be reused when the next impulse arrives.
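A deliberately simplified way to picture excitation and inhibition is as positive and negative contributions summed against a firing threshold. The sketch below is a toy model for intuition only, not how real neurons compute, and the numbers in it are arbitrary.

```python
# Toy model only: excitatory inputs raise, and inhibitory inputs lower, the
# neuron's potential; it generates an impulse only past a threshold.
def neuron_fires(excitatory, inhibitory, threshold=1.0):
    potential = sum(excitatory) - sum(inhibitory)
    return potential >= threshold

print(neuron_fires([0.6, 0.7], [0.1]))   # True: excitation outweighs inhibition
print(neuron_fires([0.6], [0.5, 0.4]))   # False: inhibition wins, no impulse
```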
Donald Hebb’s theory of learning and neuronal growth
Hebb suggested that if a neuron repeatedly excites another neuron, neuronal growth occurred and the synaptic knob becomes larger. This meant that when certain neurons act together frequently enough, they become established as a connection and form neural pathways. Hebb referred to these combinations of neurons as “cell assemblies”. He suggested each cell assembly formed a single processing unit.
Hebb suggested that whenever we learn, do or remember certain things, we are developing stronger cell assemblies, and the more we use them, the better we learn and hold on to the information in that neural pathway.
Although Donald Hebb’s theory was proposed in the 1950s, it has been the basis for research into how computers should be developed and modern neuropsychological research supports Hebb’s ideas of learning and neuronal growth.
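Hebb's idea is often paraphrased as "cells that fire together wire together", and it translates naturally into a tiny update rule. The following sketch is a minimal illustration of that rule, not a model from the AQA specification; the learning rate and starting weight are arbitrary.

```python
# Minimal sketch of Hebb's rule: a connection strengthens whenever the two
# neurons it joins are active at the same time. Values are arbitrary.
def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    if pre_active and post_active:
        weight += learning_rate   # repeated co-activation grows the "synaptic knob"
    return weight

w = 0.0
for _ in range(10):               # ten co-activations build a strong connection
    w = hebbian_update(w, pre_active=True, post_active=True)
print(round(w, 2))                # 1.0
```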
The structure and function of the brain
The brain consists of millions of relay neurons that are tightly packed together. The cerebrum is the top layer of the brain and is made up of two cerebral hemispheres, one on each side of the head, with each hemisphere divided into four areas known as lobes. The four lobes are described below:
The frontal lobe
The frontal lobe controls thought, memory, problem-solving, planning, cognitive and social behaviours and movements such as facial expressions. This area of the brain is most often affected by brain injuries from motor vehicle crashes.
The parietal lobe
The parietal lobe is responsible for integrating information from other areas to form the basis of complex behaviours which includes behaviours that involve the senses (such as vision, touch, body and spatial awareness). The parietal lobe is also responsible for language and helps us form words and thoughts.
The temporal lobe
The temporal lobe helps us understand and process what we hear. This lobe is also responsible for the comprehension and production of speech and language as well as how we learn and organise information. It is also responsible for emotions and emotional memory.
The occipital lobe
The occipital lobe is where visual information such as colour, shape and distance is processed. Injury or damage to the primary visual cortex can cause impairments in vision such as blindness or blind spots in a person's visual field.
The cerebellum is found at the back of the brain and is involved in balance and coordination. These activities are carried out automatically by the brain and are not under conscious control. As we become more experienced and practiced in our physical movements, the cerebellum controls these actions so they become smoother and more automatic.
Localisation of function in the brain
Research into the brain has helped us understand the brain better and although we do not fully understand every part of it, we do know that some brain functions are associated with particular areas on the folded outer layers of the cerebrum known as the cerebral cortex. These localised functions include movement, vision, hearing, language and touch.
The area responsible for controlling movement is called the motor area. It controls deliberate movement, using motor neurons to send signals to our muscles. Our fingers and thumbs have a larger share of the motor cortex than less active parts like the torso. The area behind this is the somatosensory area, which is responsible for touch. The more sensitive a part of the body is, the larger the amount of the somatosensory cortex it will involve.
The two cerebral hemispheres of the brain control opposite sides of the body. For example, the right hemisphere's sensory and motor strips deal with the left side of the body, while those on the left hemisphere deal with the right side of the body.
The visual cortex is in the occipital lobe, just above the cerebellum. This area was linked to vision during the First World War, when soldiers who suffered shrapnel damage to the back of the head became partially blind. The visual cortex receives information from both eyes through the optic nerves, while another area in the temporal lobe, the auditory cortex, does the same job for hearing. The auditory cortex receives information from the ears, so damage to this area of the brain can lead to hearing loss.
Language areas of the brain
One of the key features that distinguishes human beings from animals is how we use language. Humans have specialised areas on the left hemisphere of the brain which are dedicated to language processing. For example, Broca's area is at the base of the left frontal lobe and deals with speech production. If this area is damaged, people may still understand what is being said to them, but struggle to say things themselves. This condition is known as motor aphasia.
Wernicke's area is in the temporal lobe, and is concerned with understanding speech. When this area is damaged, people can speak perfectly well, but they have problems understanding what other people are saying to them. This condition is known as Wernicke's aphasia. The angular gyrus is located at the back of the parietal lobe and receives information about written language from the visual cortex, interpreting it as being similar to speech. When people experience injury to this area, they develop a condition known as acquired dyslexia, in which they experience difficulties in reading.
Penfield’s interpretive cortex study 1959
Aim: The study looked to investigate the workings of the conscious mind.
Study design: Clinical case studies investigated the brain functioning of a number of patients who were undergoing open brain surgery.
Method: Some of the brain surgery being conducted required patients to be conscious so the surgeon could be sure that any actions occurred in the right place. This is painless as the brain has no sense receptors. In this study, the surgeon probed different areas of the cortex using gentle electrical stimulation and asked the patients to report what they experienced.
Results: The study produced qualitative results.
- One patient had their temporal lobe stimulated and reported they could hear a piano playing, and could even identify the song being played. When another part of the brain was stimulated, they reported a clear memory. As a control, the surgeon told the patient he would stimulate the area again but did not activate the electrode; the patient reported not experiencing anything.
- A female patient had her temporal lobe stimulated and reported hearing an orchestra playing a particular tune. When the electrode was removed she reported that the music had stopped; she could hear it again once the electrode was activated again, and this was confirmed when the procedure was repeated several times.
- A young boy reported hearing his mother telling his brother that he had got his coat on backwards when one area of the temporal lobe was stimulated. When this area was stimulated again he reported hearing the same conversation, even after some period of time had passed.
- Other research by Penfield found that stimulating the visual cortex resulted in subjects reporting seeing images such as balloons floating into the sky. Stimulation of the motor and sensory areas produced movements, or sensations of being touched, for the patients. Penfield concluded that the temporal lobe was therefore active in the interpretation of meaning.
Conclusion: Penfield concluded that there was evidence for localisation of function; the idea that some psychological functions are controlled from particular parts of the brain within the cerebral cortex.
Evaluating Penfield’s interpretive cortex study 1959
- The study demonstrated how certain areas of the cerebral cortex were involved in particular functions of the brain through studying living brains rather than post-mortems.
- The study also demonstrated how complex memories, such as conversations, are stored in the brain.
- A limitation of Penfield's study is that the patients were epileptic, and therefore they may not be typical of the general population.
- The findings from this study were different for each individual so this makes it hard to generalise the findings.
- Participants may have found it difficult to articulate their experiences into words so this makes it difficult to know exactly what their experiences were.
The last section of your GCSE psychology brain and neuropsychology topic covers cognitive neuroscience. So what is this? Cognitive neuroscience is all about understanding the relationship between the brain, our cognitive processes and our behaviour. Our behaviour is not simply about reacting and responding; we also think, make decisions, use our imagination, plan and engage. All these brain activities affect our behaviour, and cognitive neuroscience investigates these relationships.
Scanning techniques to identify brain functioning: CT, PET and fMRI scans.
Brain scans allow us to look at brain function within living people. There are various types of brain scans however we will focus on three which are CT scans, PET scans and fMRI scans.
CT scans map the brain by taking a number of X-ray "slices" of the brain and combining them into a complete image. Because some types of tissue are denser than others, they show up differently in the X-rays. Bone is the most dense, and nerve cell bodies (grey matter) are less dense than myelinated nerve fibres (white matter), so these appear different too. Tumours and blood clots also show up differently, which makes CT scans useful for identifying them and gives them a valuable medical purpose.
PET scans (positron emission tomography) work by monitoring a small amount of radioactive chemical introduced into the blood supply. Active brain cells use more blood than passive brain cells, which enables the scanner to see which parts of the brain are active and in use. PET scans can highlight the brain pathways in use, as well as specific areas of activity or blockages in blood flow around the brain. Because of the slight risk from radioactivity, PET scans are used less often, but they serve useful medical purposes when required.
Functional magnetic resonance imaging (fMRI) is a modern and accurate tool used by researchers. fMRI scanners work because the water molecules in brain cells have tiny magnetic fields, which can be influenced by the strong magnetic field of the scanner and behave slightly differently when a cell is active rather than quiet. A complete fMRI scan of the brain takes only 2 seconds, enabling researchers to explore brain activity and cognition. For example, if a person reads out a word or is asked to think of a specific event, the fMRI scan shows which parts of the brain are active as they do this. fMRI scanners are a popular method for researchers because of their accuracy and because they carry fewer health risks, unlike the use of X-rays or radioactive substances.
Tulving’s “Gold” Memory Study 1989
Aim: To explore the connection between different types of memory and brain activity.
Study design: Case studies were used.
Method: Six volunteers were injected with a gold radioactive isotope which spread within the bloodstream and up to the brain. The gold isotope had a half-life of only 30 seconds and therefore presented little risk to participants. The distribution of the isotope was measured using a PET scanning technique called regional cerebral blood flow, which measures blood flowing in different areas of the brain.
Tulving’s study compared episodic memory, in this case, the memory of something participants had experienced personally like a trip or holiday, with semantic memory, such as knowledge they had learned through reading a book. Researchers also compared whether the memory was recent or established some time ago. The volunteers all chose their own topics.
Each volunteer lay on a couch, closed their eyes and began thinking about their chosen topic. After 60 seconds, the gold isotope was injected and, 7-8 seconds later, an rCBF reading was taken. The reading lasted 2.4 seconds and consisted of 12 rapid scans of 0.2 seconds each.
Each participant experienced eight trials in total, with a two-minute rest in between.
Results: Three volunteers were dropped from the analysis because their results were inconsistent. The remaining three, however, showed a clear difference in blood flow patterns depending on whether they were remembering episodic or semantic information. This difference was the same regardless of whether the memory was recent or from some time ago. Episodic recollection generally produced more activation of the frontal and temporal lobes, while semantic recollection produced more activity in the parietal and occipital lobes of the cerebral cortex.
Conclusion: Tulving concluded that semantic and episodic memories produce activity in different parts of the brain.
Evaluating Tulving’s Gold Memory Study 1989
- A strength of the study is that it was one of the first to show how we can investigate cognitive processes in a living brain.
- The study demonstrated how different areas of the brain activity are related to cognitive processes.
- Tulving's study used ethical procedures, with the participants fully informed before they gave their consent.
- Only three participants showed the effects and as the sample is incredibly small, the results may not generalise to everyone.
- There is no way to know for certain what people are actually thinking about at the exact moment of the scan. Therefore the study may lack internal validity as researchers may not actually be measuring what they want to measure.
- As the volunteers were fully informed members of the study, they may have tried very hard to make the procedure work or may have shown demand characteristics. For example, they may have said they were thinking about the memories to please the researchers, even if they were not at the time of the scans.
Visit the Metropolitan Museum of Art (MET) website and search their collection of ancient Egyptian art. The link provided will take you directly to their collection of ancient Egyptian art. Scroll down and click through the various pages to browse the collection. When you click on an artifact it will take you to a page that contains additional information about that artifact.
- Find and research one artifact that contains iconographic symbolism. Recall from Chapter 5 that iconography refers to symbolic representations. This is an excerpt from Chapter 5 defining iconography: at the simplest of levels, iconography is the containment of deeper meanings in simple representations. It makes use of symbolism to generate narrative, which in turn develops a work's meaning.
- In a paragraph (at least 5 or 6 sentences), briefly describe what the artifact is and identify the iconographic imagery. What is being represented visually and what does it symbolize iconographically? (e.g., a depiction of a dove meant to symbolize peace)
- Include a link or a picture of the artifact. |
●Design I: Deepening of textile printing / reserve printing techniques and study on color harmony
Outline・The meaning of a textile changes drastically when colors are added to effective repeating patterns (repeats).
Students perform printing of repeats (four-sided / half-drop), working with material, pattern and ground, and coloring by reserve printing (pigments and dyes) on cotton fabrics with several different textures. In textile design production, color and pattern create rhythms, while indicating the area of use at the same time. This also leads to an understanding of the history of patterns as a means of avoiding blank areas.
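For intuition, the difference between a straight repeat and a half-drop repeat can be sketched in a few lines of code. The snippet below is only an illustration; the motif, sizes and names are invented for the demo.

```python
# Invented motif and sizes, for illustration only.
MOTIF = ["*.",
         ".."]                    # a 2x2 motif: one mark, three blanks

def tile(motif, cols, rows, half_drop=False):
    """Tile a motif; in a half-drop, alternate columns shift down by half."""
    h = len(motif)
    out = []
    for r in range(rows * h):
        line = ""
        for c in range(cols):
            shift = h // 2 if (half_drop and c % 2) else 0
            line += motif[(r + shift) % h]
        out.append(line)
    return "\n".join(out)

print(tile(MOTIF, cols=4, rows=2))                  # straight repeat
print()
print(tile(MOTIF, cols=4, rows=2, half_drop=True))  # half-drop repeat
```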
After a workshop on tactile perception, students perform silkscreen printing, study effects of frottage and then make yukata, while conducting studies on hue harmony throughout the entire process.
●Design II: Tactile perception and weaving <OFF LOOM・FELT>
Students experiment with expressions of textile art through fabric art produced by the act of weaving. Possibilities of textile are enhanced by creating a system of language, form, and body. Students master OFF LOOM, hand spinning, felt, batik, and Yuzen.
Students start from drawings, and then learn basics of OFF LOOM using wood frame, hand spinning, felt, batik, and Yuzen.
●Design III: Structure of weaving and color study
It is necessary to understand three elements, namely organization, color and material, when designing textiles. Students learn and understand three basic weave organizations (plain weave, shamon ori (twill), satin weave) and multilayered organization, and then develop original designs. As a study of color effect, students first make a stripe composition (separation effect), and then explore images of checker patterns and stripes through two-dimensional composition and collage. Weft-face (tapestry weave) expression is studied by making a miniature.
Students learn and understand structure of weaving and color study by weaving wool using tabletop weaving machine. Tapestry technique is mastered by using OFF LOOM, and students conduct research for assignments in the third year.
Tapestry, decorating the wall surface, is known as the art form most resembling painting, expressed through weaving (tapestry weave). Masterpieces of tapestry include the series “Tapisserie de l'Apocalypse” and “La Dame à la Licorne” at the Musée de Cluny. The role of tapestry in today’s living space is much more than decoration, and students are asked to explore its significance further. Based on the theme “Dialogue between space and wall”, expression through weaving and dyeing, and its relationship with space, are explored.
Students learn and consider the origin and transition of tapestry and its spiritual and physical effects on space from lectures. Studies on textile utilizing shape-memory property and open screen are also conducted.
●Textile art research
Students study Wall Hanging from wide perspectives and explore its development in contemporary art. Students learn and understand the historical progress and its transition, and propose artwork using fiber material.
Transformation of tapestry is a quest for significance of art, design and expressions through the material and technique. Students conduct research on textile art in Japan, while discussing diversity of expression in contemporary art.
●Interior fabric “At the Window”
Relationship of space and textile is explored by studying color, material and weaving technique of curtain. Students study and understand primary purposes of curtain such as keeping warmth, sound insulation and shading, and propose a textile product while considering its making method as well as its significance as art.
Students conduct production and research of curtain or some textile object covering the window, while considering the meaning of function and decoration from a wide perspective. If one assumes curtain as a form of textile performing art, it is possible to propose new spatial expression.
●Pattern Design / from reproduction to original design
When planning textile design on paper, it is important to understand and acknowledge that the final goal is expression by fabric. The expression requires acquisition of expression technique as well as consideration for material, technique and function.
Students then reproduce traditional Japanese patterns, focusing on shape, layout and color, which eventually leads to the development of original designs. Students work as a group and produce original fabric inspired by rhythms they experienced at a live performance of percussion instruments.
Students gain knowledge on materials, and foster abilities to “draw” as well as creating color harmony. Significance of historical patterns can be acknowledged through reproduction process. In addition, students take photos of surrounding textures and then develop a two-dimensional work while demonstrating expression techniques.
●Textile as membrane
Pursuit of possibilities in textile is diversified and far beyond the scope of conventional dyeing and weaving. It is also necessary to develop new ideas incorporating possibilities in other fields. In order to establish the existence of fabric, we should not depend only on subdivided techniques, but are expected to pursue design with possibilities generated from new integration.
(Excerpt from Whole Cloth by Constantine and Reuter.) It is important to evaluate textile design from a bird’s-eye perspective. Students explore the expression of art and design based on the theme of “textile - membrane”.
Significance of material, thread, and textile, as well as color and pattern are assessed and formative artwork in textile is proposed using specific methods.
●Textile creates people, society, and environment
New creation through integration is an important challenge in the field of textile. Collaboration of art and design in concept work and design development is integration of expressions, which makes us aware that design is for art, and art for design. Interpretation of membrane from the previous assignment is applied to a proposal of relationship between body, environment, and object using various expression methods. Students produce and propose textile work based on the themes such as “textile transmitting messages to the society”, “textile with ideas”, and formative art made of thread.
Students make proposal based on possibilities in the aforementioned fields, which leads to specific ideas regarding formative art, material, environment, and production.
While anticipating graduation work, students are encouraged to experiment with possibilities of new expressions during this period. Each student is required to articulate his/her goal and exert advanced expression technique, to enrich his/her research theme. Students are free to present in any styles, such as actual work, prototypes, or other formats. Research theme should address appealing points and problems of textile as craft, and others to prepare for creation of his/her own expression.
Students conduct specific research and production as a preliminary study for graduation work. |
Part 1 in the series Learning In the New Economy of Information
In the past 10 years, perhaps nothing has changed more than the relationship between teachers and the information being distributed in their classrooms. Historically, the role of teacher has always been that of gatekeeper and distributor of the course canon. Information was dispensed. Students were encouraged to arrive at their own conclusions and interpret information, but they were limited by the fact that they were operating in a scarce economy of information (teacher, textbook and a limited number of outside sources). For the most part, the teacher was the sole provider of content, and though many teachers worked to provide quality materials and move away from a lecture-based curriculum, even these provided resources were no less teacher directed.
With the proliferation of mobile technology, our ability to access information has increased, dramatically changing the practice of teaching. Comparing the two scenarios, the circumstances couldn't be more different.
Teaching in an Environment of Scarcity
When teaching in an environment of informational scarcity, lessons that deeply explored a subject were limited by the resources that the school library had available, as well as students' ability to access them. (Remember encyclopedias labeled “Reference: Do not remove from library”?) Even the most thorough research might yield only a few resources. To put it simply, your Lego castle could be no larger than the maximum number of bricks that you possessed. There just weren’t that many bricks available for building. |
The following detailed definition was developed specifically for forests areas, in the High Conservation Value Forest Toolkit (the "Global Toolkit"). The definition can readily be adapted to other types of habitat.
This value is intended to include areas with extraordinary concentrations of species, including threatened or endangered species, endemics, unusual assemblages of ecological or taxonomic groups and extraordinary seasonal concentrations.
Any forest that contains the species identified as HCVs, or which contains habitat critical to the continued survival of these species, will be an HCVF. This will include forests with many species that are threatened or endangered, or many endemic species (e.g. “Biodiversity hotspots”). Exceptionally, it may even be that a single species is considered important enough to be an HCV on its own.
However, there will be many forests that contain rare or endemic species that are not HCVFs because there is not a globally, regionally or nationally significant concentration. These forests should still be managed appropriately, but they are not HCVFs.
Since there is a range of ways in which biodiversity values can be identified, this value has been sub-divided into four elements:
- HCV1.1 Protected areas: Protected areas perform many functions, including conserving biodiversity. Protected area networks are a cornerstone of the biodiversity conservation policies of most governments and many NGOs and the importance of them is recognised in the Convention on Biological Diversity (CBD). Although the processes of selecting areas for protection have varied greatly in different countries and at different times, many are nonetheless vital for conserving regional and global biodiversity values.
- HCV1.2 Threatened and endangered species: One of the most important aspects of biodiversity value is the presence of threatened or endangered species. Forests that contain populations of threatened or endangered species are clearly more important for maintaining biodiversity values than those that do not, simply because these species are more vulnerable to continued habitat loss, hunting, disease etc.
- HCV1.3 Endemic species: Endemic species are ones that are confined to a particular geographic area. When this area is restricted, a species has particular importance for conservation. This is because a restricted range increases the vulnerability of a species to further loss of habitat, and at the same time the presence of concentrations of endemic species is proof of extraordinary evolutionary processes.
- HCV1.4 Critical temporal use: Many species use a variety of habitats at different times or at different stages in their life-history. These may be geographically distinct or may be different ecosystems or habitats within the same region. The use may be seasonal or the habitat may be used only in extreme years, when, nevertheless, it is critical to the survival of the population. This component includes critical breeding sites, migration sites, migration routes or corridors (latitudinal as well as altitudinal) or forests that contain globally important seasonal concentrations of species. In temperate and boreal regions, these critical concentrations will often occur seasonally (e.g., winter feeding grounds or summer breeding sites), whereas in the tropics, the time of greatest use may depend more on the particular ecology of the species concerned (e.g., riverine forests within tropical dry forests may be seasonally critical habitat for many vertebrate species). This element is included to ensure the maintenance of important concentrations of species that use the forest only occasionally. |
HOPKINS GLOSSARY OF WEATHER TERMS D
deformation thermometer
- A thermal sensor (thermometer) with a sensing element that bends or deforms by an amount that is a function of temperature. An example of a deformation-type thermometer is the bimetallic thermometer used in a thermograph.
dewpoint temperature (or dewpoint)
- Temperature to which an air parcel must be cooled at constant pressure and constant water vapor content in order for saturation of that air parcel with water vapor to occur with respect to a liquid water surface. Dewpoint can be measured by a dewpoint hygrometer or indirectly with a psychrometer. The units for dewpoint are the same as for air temperature. Compare with
dewpoint hygrometer
- An instrument used for determining the dewpoint; a type of hygrometer operating on the principle that air in contact with a polished metallic surface is cooled to the point where a thin film of dew is observed to form on the metal surface, whereupon the temperature of that surface is measured. This observed temperature is essentially the same as the theoretical dewpoint.
Doppler radar
- A radar unit that, in the velocity mode, determines the radial movement of airborne hydrometeor or aerosol targets either toward or away from the unit; based upon the Doppler effect or shift, where a slight change in the frequency (or phase) between the broadcast and reflected microwave radiation signal occurs because the target is moving toward or away from the radar antenna. Precipitation location and intensity are also determined by Doppler radar when operated in a reflectivity mode.
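(Editor's aside, not text from the glossary: the velocity-mode relationship is simple to state. The radial velocity equals the measured Doppler shift times the radar wavelength, divided by two to account for the out-and-back signal path. A minimal Python sketch:)

def radial_velocity(doppler_shift_hz, wavelength_m):
    # v = f_d * wavelength / 2; the 2 reflects the two-way path of the signal
    return doppler_shift_hz * wavelength_m / 2.0

print(radial_velocity(200.0, 0.10))  # 10.0 m/s for a 10 cm (S-band) radar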
dropwindsonde
- A modified radiosonde package that is dropped by parachute from an aircraft to obtain temperature, pressure, and humidity profiles of the atmosphere below flight level; often used in aircraft weather reconnaissance of hurricanes.
dry-bulb temperature
- The temperature indicated by the dry-bulb thermometer of a psychrometer; the dry-bulb temperature is identical with the ambient air temperature. Contrast with wet-bulb temperature.
dry-bulb thermometer
- The thermal sensor in a psychrometer that is not moistened, but kept dry. The dry-bulb thermometer indicates or records the dry-bulb temperature. Contrast with wet-bulb thermometer.
Last update 6 June 1996
Edward J. Hopkins, Ph.D.
Department of Atmospheric and Oceanic Sciences
University of Wisconsin-Madison |
What are the Zones of Regulation?
The Zones of Regulation is an internationally renowned intervention, which helps children to manage difficult emotions, known as ‘self-regulation’.
Self-regulation can go by many names, such as ‘self-control’, ‘impulse management’ and ‘self-management’. Self-regulation is best described as achieving the best state of alertness for a given situation. For example, when your child takes part in a sports game, they need a higher state of alertness than when, for example, they are working in a library.
From time to time, all of us (including adults) find it hard to manage strong feelings such as worry, anger, restlessness, fear or tiredness, and this stops us from getting on with our day effectively. Children who feel these emotions often find it hard to learn and concentrate in school. The Zones of Regulation aims to teach children strategies to help them cope with these feelings so they can get back to feeling calm and ready to learn. These coping strategies are called ‘self-regulation’.
Why are we using them?
At Glebe Primary School, we are launching the Zones of Regulation throughout the whole school. We want to teach all of our children good coping and regulation strategies so they can help themselves when they experience anxiety and stress. In the classroom, children sometimes panic when faced with a tricky learning problem or challenge. Teaching them how to cope with these feelings might make them better at tackling learning challenges and build resilience so they don’t give up so easily when faced with difficulty.
We want children at Glebe to grow into successful teenagers then adults. Teaching the children at a young age about managing their feelings will support them in later life so that they don’t turn to negative coping strategies which affect their mental and physical wellbeing.
What are the Different Zones?
Blue Zone: low level of arousal; not ready to learn; feels sad, sick, tired, bored, moving slowly.
Green Zone: calm state of alertness; optimal level to learn; feels happy, calm, feeling okay, focused.
Yellow Zone: heightened state of alertness; elevated emotions; has some control; feels frustrated, worried, silly/wiggly, excited, loss of some control.
Red Zone: heightened state of alertness and intense emotions; not an optimal level for learning; out of control; feels mad/angry, terrified, yelling/hitting, elated, out of control.
We will teach the children that everyone experiences all of the Zones. The Red and Yellow zones are not ‘bad’ or ‘naughty’ Zones. All of the Zones are expected at one time or another. We will show them that the Blue Zone, for example, is helpful when you are trying to fall asleep.
However, we will be helping the children to identify strategies that might help them regulate their anger, their worries or their sadness - for example pressing their hands against the wall, getting a drink, walking away, using a fidget spinner. Older children may be more able to identify strategies that have helped them before, whereas younger children may need adults to suggest techniques for them and to find out which ones are successful and which are not.
How can you help your child use The Zones of Regulation at home?
- Identify your own feelings using Zones language in front of your child (e.g.: “I’m frustrated. I think I am in the Yellow Zone.”)
- Talk about what tool you will use to be in the appropriate Zone (e.g.: “I need to take four deep breaths to help get me back to the Green Zone.”)
- At times, wonder which Zone your child is in. Or, discuss which Zone a character in a film / book might be in. (e.g.: “You look sleepy. Are you in the Blue Zone?”)
- Engaging your child in discussion about the Zones when they are in the Red Zone is unlikely to be effective. You need to discuss the different Zones and the tools they can use when they are more regulated / calm.
- Teach your child which tools they can use. (e.g.: “It’s time for bed. Let’s read a book together in the comfy chair to get you in the Blue Zone.”)
- Regular Check-ins. “How are you feeling now?” and “How can you get back to Green?”
- Modelling: It is important to remember to show the children how you use tools to get back to the Green Zone. You might say “I am going to make myself a cup of tea and do some breathing exercises because I am in the blue zone”, and afterwards share how that helped you.
- Share how their behaviour is affecting your Zone. For example, if they are in the Green Zone, you could comment that their behaviour is also helping you feel happy / go into the Green Zone.
- Put up and reference the Zones visuals and tools in your home.
- Praise and encourage your child when they share which Zone they are in.
Tips for practising the Zones of Regulation
- Know yourself and how you react in difficult situations before dealing with your child’s behaviours.
- Know your child’s sensory threshold. We all process sensory information differently and it impacts our reactivity to situations.
- Know your child’s triggers.
- Be consistent in managing your child’s behaviour and use the same language you use at home.
- Empathise with your child and validate what they are feeling.
- Have clear boundaries/routines and always follow through.
- Do not deal with an angry, upset child when you are not yet calm yourself.
- Discuss strategies for the next time when you are in a similar situation.
- Remember to ask your child how their choices made you feel (empathy).
- Praise your child for using strategies. Encourage your child to take a sensory break to help regulate their bodies.
Can my child be in more than one zone at the same time?
Yes. Your child may feel tired (blue zone) because they did not get enough sleep, and anxious (yellow zone) because they are worried about an activity at school. Listing more than one Zone reflects a good sense of personal feelings and alertness levels.
Should children be punished for being in the RED Zone?
It’s best for children to experience the natural consequences of being in the RED zone. If a child’s actions/choices hurt someone or destroy property, they need to repair the relationship and take responsibility for the mess they created. Once the child has calmed down, use the experience as a learning opportunity to process what the child would do differently next time.
Can you look like one Zone on the outside and feel like you are in another Zone on the inside?
Yes. Many of us “disguise” our Zone to match social expectations. We use the expression “put on a happy face” or mask the emotion so other people will have good thoughts about us. Parents often say that their children “lose it” and go into the Red Zone as soon as they get home. This is because children are increasingly aware of their peers and of classroom expectations. They make every effort to keep it together at school to stay in the Green Zone. Home is where they feel safe to let it all out. |
Caroline J. Campbell
Associate Features Editor
It is Groundhog Day. Flashes of the camera; the plump mammal emerges—or, rather, is pulled from its home and hibernation. Media reporters and spectators gather around and wait for the verdict: Will it be spring or six more weeks of winter?
On Feb. 2 every year, Punxsutawney Phil predicts the change of seasons. If it sees its shadow, there will be six more weeks of winter. But the question is: who entrusted such an important responsibility to a groundhog—which is also known as a whistlepig?
According to The History Channel, Groundhog Day originates from the Christian tradition of Candlemas, where clergy would distribute candles for winter. The candles represented how long winter would be. The Germans put a spin on this tradition and used a hedgehog instead to make this determination. When the German settlers came to America, they replaced the hedgehog with the groundhog, because groundhogs were easily found in Pennsylvania.
Then, in 1887, a newspaper editor declared Phil, the groundhog of Punxsutawney, the official weather-predicting groundhog. The name stuck and has been carried through the groundhog lineage. The tradition has since spread to other states and countries: there is Birmingham Bill, Staten Island Chuck, and Shubenacadie Sam, the Canadian groundhog.
The annual emergence of Phil is surrounded by three days of festivities including breakfast with Phil, the crowning of Little Mr. & Mrs. Groundhog, the Groundhog Ball, and various education opportunities to learn about groundhogs and the weather. |
The 47th Lunar and Planetary Science Conference (LPSC), a large international event held every spring in Houston, Texas, once again showcased a wide variety of the subjects that make up modern planetary science. My own contribution followed up on some previous work, locating and identifying deposits of shock-melted material produced by the formation of giant multi-ring impact basins. These features were created early in lunar history and because they are found globally, their relative and absolute ages can inform us about the cratering history of the Moon.
When an impact crater forms, a small zone near the point of impact is vaporized and melted by the intense shock pressures created by the collision. This melt (called impact melt) is an important product for two reasons: 1) its chemical composition represents an average of the target rocks, which allows us to deduce the makeup of the pre-impact crust; and 2) it is the material whose radiogenic isotopes are “re-set”, which allows us to determine exactly when an impact occurred. Our understanding of the time scale of lunar history is determined by the radiometric dating of rocks, which explains why shock melt from large impacts is a prime target for study.
The problem with ancient basins (a basin is any impact crater larger than 300 kilometers in diameter) on the Moon is that because they are so old, they have been heavily modified—partially buried by other crater and basin deposits and filled with volcanic lava flows. Impact melt is concentrated in the center of a crater. Since most basins are subsequently filled with mare lava, few exposed melt sheets survive for us to sample.
For the last few years, my students and I have been working on mapping the occurrence of melt sheet features that have survived. Melt may survive burial in one of two ways. First, some basins are not completely filled with lava, retaining their original configuration for more than 3.8 billion years. The classic example of this type of basin is the spectacular Orientale basin, almost 1,000 km across on the western limb of the Moon. (Side note: Why is a basin on the western limb called “Mare Orientale” (Eastern Sea)? Because in the old days of telescopic study, the lunar east and west convention referred to Earth, not lunar, coordinates. The “Orientale” limb of the Moon was on the left (east) when facing the Moon, with north at top. Hence, the Eastern Sea. Yes, crazy—but then, no one in the early 20th Century expected people would be flying to the Moon). The Orientale basin is nearly perfectly preserved, with just a small amount of mare lava flooding the innermost basin. Large regions of the interior are covered by wrinkled, cracked and fissured deposits—the remnant of the basin impact melt sheet (see photo below). Thus, Orientale is a basin whose melt can be directly sampled.
As the youngest lunar basin, the absolute age of Orientale is already constrained by the ages of the oldest, post-basin units on the Moon (old mare lavas, dated at 3.8 billion years, abbreviated “Ga”). What we really need to know are the ages of older and middle-aged basins. Although all lunar basins have been dated on a relative scale (i.e., the Nectaris basin is older than the Imbrium basin), only Imbrium (3.85 Ga) and (possibly) Serenitatis (3.88 Ga) have absolute dates, and both of those are uncertain. So we are left with the problem of having no widespread exposures of the impact melt of the basins we most want to date.
However, once again, the Moon has obliged us. In all basins, small amounts of material ejected from the central structure appear to be impact melt. Although we cannot be certain of its origin until someone lands there, impact melt has some distinctive morphological properties—as mentioned above, it typically has a cracked surface texture, visual evidence of past liquid flow, and possibly a contrasting composition to its surrounding terrain. I have mapped and found evidence for ejected melt from at least three basins (Orientale, Imbrium and possibly Nectaris). At the conference, I reported on some newly recognized deposits of impact melt for the Crisium basin (see the map below) on the eastern limb of the Moon (i.e., opposite Orientale!).
The Crisium basin is a large (~1,000 km diameter) impact feature centered at 17°N, 59°E. It has a broadly elliptical outline and a polygonal rim shape. It has been proposed that the basin appears elongated because of a pre-existing smaller basin in the east, but a more plausible explanation is that Crisium is the result of an oblique impact by a projectile traveling from west to east that sheared apart, such that its upper half impacted separately downrange of the main basin. Such an impact would create an elliptical feature, elongated in the east-west direction. The inner basin of Crisium is nearly filled with lava, but during mapping, we noticed some small “islands” of pre-mare rock that had not been covered (left side of the photo, bright red (Nm) in the geological map). Study of these islands (called “kipukas” in Hawaii) showed that they were of highlands composition and had a cracked and fissured surface. This relation is nearly identical to that seen at Orientale. We suggested that these features were parts of the original basin floor—the Crisium basin impact melt sheet.
This new finding is significant in two ways. If we could visit these sites on the Moon, we could obtain direct samples of impact melt from the Crisium basin, a large feature that sampled the entire lunar crustal column in this region. Study of its chemical and mineral composition could help us better understand the complex igneous history of the Moon’s crust. In addition, the Crisium basin is of intermediate age (older than Imbrium but younger than Nectaris) and determining its absolute age could help us understand whether the Moon experienced an impact “cataclysm” or massive increase in the rate of impacts around 4 Ga. Answering this question is important not only for deciphering lunar history, but for the interpretation of the cratering histories of all the terrestrial planets, including the Earth.
I have written previously on some of the problems attendant with using samples to reconstruct the history and evolution of the Moon. It’s easy to pick up rocks but doing science requires that we collect samples of known geological context. Unless we fully understand what the samples represent, we cannot make broad generalizations that allow us to reconstruct planetary geological history. The current focus of much of the lunar sample community is on sampling the melt sheet of the enormous South Pole-Aitken basin, the largest (2,600-km diameter) basin on the Moon. Although we know it is the oldest basin in relative terms (all other basins lie on top of it), we do not know its absolute age. It could be as young as 3.9 Ga, in which case a cataclysm is required, or it could be as old as 4.3 Ga, which lessens the possibility of a cataclysm.
Because this feature is so old, the geological context of samples from it is unclear. It is better to sample other, less degraded basins in an attempt to decipher the early cratering history of the Moon, largely because their contexts will be clear, as the original units are better preserved. Finding exposed impact melt at Crisium means that it is worth searching for additional melt deposits from other lunar basins. By finding these exposures, we can plan for future missions to either return samples to Earth or date the rocks onsite with automated equipment. By compiling a set of ages from many basins distributed widely over the Moon, we will understand more about the earliest evolution of the planets. |
The Python getattr Function
December 7, 2005 | Fredrik Lundh
Python’s getattr function is used to fetch an attribute from an object, using a string object instead of an identifier to identify the attribute. In other words, the following two statements are equivalent:
value = obj.attribute

value = getattr(obj, "attribute")
If the attribute exists, the corresponding value is returned. If the attribute does not exist, you get an AttributeError exception instead.
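For example (a minimal sketch; the Config class here is invented for illustration):

class Config:
    retries = 3

cfg = Config()

print(cfg.retries)              # 3
print(getattr(cfg, "retries"))  # 3; the same lookup, done by name
getattr(cfg, "timeout")         # raises AttributeError: no such attribute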
The getattr function can be used on any object that supports dotted notation (by implementing the __getattr__ method). This includes class objects, modules, and even function objects.
path = getattr(sys, "path")
doc = getattr(len, "__doc__")
The getattr function uses the same lookup rules as ordinary attribute access, and you can use it both with ordinary attributes and methods:
result = obj.method(args)

func = getattr(obj, "method")
result = func(args)
or, in one line:
result = getattr(obj, "method")(args)
Calling both getattr and the method on the same line can make it hard to handle exceptions properly. To avoid confusing AttributeError exceptions raised by getattr with similar exceptions raised inside the method, you can use the following pattern:
try:
    func = getattr(obj, "method")
except AttributeError:
    ... # deal with missing method ...
else:
    result = func(args)
The function takes an optional default value, which is used if the attribute doesn’t exist. The following example only calls the method if it exists:
func = getattr(obj, "method", None)
if func:
    func(args)
Here’s a variation, which checks that the attribute is indeed a callable object before calling it.
func = getattr(obj, "method", None)
if callable(func):
    func(args)
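A common use of this pattern is name-based dispatch, where a command string is mapped to a method at runtime. The following sketch is purely illustrative (the Shell class and the do_ prefix are invented for the example, echoing the convention used by the standard cmd module):

class Shell:
    def do_help(self):
        print("available commands: help, quit")
    def do_quit(self):
        raise SystemExit

def dispatch(obj, command):
    # look up "do_<command>" on obj; fall back gracefully if it is missing
    func = getattr(obj, "do_" + command, None)
    if callable(func):
        func()
    else:
        print("unknown command:", command)

dispatch(Shell(), "help")   # prints the available commands
dispatch(Shell(), "flail")  # prints: unknown command: flail |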
4 Arctic species that depend on ice
Canada’s North is as unforgiving as it is stunning. This is never more apparent on film than in the “Frozen Worlds” episode of Netflix’s new docuseries, Our Planet.
The Canadian Arctic is home to unique — and universally adored — species, all of which depend on sea ice for survival; it’s where they hunt, socialize, rest and rear their young. But this critical habitat is in peril, and if it disappears, they will, too.
Sea ice is in steady decline — and it has been for 30 years now. That’s because the Arctic is warming at twice the rate of the rest of the Earth. Soaring temperatures due to climate change have caused sea ice to thin, shrink and melt earlier in the year. When sea ice recedes, wildlife habitats do, too.
Slowing climate change will ensure the long-term survival of these ice-dependent species.
When they’re not swimming in open sea, deep-diving narwhals get air from small cracks in the ice, which also keep them safe from predators. When cracks widen and ice disappears, narwhals become vulnerable to predation.
A warmer, ice-free Arctic is attractive to curious marine species. Killer whales are showing up in the region in increasing numbers, which means trouble for the narwhal. This population of killer whales likes to gorge on narwhal, so when the apex predator is around, narwhals tend to cower close to shorelines where food is scarce. Since this is a new phenomenon, scientists are still learning how the growing number of predators is affecting the overall health and behaviour of narwhals.
Ice has many uses for polar bears in Canada, but they primarily use it to hunt. When sea ice is plentiful in the winter, it gives them access to seals. Once ice melts and polar bears retreat to land, bears must use their energy reserves to survive until sea ice reforms. When ice forms late or is too thin for seals to make their dens, food becomes scarce and polar bears run the risk of starvation during summer months. And while many won’t necessarily starve, a lack of food could hinder a female’s ability to bring a cub to term.
Studies show that polar bears at the southern point of their range are giving birth less frequently and fewer cubs are surviving. In these areas it was quite common to see a mother with three cubs. Today, researchers are observing more and more mothers with a single cub.
The Pacific walruses showcased in the Netflix series prefer to spend their time on ice in groups known as haul-outs. It’s here they socialize, rest and reproduce. This ice-based habitat also provides access to food.
When sea ice shrinks or melts entirely, walruses are forced to haul out on land, where food availability is limited. Instead of gathering in smaller groups, land-based haul-outs are much larger, which can result in a stampede, as a shocking scene of Our Planet shows. Meanwhile, Atlantic walruses are found at Canadian haul-outs on land in late summer and early fall, when sea ice is at its minimum. As the Arctic warms, the push for industrial development is on the rise, increasing the risk of disturbances to this critical habitat.
Arctic caribou are known for their epic long-distance migrations so it’s no surprise ice serves as a migratory habitat for some caribou in the North.
Peary caribou, found in the high Arctic Archipelago and Ellesmere Island, need to travel on ice in search of the limited amount of forage between high Arctic islands. The Dolphin and Union herd cross between Victoria Island where they give birth and rear their young, and their wintering habitat on the mainland of Nunavut and the Northwest Territories. The absence of sea ice would be disastrous for these groups, completely disrupting their migrations. Less reliable sea ice that is thin or forms later in the year could result in a population decline for both groups.
Our Planet is streaming on Netflix now. |
Nearly three decades after the world banned chemicals that were destroying the atmosphere’s protective ozone layer, scientists said Thursday that there were signs the atmosphere was on the mend.
The researchers said they had found “fingerprints” indicating that the seasonal ozone hole over Antarctica, a cause of concern since it was discovered in 1984, was getting smaller. Although the improvement has been slight so far, it is an indication that the Montreal Protocol — the 1987 treaty signed by almost every nation that phased out the use of chemicals known as chlorofluorocarbons, or CFCs — is having its intended effect.
Full recovery of the ozone hole is not expected until the middle of the century.
“This is just the beginning of what is a long process,” said Susan Solomon, an atmospheric chemist at the Massachusetts Institute of Technology and lead author of the study, published in the journal Science.
Ozone high in the stratosphere protects life on Earth by absorbing damaging ultraviolet rays from the sun. But ozone is destroyed by reactions with chlorine and other atoms that are released by CFCs and similar chemicals, which were used for decades as refrigerants and propellants.
More ultraviolet radiation leads to increased incidence of skin cancers, cataracts and other health problems.
Scientists who pushed for the Montreal Protocol always acknowledged that recovery of the ozone layer would be very slow, because CFCs linger in the stratosphere for a long time.
“Think of it like a patient with a disease,” Dr. Solomon said. “First, it was getting worse. Then it stopped — it was stable but still in bad shape.”
Now, she said, “as molecules slowly decay away from the atmosphere, it’s getting just a little bit better.”
David Fahey, a research physicist at the National Oceanic and Atmospheric Administration who was not involved in the ozone study, said Dr. Solomon’s work “gives us a critical level of confidence that we are moving in the direction we want to see.”
It also reinforces that the Montreal Protocol has been a “resounding success,” Dr. Fahey said. “It stands head and shoulders above any other environmental treaty.”
While ozone has been depleted in the Arctic and mid-latitude regions as well, the destruction over Antarctica is greater, in part because temperatures there are so cold. Technically, the depleted area is not a hole, but rather a large region of the stratosphere — in some years, it is larger than the North American continent — where the concentration of ozone is below a certain threshold.
Because the reactions that cause ozone to be destroyed require sunlight, this thinning begins each year in late August, when winter in the Southern Hemisphere is ending, and reaches its maximum by September and October. The ozone layer recovers later in the year, and then the cycle repeats.
Ozone depletion is a complex process that is affected by variables like temperature, wind and volcanic activity. So Dr. Solomon and the other researchers looked at data from satellites and balloon-borne instruments taken each September. That made it easier to separate the effects of the decline in chlorine atoms from the other factors. They also compared the data with the results of computer models.
The study found that the ozone hole had shrunk by about 1.5 million square miles, or about one-third the area of the United States, from 2000 to 2015.
A 2009 analysis by NASA scientists showed what the world would have been like had there been no Montreal Protocol, and CFC production and use had continued. By midcentury, their simulations showed, the ozone hole would have covered the world, and at noon on a clear summer day in a city like New York, the UV index, a measure of the damage the sun can do, would have caused a noticeable sunburn on unprotected skin in 10 minutes.
That dire situation has been avoided thanks to society’s collective efforts, Dr. Solomon said.
“We are seeing the planet respond as expected to the actions of people,” she said. “It’s really a story of the public getting engaged, policy makers taking action, and business getting engaged.” |
Are We Martians?
Studies of the earliest life forms on Earth can provide insights into how to look for traces of possible life on Mars.
What does habitability mean? Quite simply, it means liveable. The definition of the limiting conditions for life has expanded dramatically in recent decades. Today’s “bottom line” includes: liquid water, carbon, an energy source, and nutrients.
The planetary habitable zone is the zone where it is possible to have water existing as a stable liquid on the surface of the planet. When our solar system first began to take shape, the sun was weaker, and the early Earth may not have been in a habitable zone (Mars definitely was not). Nevertheless, both planets had liquid water at their surfaces for various reasons – the planets were hotter and they had dense atmospheres of carbon dioxide, possibly mixed with other greenhouse gases, that ensured the preservation of water at their surfaces. Mars may even have had a small ocean in its northern hemisphere. Thus early Mars was habitable and may, in fact, have resembled the Earth of the Archaean period.
Researcher Frances Westall is not sure that we will find evidence of living organisms on Mars today, but she believes there is a chance we will find evidence that clearly demonstrates that life once lived on Mars. Westall says that finding traces of life depends on knowing what to look for and how to recognise the evidence if and when we find it. It could even be that life on Earth originated on early Mars. The red planet is smaller than the Earth and it cooled faster. Liquid water would have condensed on its surface well before it did so on Earth. If life appeared on Mars before it appeared on the Earth, it could have been transported to Earth via a meteorite – meaning that we might be descended from Martians! (Read about the research on Martian meteorite ALH84001 and more about the unlikelihood of panspermia – the seeds of life on Earth being sown from an extraterrestrial body.)
The search for life takes a lifetime!
Frances Westall is a member of the CGB Scientific Advisory Committee. She is Director of Research at the Centre for Molecular Biophysics, a CNRS lab located in Orléans, France (read an interview in French) and the European Space Agency’s ExoMars’ microscope team coordinator. She visited CGB recently and gave a fascinating presentation about the science (and compromises) involved in preparing for a Mars Mission.
If only a geologist could spend a day on Mars!
Unfortunately this is not possible; researchers and policy makers have to try to balance the ideal with the possible in terms of planning a mission to Mars. Since 1996 Westall has been a part of the team planning the ESA-NASA ExoMars missions – a truly international cooperation between Europe and the USA. (Read more on the European Space Agency website.)
Questions being addressed by Westall and CGB:
- Did life once exist on Mars?
- What can Mars tell us about the way our solar system evolved, the early Earth, the way life may have begun on Earth?
- And, what can Early Earth studies teach us about what to look for on Mars?
The European Space Agency believes that human beings have an innate need to explore and understand, and that space exploration is a natural outgrowth of this. Mars is the first step. |
From Life to Death
Extermination camps were widespread during the Holocaust, as Jews and other minorities were sent there to be killed by the Nazis. At these extermination camps, prisoners were likely to be killed by means of a gas chamber, a mass shooting, or a medical experiment gone wrong. Some of the most well-known extermination camps were Auschwitz and Treblinka; in total, the Nazis built six camps whose main function was extermination.
Auschwitz is the most well-known extermination camp, and for good reason. It was the largest of the extermination camps that the Nazis built, and when it first opened, it had purposes besides extermination. Auschwitz started as a forced-labor concentration camp and only later became primarily an extermination camp. Its main method of extermination was the gas chamber, as it was the quickest and most efficient way to execute a large number of people in a relatively short period of time. View this website to see images taken at Auschwitz during the Holocaust to experience what life was really like for those who passed through the camps.
Another infamous extermination camp of the Holocaust was Treblinka. Located about fifty miles outside Warsaw, Poland, it was responsible for the extermination of around 265,000 Jews throughout the Holocaust. The camp was run with as much secrecy as possible, as its staff did not wish to face rebellion from the Jews who were brought there. The camp first opened with only three gas chambers, but by the end of the war it had expanded to six chambers. However, not all of the people who were deported to the camp went straight to the gas chambers. Some were picked to work in the fields for a few days at a time, and once they grew weak, they were sent to the gas chambers for extermination. View more pictures of life at Treblinka here. Both Auschwitz and Treblinka relied on gas chambers to exterminate their victims, but a few other methods were also used throughout the period, such as mass shootings and medical experiments gone wrong. |
Henry Fairfield Osborn was the first curator of vertebrate paleontology at the American Museum of Natural History, in New York, and its first scientist-president. He was hired in 1891, just 15 years after the museum opened. One of Osborn's most famous projects involved the naming and description of what was once only a modestly important dinosaur discovered in Montana, Tyrannosaurus rex. It was gigantic, fierce looking, and extraordinarily popular as an exhibit skeleton mounted in the museum's halls, and Osborn helped make the shorthand label for this fascinating beast, T. rex, a household expression, fitting even to become the marquee of a British rock and roll supergroup of the early 1970s.
More recently, however, after being part of our vocabulary for a century, that name was challenged. Paleontologists recently discovered that the species we know as T. rex had an earlier christening. Manospondylus gigas is its "real" name. The reason? Edward Drinker Cope, a self-taught paleontologist, proposed and published that name in 1892, about a dozen years before Osborn announced T. rex. Since it was based on a single bone, Osborn could not have known that Cope's M. gigas was the same species as his. But with many more fossils that appear to be from the famous "tyrant lizard," what should be done with multiple names?
Goals of the Linnaean naming system
Problems like this, the accidental duplication of names, were obvious to the father of taxonomy, Carolus Linnaeus. His response was to establish a logical, uniform approach to the naming process in the hope that it would be recognized and accepted the world over (see our Taxonomy I: What's in a name? module). Linnaeus knew that the creation of duplicate, different-sounding names for the same species, called taxonomic synonyms, was only one of many barriers relating to names that could impede accurate scientific exchange. Differences in language and culture, the idiosyncrasies of individual scientists, difficulty obtaining the writings of other scientists, unavoidable mistakes such as typographical errors – all can contribute to confusion and a host of problems when identifying and cataloging organisms. Thus, the central idea behind the Linnaean taxonomic system was to provide a stable, enduring list of names so that we can communicate effectively in all the fields of the life sciences, retrieve information efficiently, and be confident that each species name is one of a kind.
The solution that Linnaeus adopted was the consistent use of a two-name system called binomial nomenclature. He recognized that by giving every species a fixed pair of names, analogous to our "family" and "given" names, each one could be designated uniquely. The titles for the two official names were those that John Ray, a British naturalist, had proposed a century earlier, the genus and species. In practice, these terms are tied together and used in combination. The combination is presented as a sequence, first the genus name (plural genera, related to the word generic) and then the species name (plural species, related to the word specific), as in the binomial Homo sapiens.
Taxonomists have also extended this reasoning to employ a three-name set, a trinomial, which applies to the subspecies of a species. Gorilla gorilla gorilla (Western Gorilla) and Gorilla gorilla beringei (Eastern Gorilla) are examples. That scientists still quibble over whether or not the Western and Eastern populations of gorillas ought to be interpreted as different species or merely different subspecies doesn't really matter. As species, they would be known as G. gorilla and G. beringei; as subspecies, we'd call them G. gorilla gorilla and G. gorilla beringei. Trinomials even apply to our own species, as shown by the recent naming of an extinct subspecies from Ethiopia that was based on fossils that are about 160,000 years old. It is called Homo sapiens idaltu to contrast it with all of us modern people – Homo sapiens sapiens.
Other rules for naming species
For clarity and consistency, there are other rules governing the naming of species, among them the following (a short code sketch after the list illustrates two of them):
- Generic and specific names are italicized when typewritten.
- The first letter of the genus name is always capitalized, while the species name is entirely lowercase.
- Species names are constructed in the Latin form, in the tradition of the early European taxonomists.
- When more than one name is attributed to a single species, the oldest published synonym name takes precedence over others.
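To make the case and priority rules concrete, here is a toy Python sketch (only an illustration; the function names are invented, and this is not software taxonomists actually use):

def format_binomial(genus, species, subspecies=None):
    # case rules: genus capitalized, species (and subspecies) lowercase
    parts = [genus.capitalize(), species.lower()]
    if subspecies:
        parts.append(subspecies.lower())
    return " ".join(parts)

def senior_synonym(synonyms):
    # principle of priority: the oldest published name takes precedence
    return min(synonyms, key=lambda name_and_year: name_and_year[1])

print(format_binomial("HOMO", "Sapiens"))  # Homo sapiens
print(senior_synonym([("Tyrannosaurus rex", 1905), ("Manospondylus gigas", 1892)]))
# ('Manospondylus gigas', 1892): by priority alone, Cope's name would win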
Of course, the rules of Linnaean nomenclature apply only to official names, not to informal, everyday language, which is virtually impossible to track and enforce. Thus an informal reference to a species is simply written lowercase in plain text (e.g., gorilla) while a formal reference, for example to the genus, would appear in italics (e.g., Gorilla). As you have probably noticed, our gorilla example is also an unusual case of taxonomic nomenclature, where the common name and the scientific name are one and the same. It is also unusual for its historical simplicity – the formal genus name, Gorilla, has a fairly straightforward history, much less complicated than the story of the name for chimpanzees, Pan, as you see from the table below. Gorillas have only been given two generic (i.e., genus) names, and the oldest is easily decided as the proper one for us to use. Chimpanzees, on the other hand, have been given at least 11 different generic names. Its first name, Troglodytes (also once used for gorillas), is not the one we use today because before it was applied to chimps, it was given to a very successful bird, the wren, Troglodytes troglodytes. The tiny wren trumps the chimp in this case, since the rules of zoological nomenclature apply equally to all animals.
Examples of conflicts among scientific names
Genus Gorilla (I. Geoffroy, 1853)
1853: Gorilla I. Geoffroy, based on Troglodytes gorilla (Savage and Wyman, 1847)
1913: Pseudogorilla Elliot, based on Gorilla mayema (Alix and Bouvier, 1877)
Genus Pan (Oken, 1816)
1812: Troglodytes E. Geoffroy, based on Troglodytes niger (E. Geoffroy, 1812)
1816: Pan Oken, based on Pan africanus (Oken, 1816)
1828: Theranthropus Brookes, based on Troglodytes niger (E. Geoffroy, 1812)
1841: Hylanthropus Gloger, based on Simia troglodytes (Blumenbach, 1799)
1860: Pseudoanthropus Reichenbach, proposed as a replacement for Troglodytes
1866: Engeco Haeckel, based on Simia troglodytes (Blumenbach, 1799)
1866: Pongo Haeckel, replacement for Troglodytes
1895: Anthropithecus Haeckel, correction for Anthropopithecus
1905: Fsihego de Pauw, based on Fsihego iturensis (de Pauw, 1905)
When we use formal taxonomic names in the literature, the names themselves are often accompanied by a compact citation that identifies its author and date of publication, like this: Gorilla gorilla gorilla (Savage, 1847). Which brings us back to Henry Fairfield Osborn and his unavoidable nomenclatural faux pas. Tyrannosaurus rex (Osborn, 1905) is a name that breaks one of the cardinal rules of taxonomy, the principle of priority, which requires that in cases where taxonomic synonyms are known to occur, the first name given to a species is recognized as the authentic one. The bottom line for T. rex is that it is not being replaced by its older synonym, Manospondylus gigas (Cope, 1892), for a more practical reason: It is so familiar to us all. Consider how much confusion a taxonomic change would bring to the world of science, where T. rex is an accepted name, and to the culture at large, where T. rex is one of the world's most famous dinosaurs.
One of the interesting lessons this situation highlights is the way scientists voluntarily abide by Linnaean practices. This is not simply to avoid the chaos that would occur if they did not. When scientists describe new species, they do so in a journal article or other form of publication, and that work is subject to review by their peers (see our Module on Peer Review in Scientific Publishing). If scientists were to disregard a well-established procedure, their peers would likely not allow it to be published. Disputes and questions over Linnaean names can still arise, but most resolve themselves in the literature, where scientists present not only their research about species biology and evolution but also historical information about taxonomic names – all in an effort to keep the names straight. In cases where confusion persists, or adhering to the rules might upset the stability of names, scientists may petition one of the decision-making bodies recognized by scientists around the world for an exception to the rules. These commissions also introduce changes to the taxonomic code from time to time.
On January 1, 2000, one such amendment written by the International Commission on Zoological Nomenclature came into effect. In the spirit of Linnaeus, always hoping to maintain the stability of taxonomic names, a new ruling upheld the common sense solution to the dilemma of Tyrannosaurus vs. Manospondylus. The Commission provided a clear, legal definition of what is meant by general acceptance, as opposed to rare usage, of a taxonomic name. If a name is in use for 50 years, it does not have to revert to a rarely used prior name that may be lurking in the shadows. Osborn's T. rex has been among us, called by that name, for a hundred years, almost as long as Manospondylus gigas lay quietly buried in the literature. So, wisely – or might it be expectedly? – the challenge to the reign of Tyrannosaurus rex has bitten the dust.
Brontosaurus: A case of mistaken identity
In contrast, the name of another giant, Brontosaurus (Marsh, 1879), has been sunk, as taxonomists are apt to say, when a replacement name wins out. It was changed to Apatosaurus (Marsh, 1877). Both terms were widely used for a long time but here, too, paleontologists learned recently that the bones bearing those names actually came from one species. The oldest name for that species is Apatosaurus ajax.
The consensus among paleontologists is that a name change in this case would not be too upsetting, and the giant herbivore's more familiar name "Brontosaurus" has been set out to pasture. As further insult to this case of mistaken identity, Apatosaurus is also suffering a required cosmetic makeover. For decades this gigantic animal, originally found headless, was displayed grandly and whole at the American Museum of Natural History and elsewhere, but with the wrong face. During the 1970s, paleontologists finally were able to match up skulls and skeletons with certainty, only to prove what was long suspected. The tiny heads chosen long ago as a best fit to crown those gigantic bodies were accidental imposters: They belonged to another dinosaur called Camarasaurus. So, "Brontosaurus," who is actually Apatosaurus, got its head size fixed and a new name as well, because even giants have to follow the rules.
Carolus Linnaeus, the “father of taxonomy,” developed a uniform system for naming plants and animals to ensure that each species has a unique name. This module outlines rules of forming two-term taxonomic names according to genus and species. The module gives examples of naming controversies and describes how they were resolved, including by bending the rules in regard to certain famous beasts.
The system of binomial nomenclature was Linnaeus' response to the need for a clear, distinct naming of species that would be recognized around the world and would reduce the chance of one species being known by multiple names.
Scientific names are always written in italics, with the genus capitalized and the species lowercase, and should sound as though they are Latin. |
Ward's Book of Days.
Pages of interesting anniversaries.
What happened on this day in history.
On this day in history in 1397, Geoffrey Chaucer first recounted The Canterbury Tales.
Chaucer was a civil servant who introduced English as the language of the court, by writing literature in the vernacular.
Geoffrey Chaucer was born in 1342 or 1343, at London, the son of a vintner to the king. The family had been in royal service for four generations. The original Chaucers were French; their name was Chaussier, meaning shoemaker, and they continued to speak French, even after a century of residence in England. In those days, French was the language of the ruling class, aristocrats being descended from the Normans who had conquered England in 1066. English was spoken by the peasantry and the working people, while the middle class usually spoke both languages. Anyone who wanted to make social advancement had to speak French, and the English language was dying out. The young Chaucer, born to a French-speaking royal attendant, was to reverse the decline of English, and work such changes that would lead to the expunging of the French language from English society.
Records show that in 1357, Chaucer was employed at court, in the service of one of Edward III's sons, Prince Lionel. By 1359, he was employed in the king's army in Rheims, taking part in the Hundred Years War, and in 1360, he became involved in protracted peace negotiations, on behalf of the king, with the French army. We know that he was involved in diplomatic missions to European cities. When at Florence, he read the works of the Italian author Boccaccio, who wrote simple but poignant stories, and preferred to use the Italian language, the speech of the country people, rather than Latin, the language of choice for the gentry.
Influenced by Boccaccio, Chaucer started to write in English. He first wrote The Parlement of Foules, an allegorical tale, in which various species of birds, representative of a variety of human characters, voice their respective opinions and vent their anger on one another. He also began work on The Canterbury Tales, a series of concise yarns, narrated by a variety of individuals from different walks of life. The Tales are down to earth stories about the lives of the aristocracy, the gentry, the middle class and the working class. The stories are about England and they stress the unity of the English state and people. They are written in English, the language of the peasants, and certain words, including the word 'arse', occur more often than would be expected for a polite language.
Chaucer seems to have written The Tales in the hope of having English accepted as a courtly language. On 17th April 1397, he was given the privilege of reciting his own works before the ladies and gentlemen of the court. In those days, public entertainment consisted of a reader, reciting works from a book. Although readers were scarce and books scarcer still, the king could always afford to pay for a narrator to read at his court.
From the moment that Chaucer read out his works, the use of English expanded and continued to grow. Court papers were written in English, and gentlemen at court began to use English amongst themselves and in communications with the king. Lawyers and judges dispensed with Norman French, the usual language of law, and started writing their judgements in English. Much of the increase in the use of English was fuelled by anti-French sentiment. England was theoretically at war with France, fighting the Hundred Years War, although there was not a great deal of fighting taking place. The French king, Charles VI, had ordered all English nobles who had estates in France to surrender their land in England and return to France, or to forfeit their French holdings. Any noble who had given up his French lands, for the sake of his English possessions, would naturally now consider himself to be English.
By the time Henry IV came to the throne in 1399, English had become so customary that no-one was surprised when he became the first king to take his Coronation Oath in English. In so doing, he settled the fate of French in England. The French language withered and died and English became the norm.
©2006 Ward's Book of Days
We found 21 resources with the concept present participles
Grammar Presentations: Gerunds, Superlatives, and More
3rd - 8th CCSS: Adaptable
Express yourself in the correct tense, verbal phrase, and sentence structure with a series of illustrative grammar presentations. Helpful for elementary and middle school writers alike, the slideshows are a strong instructional guide for any grammar lesson.
Students as "Grammarians": Discovering Grammatical Rules, Lesson on Reduced Clauses of Reason
6th - 8th
In this grammar activity, middle schoolers work in pairs to complete 24 questions on reduced clauses of reason. In some questions, students write answers to specific prompts, and in others they fill in blanks in sentences.
This set of Energy & Environment Management Multiple Choice Questions & Answers (MCQs) focuses on “Acid Rain”.
1. Which one of the following cause acid rain?
a) Water pollution
b) Soil pollution
c) Air pollution
d) Noise Pollution
Explanation: Acid rain is mainly a result of air pollution. When any type of fuel is burnt, many different chemicals are produced. Some of these gases react with the tiny droplets of water in clouds, and the rain that falls from those clouds is acid rain.
2. What are two acids formed when gases react with the tiny droplets of water in clouds?
a) Sulphuric acid and nitric acid
b) Hydrochloric acid and nitric acid
c) Sulfurous acid and acetylsalicylic acid
d) Sulphuric acid and hydrochloric acid
Explanation: The gases of nitrogen oxides and sulphur dioxide react with the tiny droplets of water in clouds to form sulphuric and nitric acid. The rain from these clouds falls as a very weak acid known as 'acid rain'.
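As a simplified sketch of that chemistry (net reactions only; the real atmospheric pathways involve additional oxidants such as ozone and hydroxyl radicals), the two acids form roughly as:

\begin{align*}
2\,\mathrm{SO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2SO_4} \ \text{(sulphuric acid)} \\
4\,\mathrm{NO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} &\longrightarrow 4\,\mathrm{HNO_3} \ \text{(nitric acid)}
\end{align*}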
3. What is the nature of acid rain?
Explanation: The nature of acid rain is corrosive. This corrosive nature produces many forms of environmental damage. It affects rivers, vegetation, soil and organisms. Acid rain is known to cause widespread environmental damage.
4. In which of the following ways does acid rain affect plants?
a) By nourishing the nutrients from the soil
b) By increasing the nutrients from the soil
c) By removing nutrients from the soil
d) By balancing the nutrients in the soil
Explanation: Acid rain indirectly affects plants by removing nutrients from the soil in which they grow. Acid rain dissolves and washes away the nutrients in the soil that are essential for plants.
5. What is the result of acid rain when it falls into water bodies?
a) The water becomes acidic
b) The water becomes pure
c) The water increases its nutrient value
d) The water increases its level
Explanation: Acid rain that falls into or flows as water into rivers, lakes, wetlands and other water bodies causes the water in them to become acidic. This affects plant and animal life in aquatic ecosystems.
6. Acid rain does not cause any environmental damage.
Explanation: Acid rain causes many forms of environmental damage. It affects plants by removing nutrients from the soil. It affects plant and animal life in ecosystems. It affects the food chain. It damages buildings, vehicles and other structures made from stone or steel.
7. Which one of the following ways can prevent acid rain?
a) Increase the emission of sulfur dioxide and nitrogen oxides
b) Decrease the emission of sulfur dioxide and nitrogen oxides
c) Increase in the emission of hydrochloride and phosphate
d) Decrease in the emission of hydrochloride and phosphate
Explanation: One of the ways to stop the formation of acid rain is to decrease the emission of sulfur dioxide and nitrogen oxides into the surroundings. This may be achieved by using much less energy from fossil fuels in power plants and in industries.
8. The Taj Mahal in India is affected by____________________
b) Acid rain
c) Water pollution
d) Soil pollution
Explanation: Acid rain and dry acid deposition damage buildings. The acid corrodes materials, causing extensive damage and ruining historic buildings. For instance, the Taj Mahal in India has been affected by acid rain.
9. Which of the following is the best way to reduce acid rain in soil?
a) By adding sulphur to the soil
b) By adding nitrogen to the soil
c) By adding oxygen to the soil
d) By adding limestone to the soil
Explanation: It is difficult to protect soil from acid rain directly, but powdered limestone can be added to the soil by a process known as liming to neutralize the acidity of the soil.
10. How can we control acid rain caused by car exhaust fumes in the atmosphere?
a) By burning more fuels
b) By using old engine vehicles
c) By using ignition
d) By using catalytic converters
Explanation: In catalytic converters, the exhaust gases are passed over metal-coated beds that convert harmful chemicals into less harmful ones. These are used in cars to reduce the effects of exhaust fumes on the atmosphere.
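For illustration, two of the net reactions a three-way catalytic converter promotes (simplified; converters also oxidize unburnt hydrocarbons) can be written as:

\begin{align*}
2\,\mathrm{CO} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{CO_2} \\
2\,\mathrm{NO} &\longrightarrow \mathrm{N_2} + \mathrm{O_2}
\end{align*}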
11. Which value is the most acidic on the pH scale?
Explanation: Acidity is measured using a scale called the pH scale. This scale goes from 0 to 14: 0 is the most acidic and 14 is the most alkaline. Acid rain is much weaker than strong acids; it is never acidic enough to burn the skin.
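For reference, pH is a logarithmic measure of hydrogen-ion concentration, so a change of one pH unit means a tenfold change in acidity:

\[ \mathrm{pH} = -\log_{10}[\mathrm{H^+}] \]

On this scale, even unpolluted rain is mildly acidic (around pH 5.6, from dissolved carbon dioxide), while acid rain typically falls in the range of about pH 4 to 4.5.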
12. Who was the first to use the phrase “Acid Rain”?
a) Robert Angus Smith
b) Ernest Flower
c) Elmer Joseph Clark
d) Christ Ralph
Explanation: The phrase acid rain was first used in 1852 by Robert Angus Smith, who was a Scottish chemist. In his investigation of rainwater chemistry near industrial cities in England and Scotland, he coined the phrase 'acid rain' for the very first time.
13. When did the "Clean Air Act" in the United States come into force?
Explanation: In the United States, reductions in acid deposition stem from the Clean Air Act of 1970 and its Amendments of 1990. These Amendments began the regulation of coal-fired power plant emissions, a development that significantly reduced sulphur dioxide in the United States.
14. Natural sources also cause acid rain.
Explanation: The major natural cause of acid rain is volcanic emission. Volcanoes emit acid-producing gases that create higher-than-normal amounts of acid rain and other acidic forms of precipitation, such as fog and snow.
The Antikythera mechanism is by far the most sophisticated piece of technology that survives from the ancient world. This corroded mass of battered bronze gearwheels languished at the bottom of the sea for more than 2000 years, before being salvaged by sponge divers in 1901.
The device was originally a mechanical computer (some people prefer to say calculator), which used Greek astronomers' state-of-the-art theories to model the movements of the Sun, Moon and planets in the sky.
Well that's what scholars thought, anyway. But a new paper on the mechanism, published earlier this year in the Journal of the History of Astronomy, suggests that they might have got things back to front. Jim Evans, an expert in the history of astronomy based at the University of Puget Sound in Tacoma, Washington, and his colleagues Alan Thorndike and Christian Carman have made the most accurate measurements yet of the Antikythera mechanism's zodiac dial, used to display the positions of celestial bodies in the sky.
Evans described his work at an event held in March at the Getty Villa in Los Angeles, at which he and I discussed the Antikythera mechanism. His analysis has various technical implications for the way that the device displayed information - to be honest when I first heard him speak I thought it was the kind of thing that only true Antikythera geeks would get excited by. But when I went through the paper in more detail I saw this knock-out sentence, right at the end:
"Finally, if the maker of the Antikythera mechanism used gears to model Babylonian astronomical cycles, and if, as is likely, the mechanism reflects a craft tradition going back to the time of Archimedes, this raises the fascinating, but unprovable, possibility that epicycles and deferents entered Greek astronomy, not because of natural philosophical considerations, but because some geometer applied a geometrical image of gearing to a cosmic problem."
The theory of epicycles - the idea that celestial bodies moved in small circles as they traced larger orbits around the Earth - is arguably the most famous aspect of Greek astronomy. Although often scoffed at, it was actually very good at explaining the apparent movements of the Sun, Moon and planets through the sky, and it pretty much defined our view of the cosmos until Kepler came up with the idea of elliptical orbits in the early 17th century AD.
What Evans and his colleagues are suggesting is that geared devices like the Antikythera mechanism didn't model this theory after all. They inspired it.
That's huge. It would give mechanical models a starring role in the history of astronomy, in other words in the way that we have come to understand the universe around us. If Evans is right then without models like the Antikythera mechanism, there would have been no epicycles, and for 2000 years we would not have seen the cosmos in the way that we did. Our modern understanding of how the solar system works would presumably still be the same, but the history of how we reached this point would be dramatically different.
I've written a feature about this latest work in this week's edition of Nature. But here's a summary of what led Evans and his colleagues to suggest this idea.
First, a bit of background about epicycles. The Greeks generally thought that the celestial bodies in the solar system - the Sun, Moon and five known planets - were orbiting Earth. They saw these celestial bodies as divine, and believed that their orbits must therefore consist of perfect circles.
But this isn't what you see when you look at the sky. The Sun and Moon (because the orbits of the Earth and Moon are actually ellipses, not circles) appear to speed up and slow down. And the planets (because they're orbiting the Sun, not the Earth) have a rather inconvenient habit of changing direction.
To explain this, the Greeks came up with the idea that celestial orbits were made up of different circles superimposed on one another. For example they reckoned that each planet traced a small circle - an epicycle - at the same time as moving around its larger orbit - the deferent. Similar theories of the Sun and Moon's motion involved superimposing one circle onto another with a slightly different centre.
When researchers who had X-rayed the surviving pieces of the Antikythera device published a reconstruction of its workings in 2006, they noted a crucial piece of gearing that was used to drive the Moon pointer. A "pin-and-slot" mechanism allowed one gearwheel to drive another around a slightly different centre, giving an undulating variation in speed. This pin-and-slot mechanism was itself mounted on a bigger 9-year turntable, effectively modelling how the orientation of the Moon's ellipse rotates around Earth.
This seemed to be a lovely demonstration of an epicyclic lunar theory used by the Greeks, translated into wheels of bronze. This type of gearing, in which gear wheels ride round on other wheels, is still described as epicyclic.
Although the relevant gearing for the Sun and planets does not survive, researchers assumed that if the mechanism was using epicyclic gearing to model the motion of the Moon, it was probably doing the same thing for these other bodies too. In a reconstruction made by Michael Wright, epicyclic gearing models the motions of the planets.
So here's the new bit. Evans has now shown that the Antikythera mechanism may not have worked this way after all. He used X-ray images to accurately measure the divisions on the device's main dial. This dial has two concentric scales, one showing the 360 degrees of the zodiac, and one showing the 365 days of the year, so that pointers moving around it can show both the date, and the position of celestial bodies in the sky.
Just less than a quarter of this dial survives. The 360 zodiac divisions should of course be very slightly wider than the 365 day divisions. But Evans found that although evenly spaced, the zodiac divisions in this surviving portion are actually closer together. To make a full circle, other parts of the zodiac scale must compensate by being extra widely spaced.
This was done on purpose, Evans believes, to model the uneven progress of the Sun through the sky. Instead of the Sun pointer moving at varying speed around an equally divided dial, it moved at constant rate around an unequally divided dial.
Evans' analysis suggests that half of the zodiac dial had extra-narrow divisions - a "fast zone" - and half had extra-wide divisions - a "slow zone". This scheme would have modelled the Sun's motion reasonably accurately and is identical to an arithmetic theory that Babylonian astronomers used for the Sun, known as System A. The Greeks borrowed other Babylonian astronomical theories, so it's not a huge stretch to think that they used this one too.
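To make the arithmetic concrete, here is a minimal Python sketch of a two-zone, System A-style solar scheme. The zone sizes and rates are illustrative round numbers chosen only so the Sun completes 360 degrees in a year - they are not measurements from the mechanism or the historical Babylonian parameters.

# Two-zone, "System A"-style solar model: the Sun's indicated longitude
# advances at one constant rate through a "fast zone" of the zodiac and
# at a slower constant rate through the rest, completing 360 degrees in
# one year. All parameter values below are illustrative assumptions.

YEAR_DAYS = 365.25                  # days in a year
FAST_DEG = 180.0                    # zodiac degrees assigned to the fast zone
SLOW_DEG = 360.0 - FAST_DEG         # zodiac degrees assigned to the slow zone
FAST_RATE = 1.05                    # degrees per day in the fast zone (assumed)

FAST_DAYS = FAST_DEG / FAST_RATE    # days the Sun spends in the fast zone
SLOW_RATE = SLOW_DEG / (YEAR_DAYS - FAST_DAYS)   # fixed by the year length

def solar_longitude(day):
    """Indicated longitude of the Sun, in degrees, measured from the
    start of the fast zone, `day` days after entering it."""
    t = day % YEAR_DAYS
    if t < FAST_DAYS:
        return FAST_RATE * t
    return FAST_DEG + SLOW_RATE * (t - FAST_DAYS)

# The pointer's progress is uneven: it crosses the first half of the
# zodiac in about 171 days and the second half in about 194 days.
for d in range(0, 361, 60):
    print(f"day {d:3d}: {solar_longitude(d):6.1f} degrees")

On an unequally divided dial, the same uneven motion is produced with the pointer itself turning at a constant rate - which is exactly the arrangement Evans proposes.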
If Evans is right (and others in the field are taking his suggestion seriously) then the Antikythera mechanism did not use epicyclic gearing to model the movement of the Sun after all. It used conventional geartrains to model much older astronomical theories.
This may therefore be the case for the planets too. Evans thinks that they were shown on five individual dials, perhaps showing the timings of events in their cycles rather than their position in the sky - again, no epicycles required.
As discussed above, the Antikythera mechanism did use epicyclic gearing to model the varying motion of the Moon. But Evans points out that the amplitude of variation encoded in the pin-and-slot mechanism is closer to that used in older arithmetic theories than in the epicyclic theory used by the Greeks.
He believes that rather than modelling epicycles directly, a mechanic looking for a way to represent an older, arithmetic theory of the Moon's motion may have hit upon the idea of using gearwheels mounted on other wheels to produce the cyclic variation that he was after. In other words the inventor of epicycles was not an astronomer, but a mechanic.
Once astronomers realised that epicyclic gearing could closely model what was going on in the sky, they could have borrowed the idea of superimposed circles, and incorporated it into their own theories of how the cosmos was actually arranged. The clockwork universe was born.
Not much is known about when and how the idea of epicycles first arose, but the credit is traditionally given to an astronomer called Apollonius of Perga who lived in the third century BC. Geared astronomical devices seem to have arisen at around the same time - although the Antikythera mechanism itself dates from the second or first century BC, the Roman author Cicero wrote that Archimedes made one in the third century BC. So the timing is about right for such machines to have inspired the idea of epicycles.
Over the following centuries, there could have been an ongoing interaction between mechanics and astronomers as the theory was developed and refined. "Maybe we need to rethink the connection between mechanics and astronomy," says Evans. "People think of it as purely one way, but maybe there was more of an interplay."
If this had happened, wouldn't somebody have written about it somewhere? Not necessarily, says Evans. He points out that the history of astronomy has generally been written by philosophers, who would have downplayed the role of mechanics.
Greek astronomy, he says, combined "a low road of nitty gritty arithmetical calculations", with "a philosophically-oriented high road" that was based on aesthetically pleasing geometric theories. "The people who wrote the history were philosophers of the high road. If there were the influence of something mechanical, it's not surprising that it wouldn't be there in the history. The historians emphasised the clean, the pure, the philosophical."
As Evans admits, it is impossible to prove where the idea of epicycles came from. But his analysis is fascinating food for thought. And a reminder, if we needed one, not to take anything about the Antikythera mechanism for granted. |
As one is able to think with critical reflection and act with creative compassion, one begins to live the way of "deep dialogue." In this way of life, one chooses certain behaviors, including the following:
- Reach out in openness to the "Other" in the search for truth and goodness;
- Learn there are ways of understanding and embracing the world other than our own;
- Learn to recognize our commonalities and differences—and value both;
- Learn to move between different worlds and integrate them in caring, cooperative actions.
EXPLANATIONS AND EXERCISES:
DEEP DIALOGUE - Principle 1: “Reach out in openness to the 'Other' in the search for truth and goodness.”
Explanation: The presupposition to reaching out in the search for truth and goodness to those who think and value differently from us, to those who are “Other,” is that we do not know everything about, do not possess all goodness and value in, the matter at hand. Though we are convinced that what we know is true and the values we hold are good, we recognize that we must endlessly seek more truth, goodness, and value─beyond just ourselves. The primary goal of dialogue is not for us to teach, but for us to learn, to find goodness. This discovery is the very definition of dialogue, to reach out in openness to those who think and value differently from us so we can learn and adapt our behavior accordingly.
Exercise: Choose a friend or mentor and interview that person to ask how she or he understands “truth and goodness.” With the same person, choose a situation in your society where you both agree that strong disagreements exist. Can you two name something you might learn from each side of the argument?
DEEP DIALOGUE - Principle 2: “Learn there are ways of understanding and embracing the world other than our own.”
Explanation: As long as we talk only with those who think like us, we assume that our understanding and valuing of the world is not only a true picture of the world but even the only true picture of the world. But, when we open ourselves to encounter the "Other," we learn time and again that there are ways of understanding and valuing the world other than ours.
Example: If we have always lived in the city, when we go to live on a farm or in the wild, we encounter other understandings and valuing of many things to do with nature. The same is true of men opening themselves up to women, Christians to Jews, rich to poor, Germans to Chinese, etc.
Exercise: Identify a person or a group that for you is very different from you. What would you be curious to learn? What fears do you have of taking the risk to learn?
DEEP DIALOGUE - Principle 3: “Learn to recognize our commonalities and differences—and value both.”
Explanation: As we come to know the "Other," we find many things held in common. Psychologically, it is wise to focus on these commonalities, especially in the early stages of the encounter and during crises. We learn that there are levels of commonality of which we were previously unaware, and they now become a foundation for bonding. But, we must not cover over the differences, for they, too, are part of reality. Most often, however, we will find that the differences are not absolutely contradictory to our values and therefore are to be respected and valued.
Exercise: What difference, if any, does it make to find common ground in the face of differences that could divide? What, then, is the value of the differences?
DEEP DIALOGUE - Principle 4: “Learn to move between different worlds and integrate them in caring, cooperative actions.”
Explanation: We have learned that each of us “makes” our own world, built up from our experiences, reflections, and integration thereof. In deep dialogue we increasingly become aware that each person we encounter is an entire world unto her/himself. Both Judaism and Islam state this dramatically in their foundational scriptures, as was noted previously in both the Jewish Mishnah and the Muslim Qur’an.
Exercise: What practical project could you complete together with someone very different from you that would serve the wider world of both your communities? |
If we ever colonize the Moon, a critical resource will be ice found at the lunar poles. It’ll provide drinking water and rocket fuel. But the ice won’t be pretty and white, like the ice sheets of the Antarctic. Instead, it’ll be dark red or black — the result of eons of bombardment by radiation from the Sun and beyond.
A Moon-orbiting spacecraft has been measuring that bombardment for the last three years. The radiation from the Sun has been fairly low, because the Sun is coming out of a period of unusually low activity. But that’s allowed in more radiation from outside the solar system.
Over the last four billion years, the radiation would have caused chemical changes in the top layers of ice that’s hidden inside craters at the Moon’s poles, turning the ice quite dark. The radiation also would have caused chemical changes like those seen on the surfaces of comets and asteroids, creating organic molecules that are some of the building blocks of life.
No one is suggesting that life could exist on the Moon — conditions there are just too harsh. But the findings do show that the building blocks for life should be common — found just about anywhere that ice is being bombarded by radiation.
The Moon is part of a bright triangle this evening. As night falls, the orange planet Mars stands directly above the Moon. The true star Regulus, which is about half as bright as Mars, completes the triangle, well to the right of the Moon.
Script by Damond Benningfield, Copyright 2012 |
A powdery mix of metal nanocrystals wrapped in single-layer sheets of carbon atoms, developed at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), shows promise for safely storing hydrogen for use with fuel cells for passenger vehicles and other uses. And now, a new study provides insight into the atomic details of the crystals' ultrathin coating and how it serves as selective shielding while enhancing their performance in hydrogen storage.
The study, led by Berkeley Lab researchers, drew upon a range of Lab expertise and capabilities to synthesize and coat the magnesium crystals, which measure only 3-4 nanometers (billionths of a meter) across; study their nanoscale chemical composition with X-rays; and develop computer simulations and supporting theories to better understand how the crystals and their carbon coating function together.
The science team's findings could help researchers understand how similar coatings could also enhance the performance and stability of other materials that show promise for hydrogen storage applications. The research project is one of several efforts within a multi-lab R&D effort known as the Hydrogen Materials—Advanced Research Consortium (HyMARC) established as part of the Energy Materials Network by the U.S. Department of Energy's Fuel Cell Technologies Office in the Office of Energy Efficiency and Renewable Energy.
Reduced graphene oxide (or rGO), which resembles the more famous graphene (an extended sheet of carbon, only one atom thick, arrayed in a honeycomb pattern), has nanoscale holes that permit hydrogen to pass through while keeping larger molecules at bay.
This carbon wrapping was intended to prevent the magnesium—which is used as a hydrogen storage material—from reacting with its environment, including oxygen, water vapor and carbon dioxide. Such exposures could produce a thick coating of oxidation that would prevent the incoming hydrogen from accessing the magnesium surfaces.
But the latest study suggests that an atomically thin layer of oxidation did form on the crystals during their preparation. And, even more surprisingly, this oxide layer doesn't seem to degrade the material's performance.
"Previously, we thought the material was very well-protected," said Liwen Wan, a postdoctoral researcher at Berkeley Lab's Molecular Foundry, a DOE Nanoscale Science Research Center, who served as the study's lead author. The study was published in the Nano Letters journal. "From our detailed analysis, we saw some evidence of oxidation."
Wan added, "Most people would suspect that the oxide layer is bad news for hydrogen storage, which it turns out may not be true in this case. Without this oxide layer, the reduced graphene oxide would have a fairly weak interaction with the magnesium, but with the oxide layer the carbon-magnesium binding seems to be stronger.
"That's a benefit that ultimately enhances the protection provided by the carbon coating," she noted. "There doesn't seem to be any downside."
David Prendergast, director of the Molecular Foundry's Theory Facility and a participant in the study, noted that the current generation of hydrogen-fueled vehicles power their fuel cell engines using compressed hydrogen gas. "This requires bulky, heavy cylindrical tanks that limit the driving efficiency of such cars," he said, and the nanocrystals offer one possibility for eliminating these bulky tanks by storing hydrogen within other materials.
The study also helped to show that the thin oxide layer doesn't necessarily hinder the rate at which this material can take up hydrogen, which is important when you need to refuel quickly. This finding was also unexpected based on the conventional understanding of the blocking role oxidation typically plays in these hydrogen-storage materials.
That means the wrapped nanocrystals, in a fuel storage and supply context, would chemically absorb pumped-in hydrogen gas at a much higher density than possible in a compressed hydrogen gas fuel tank at the same pressures.
The models that Wan developed to explain the experimental data suggest that the oxidation layer that forms around the crystals is atomically thin and stable over time, indicating that the oxidation does not progress.
The analysis was based, in part, around experiments performed at Berkeley Lab's Advanced Light Source (ALS), an X-ray source called a synchrotron that was earlier used to explore how the nanocrystals interact with hydrogen gas in real time.
Wan said that a key to the study was interpreting the ALS X-ray data by simulating X-ray measurements for hypothetical atomic models of the oxidized layer, and then selecting those models that best fit the data. "From that we know what the material actually looks like," she said.
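As a loose illustration of that model-selection step (all spectra and model names below are made-up placeholders, not data from the study), one can score each candidate model's simulated spectrum against the measurement and keep the best fit:

# Pick the hypothetical structural model whose simulated X-ray spectrum
# best matches a measured spectrum, by least-squares residual.
# All numbers here are placeholders for illustration only.

def residual(measured, simulated):
    """Sum of squared differences between two equal-length spectra."""
    return sum((m - s) ** 2 for m, s in zip(measured, simulated))

def best_model(measured, candidates):
    """Return the name of the candidate spectrum that best fits `measured`."""
    return min(candidates, key=lambda name: residual(measured, candidates[name]))

measured = [0.10, 0.35, 0.80, 0.42, 0.15]                  # placeholder intensities
candidates = {
    "clean Mg surface":         [0.05, 0.20, 0.60, 0.30, 0.10],
    "atomically thin suboxide": [0.11, 0.33, 0.78, 0.43, 0.14],
    "thick MgO shell":          [0.30, 0.50, 0.90, 0.70, 0.40],
}
print("Best-fitting model:", best_model(measured, candidates))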
While many simulations are based around very pure materials with clean surfaces, Wan said, in this case the simulations were intended to be more representative of the real-world imperfections of the nanocrystals.
A next step, in both experiments and simulations, is to use materials that are more ideal for real-world hydrogen storage applications, Wan said, such as complex metal hydrides (hydrogen-metal compounds) that would also be wrapped in a protective sheet of graphene.
"By going to complex metal hydrides, you get intrinsically higher hydrogen storage capacity and our goal is to enable hydrogen uptake and release at reasonable temperatures and pressures," Wan said.
Some of these complex metal hydride materials are fairly time-consuming to simulate, and the research team plans to use the supercomputers at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) for this work.
"Now that we have a good understanding of magnesium nanocrystals, we know that we can transfer this capability to look at other materials to speed up the discovery process," Wan said.
Liwen F. Wan et al, Atomically Thin Interfacial Suboxide Key to Hydrogen Storage Performance Enhancements of Magnesium Nanoparticles Encapsulated in Reduced Graphene Oxide, Nano Letters (2017). DOI: 10.1021/acs.nanolett.7b02280 |
Fluoroscopy is an imaging technique that uses x-rays to create "real-time" or moving images of the body. It helps doctors see how an organ or body system functions. Fluoroscopy is used in a variety of diagnostic and therapeutic procedures. A radiologist (x-ray doctor) and radiologic technologist perform the procedures together.
Most fluoroscopic exams require the patient to lie on the table. The x-ray machine, called the "fluoro tower", is brought across the patient. The fluoro tower has a curtain on it so the patient feels like he/she is lying in a tent or small car wash! In many cases the patient is given a contrast material to highlight specific organs and/or blood vessels so they can be seen on an image.
Contrast material can be swallowed, injected or given by an enema, depending on the type of exam and what part of the body is being studied. The radiologist is able to move the fluoro tower around to follow the contrast material wherever it goes. The patient may be asked to move around in different positions so that we can take pictures of the exact area of interest. The images are viewed on a television monitor in the room, usually at the head of the x-ray table, so that the radiologist is able to see!
Radiation Safety for Fluoroscopic Procedures
The specialized equipment used in fluoroscopic procedures minimizes the amount of radiation the patient receives. Many safety measures are taken to limit the dose of radiation to the patient and other people who are in the fluoroscopic room during the exam. Some of these are:
Lead shielding to block x-rays before they are absorbed by the body. The lead shield for the patient is located on the table. Lead aprons are worn by the parents and hospital staff during the entire exam.
Pediatric fluoroscopic settings that allow the radiologist and technologist to lower the radiation dose based on the size of the patient.
Last image save function: a feature that allows the radiologist to capture an image without additional radiation.
Pulsed Fluoroscopy: a feature that decreases the amount of radiation by administering the x-rays in a pulsed, rather than constant, fashion.
Questions about your upcoming procedure? Contact your primary physician or call the Radiology front desk at 812-450-3471. |
Introduction to China - History
In this brief sketch of the origins, growth, and spread of Chinese civilization, the expansions and contractions of Chinese political control over bordering states and regions, and the periodic conquests and rule by foreign dynasties, I wish to stress that the development of Chinese civilization was not a unilineal course of development carried forth by a single growing population. Over the centuries diverse linguistic and cultural populations merged into that larger whole that we identify as Han Chinese in later historical time. Unfortunately, many Chinese historical accounts, whether written by the Chinese themselves or by Western scholars, are Sino-centric, written as if the Han had always existed and all other peoples were marginal.
Chinese Neolithic cultures, which began to develop around 5000 B.C. , were in part indigenous and in part related to earlier developments in the Middle East and Southeast Asia. Wheat, barley, sheep, and cattle appear to have entered the northern Neolithic cultures via contact with southwest Asia, whereas rice, pigs, water buffalo, and eventually yams and taro seem to have come to the southern Neolithic cultures from Vietnam and Thailand. The rice-growing village sites of southeastern China and the Yangzi Delta reflect linkages both north and south. In the later Neolithic, some elements from the southern complexes had spread up the coast to Shandong and Liaoning. It is now thought that the Shang state, the first true state formation in Chinese history, had its beginnings in the late Lungshan culture of that region.
The Shang dynasty (c. 1480-1050 B.C. ) controlled the North China Plain and parts of Shanxi, Shaanxi, and Shandong through military force and dynastic alliances with protostates on its borders. At its core was a hereditary royal house—attended by ritual specialists, secular administrators, soldiers, craftsmen, and a variety of retainers—that ruled over a surrounding peasantry. It was finally displaced by the Western Zhou dynasty, led by a seminomadic group from the northwest edge of the empire. The Western Zhou established capitals near present-day Xian and Loyang and organized a feudal monarchy with its center on the North China Plain. In 771 B.C. they were in turn overthrown by the Eastern Zhou dynasty, which was an unstable confederation of contending feudal states with weak allegiance to the center. During the political confusion of this era, the forces struggling for power discussed and canonized what were to become the key political and social ideas of later Chinese civilization. It was the age of Confucius and Mencius, of the writing of historical annals in order to gain guidance from the past, of Daoist mysticism and Legalist practicality. As Zhou power waned, war broke out between the constituent feudatory domains in what came to be called the Warring States Period (403-221 B.C. ). Between 230 and 221 B.C. one of the contentious states succeeded in overrunning and annexing the other six, and its ruler renamed himself "Qin Shi Huangdi," or "the First Emperor of Qin." China's present name derives from this initially small western kingdom of Qin, which included part of present-day Sichuan. As the first unifying dynasty, it set the model for future imperial statecraft: centralized control through appointed bureaucrats who were subject to recall; creation of a free peasantry subject to the central state for taxation, labor service, and conscription; standardized weights and measures; reform of the writing system; a severe legal code; and control over the intelligentsia. The boundaries of this first imperial dynasty were ambitiously large, stretching from Sichuan to the coast and from the plains and loess lands to the lower Yangzi hinterland. Nevertheless, it was unsuccessful in its attempts to bring the south and southwest into the orbit of empire.
The Qin was short-lived, falling in 206 B.C.; a combination of popular rebellions and civil wars brought it down. The threat of invasion by the northern nomads (the Xiongnu) was also a weakening factor, despite the construction of a unified Great Wall to mark and defend the northern boundary of empire. The Han dynasty (202 B.C. to A.D. 220) succeeded the Qin. Although also threatened by the Xiongnu confederation to the north, it was able to extend its military lines to the west and establish trade and diplomatic relations with the nomadic and oasis peoples in what is now Xinjiang. It had increasing contacts with Korea and Vietnam. It sent diplomats, troops, and settlers southward, but it never gained effective control over the independent Min-Yue state (modern-day Fujian), the Dian Kingdom (Yunnan), or the Nan-Yue Empire, which controlled the southern coasts. Han China's effective rule and settlements stretched from the northern plain to Hunan, Jiangxi, and Zhejiang, assimilating some segments of the non-Chinese peoples in these regions; however, native peoples the Chinese referred to as "Man," meaning "barbarians," still held most of the area. Meanwhile, the northern and northwestern borders were still insecure, despite the forced settlement of hundreds of thousands of Chinese settlers, and closer to home a series of widespread rebellions racked the dynasty.
From the fall of the Han dynasty in 220 until the reestablishment of a unified dynastic rule under the Sui in 589, China continued to be plagued by civil disorder, attempts to restore earlier feudal systems, and rivalries between separatist states. The state of Wu in the central and lower Yangzi valley remained largely un-Sinicized, as did the southern Yue states. Shu, in Sichuan, seems also to have been ethnically heterogeneous, whereas the northwest was under strong pressure from the proto-Tibetan Qiang peoples. The Western Chin dynasty (A.D. 265-316), which attempted to establish itself as the successor to the Han dynasty, was probably doomed from the start: it controlled only about one-third of the area that had been the Han Empire. On the northern borders the non-Han peoples rose in rebellion, in alliance with the Xiongnu. After 304, much of north China came under the rule of non-Chinese peoples, such as the Qiang, and branches of the Xianbei, such as Toba and Mujiang. Yet historical records indicate that the Toba rulers of inner China (Northern Wei dynasty, A.D. 387-534) became increasingly Sinicized, even outlawing Toba language and customs and adopting many of the reforms and ideas initiated during the Qin dynasty. Conversely, the ruling house of the short-lived Sui was closely intermarried with Turkic and Mongol elites.
The Tang dynasty that followed ( A.D. 618-907) was led, at least initially, by northwestern aristocratic families of mixed ethnic origins. Although it is generally written about as a Han Chinese dynasty, it was consciously cosmopolitan. Its armed forces included contingents of Turkic peoples, Khitan, Tangut, and other non-Chinese, and its cities opened to settlement by traders, doctors, and other specialists from Persia, Central Asia, and the Middle East. Central Asian tastes influenced poetry, music, dance, dress, ceramics, painting, and even cuisine. In the eighth and ninth centuries, coastal trade cities like Guangzhou and Yangzhou had foreign populations of close to 100,000. The thrusts of imperial expansion went south, colonizing Hunan and then Jiangxi and Fujian. The people of Guangdong Province today refer to themselves as "people of Tang" rather than "people of Han," and until the tenth century the Chinese still viewed Guangdong and Guangxi as the wild frontier. Tang armies pressed deep into southern China and the Indochina Peninsula, battling in successive campaigns against Tai, Miao, and Yue (Viet) states or tribal confederations in the provinces of the southern tier and Annam. The Nanzhao Kingdom and its successor, the Dali Kingdom (claimed today by Dai, Bai, and Yi peoples), controlled Yunnan, much of Guizhou and Sichuan, as well as parts of what is now Vietnam and Myanmar. The Tang also pressed into Central Asia and established protectorates as far as present-day Afghanistan. At times, princes from the outlying tributary states were educated at the Tang court in hopes that they would bring Chinese culture home with them.
In the years of disorder that followed the fall of the Tang, non-Chinese contenders for control of the empire pressed their claims. The Tanguts (Tangxiang), a confederation of Tibetan tribes, founded the Xixia Empire, which controlled Ningxia and Gansu until defeat by the Mongols in the thirteenth century. The Tangut rulers allied through marriage with the Khitans, who were Altaic-speaking proto-Mongolians from Inner Mongolia and western Manchuria. The Khitan northern empire (Liao dynasty, 907-1125) alternatively used tribal law or the Tang legal codes and system of government to rule over the nomads of their home areas and the Chinese of the northern plains. The Khitan developed their own writing system and encouraged an economy based on a mix of agriculture and pastoralism. Except for adherence to Buddhism, they resisted Sinicization. When their empire finally fell in 1125, some of the survivors fled to Central Asia and formed a new state in exile (Kara-Khitay), which perhaps is the origin of the term "Cathay."
The Chinese-led Song dynasty that eventually wrested control of north China from the Liao divides into two periods. The Northern Song (960 to 1126) ruled from Kaifeng but only briefly reunified inner China, which soon fell to the northern nomads. Ruzhen (Jin) and Mongols (Yuan) ruled the northern tier and North China Plain, whereas the Southern Song (1127-1279) reestablished a capital at Hangzhou and tried to consolidate rule of the south. By then, technological advances in agriculture, the growth of commerce, and the past sequence of military colonization had opened the south to Han settlement. By the Northern Song period, most of the rapidly growing Chinese population already lived south of the Huai River, having pushed out or absorbed the remaining indigenous peoples of the area. In addition to expansion of agricultural land, there was a rapid growth of towns and cities, some of them reaching one million, and many of them over 100,000 in population.
There was an uneasy peace. The Mongol rulers of the Yuan dynasty (1276-1368) soon controlled most of China. Indeed, the united tribes of the steppes and grasslands controlled most of the Eurasian landmass at that time, with their territories stretching across Central Asia into Russia and eastern Europe. They established firmer control over Tibet and defeated the Dali Kingdom in Yunnan. Their armies went deep into south-central China, staking out the boundaries of new prefectures and counties to which future dynasties would lay claim. Mongols and their allies (Uigurs and other Turkic peoples) and a small number of ethnic Chinese filled government posts. Mongol rule followed the Chinese model of local government and the law code reflected the influence of earlier Chinese law codes, but it was clearly not a Chinese state. The rulers awarded some territory to Mongol princelings or military leaders as fiefs, and both law and administrative regulations distinguished Mongols (and their close allies) from "Han-ren" (north Chinese) and "Nan-ren" (southerners). Buddhist monastery land was exempt from taxation, and clergy everywhere were under the jurisdiction of a special central government bureau usually headed by a Tibetan lama. In this period, Lamaistic Buddhism became the state religion and the lamas had influence at court. Other developments during the Yuan were the flourishing of vernacular tales, novels, and dramas and a rapid growth in science and technology (astronomy, hydraulic engineering, medicine, cartography) sparked in part by contact with the world outside of China through caravan trade into Central Asia and sea routes to Southeast Asia and India.
Widespread popular uprisings and military expulsion of the Yuan from inner China led to the restoration of a Chinese dynasty, the Ming (1368-1644). Despite this victory, the struggles against the Mongols continued. The Ming reinforced the Great Wall and built garrison posts along it, and there were many conflicts as Chinese traders and farmers attempted to settle the bordering steppe area. At the same time, pacification and control of the southern frontiers continued through government support for establishment of Han Chinese military and civilian colonies ( tuntian ) . The indigenous peoples resisted this further colonization and were sometimes joined by descendants of earlier waves of settlers; the Ming histories record 218 "tribal" uprisings in Guangxi alone, 91 in Guizhou (which included portions of Yunnan), and 52 in Guangdong. The peoples of that area (ancestral to the present-day Yao, Miao, Zhuang, Gelao, and a number of smaller groups) were either assimilated, decimated, or forced to retreat to higher elevations or westward; some populations began the migration to present-day Vietnam and Thailand. The Han-settled areas were organized into the same administrative units as prevailed elsewhere in China, governed by appointed bureaucrats. The surviving non-Han peoples were uneasily brought into that structure or, in areas where they still outnumbered the Han, were controlled by indirect rule under hereditary landed officials ( tumu or tusi ) initially drawn from the indigenous elites. As long as the rulers of these quasi-fiefdoms kept the peace and paid taxes and tribute to the state, they had a free hand in administering local law and exacting rents and labor service for their own advancement.
In 1644, the Manchu descendants of the Ruzhen won control of the imperial throne and established the Qing dynasty (1644-1911). Qing expanded central-government control to Taiwan relatively easily, but Guizhou, Yunnan, Tibet, and the northwest continued to be problematic. In the southwest, there were wide-scale "Miao Rebellions," a generic term for all indigenous uprisings in the area. There were major rebellions in the 1670s, the 1680s, and again in the late 1730s. Qing records list some 350 uprisings in Guizhou between 1796 and 1911, and this number may be an undercount. No sooner had the state established firmer control over the minority peoples of the southwest than it faced the armed uprisings of Muslim ethnic and religious movements in Shaanxi and Gansu (1862-1875), and the "Panthay" Muslim Rebellion in Yunnan (1856-1873), which had set up its capital in Dali. Even after the status of Xinjiang was changed from a military colony to a province in 1884, Muslim resistance continued until the end of the dynasty. In late Qing, the Han too were in rebellion: the Taiping Rebellion, which began among the Hakka in Guangxi and Guangdong, held most of southeast China during the 1850s and 1860s and extended its influence into Guizhou and Sichuan. The Nien Rebellion in the same period dominated in the area north of the Huai River.
What seems to have kept the Qing in power throughout was a firm alliance of interest with the Han literati—elites who filled the bureaucratic posts of empire. In time, the Qing emperors out-Confucianized the Chinese themselves, adopting and encouraging traditional Chinese political and social thought based on the Confucian canon and assimilating to Chinese cultural styles. One might even say that they identified with the Han in viewing all other ethnic groups as "barbarians."
The collapse of the Qing and the ascendancy of the Republic of China starting in 1911 initially led to disintegration and local breakaway governments. Local warlords seized political power in large areas of the country, a problem not resolved until 1927. The Japanese held control over Taiwan and Manchuria until the end of World War II. The Russian Revolution had led to the establishment of an independent Mongolia and validation of Soviet claims to contested territory in China's far north and northwest. Tibet rejected China's claims of sovereignty, and many areas in Guangxi, Guizhou, Yunnan, and the northwest continued to hold large numbers of diverse peoples who did not follow the Guomindang's call to assimilate and be absorbed into the Chinese cultural and political world. Still, a new nationalism emerged and spread during this period in response to the late Qing and twentieth-century imperialist economic and political intrusions by the European powers (treaty ports, foreign concessions, unequal treaties, and extraterritorial privileges for foreigners). The new nationalism was intensified by the Japanese invasion of inner China in 1937 and the long years of war that followed. The government of the republic and its armies retreated to the southwest, while the Communist party and its armies built up a strong independent base in Shaanxi, Ningxia, and Gansu. Guerrilla forces organized resistance within occupied China. Within a short time following the end of World War II, China plunged into a civil war between the Communist and Republican forces, culminating in the victory of the Communists and the withdrawal of the defeated Guomindang government to Taiwan. During that civil war, both sides raised slogans appealing to national pride and calling for unity in the interests of China as a nation, as they had done during the war against Japan. Members of the minority nationalities also joined in the civil war, perhaps more strongly on the Communist side because of its promises of greater tolerance of cultural diversity and greater autonomy for the minority areas. |
A cell is the smallest structural and functional unit of an organism, typically microscopic and containing cytoplasm and a nucleus enclosed in a membrane.
Animal cells do not have a cell wall or chloroplasts, but plant cells do. Animal cells are round and irregular in shape, while plant cells have a fixed, rectangular shape.
1. Bone cells are responsible for creating new bone for the growth and repair of bones.
2. Muscle cells contain protein filaments that slide past one another, producing a contraction that allows us to produce force and motion.
3. Nerve cells transmit messages from one part of the body to another.
4. White blood cells defend against infection, while red blood cells carry oxygen throughout the body and take carbon dioxide out of the body.
In multicellular organisms, cells are organized into tissues, organs, and organ systems. For example, your brain is made up of mostly nerve tissue, which consists of nerve cells that relay information to other parts of your body. Your brain is made of different kinds of tissues that function together. For example, the brain also has blood vessels that carry the blood that supplies oxygen to your brain cells. Your brain is part of your nervous system, which directs body activities and processes.
1. The digestive system consists of the mouth, the salivary glands, the epiglottis, the esophagus, the stomach, the liver, the gallbladder, the pancreas, the small intestine, the large intestine, and the rectum. You put food in your mouth, then saliva is released by the salivary glands to break down starches, starting chemical digestion. Your epiglottis closes off your windpipe, and the food travels down your esophagus and into your stomach. Here, most mechanical digestion and some chemical digestion occur. At this point, the food is a thick liquid that is released into the small intestine in increments. In the small intestine, most chemical digestion and nutrient absorption take place. Once the food enters, bile is released from the gallbladder to help break up large fat particles. Here the pancreas also releases digestive enzymes that break down carbohydrates, proteins, and fats. The water and undigested food that is left moves from the small intestine into the large intestine. Here the water is absorbed into the bloodstream and the remaining material is ready for elimination. The end of the large intestine is the rectum. Here, the waste material is compressed into solid form. It then exits the body through the anus.
2. The cardiovascular system, or circulatory system, is made up of the heart, the blood vessels, and the blood. The heart pumps blood throughout the body through blood vessels. The blood vessels, made up of arteries, capillaries, and veins, take blood from the heart to the lungs and back to the heart. They also take blood from the heart to the whole body, then back to the heart. This is how the blood travels throughout the body.
3. The nose, the pharynx, the trachea, the bronchi, and the lungs make up the respiratory system. Air enters the body through the nose or the mouth. Nose hairs trap large particles, keeping them from entering the system. From the nose, air enters the pharynx, which helps warm and moisten the air. Then it moves into the trachea, which produces mucus. Air moves from the trachea into the left and right bronchi, which take air into the lungs. The trachea and bronchi also warm and moisten the air. Inside the lungs, the bronchi branch into smaller and smaller tubes. At the end of these small tubes are the alveoli. Here, gases can move between air and blood.
As you can imagine, all three of these systems interact. For example, the water and nutrients from the digestive system get absorbed into the bloodstream, which is part of the cardiovascular system. Another example would be gases moving from the respiratory system to the cardiovascular system in the alveoli.
Paramecium is a genus of unicellular protozoa, commonly studied as a representative of the ciliate group. Paramecia are widespread in freshwater, brackish, and marine environments, and are often very abundant in stagnant basins and ponds. Like other unicellular organisms, they use many of the same structures to survive, such as cilia and vacuoles.
This Convert Measurements to Solve Weight Problems video also includes:
How do you break down and solve a word problem that involves converting weight measurements? And how can that help you decide how many strawberries are needed in a fruit salad? That's the focus of this lesson. A handy review of metric units begins the learning, along with a discussion on pulling important information from a word problem and the common mistake of ignoring unit size when deciding which weights are heavier. The core lesson walks step by step through solving a multi-step word problem. This is the third of five videos in the series.
- If you aren't logged in to Learnzillion, you will be prompted to create a free account to access all materials for this resource
- The lesson illustrates how a learner would use words to explain the thought process they used in solving the problem
- Thoughtful attention is given to each step of solving the word problems |
Injury Prevention & Treatment
Watch Out for Hypothermia: The "Indoor Cold"
Almost everyone knows about winter dangers for older people such as broken bones from falls on ice or breathing problems caused by cold air. But not everyone knows that cold weather can also lower the temperature inside your body. This drop in body temperature is called hypothermia, and it can be deadly if not treated quickly. Hypothermia can happen anywhere—not just outside and not just in northern states. In fact, some older people can have a mild form of hypothermia if the temperature in their home is too cool.
What Are The Signs Of Hypothermia?
When you think about being cold, you probably think of shivering. That is one way the body stays warm when it gets cold. But, shivering alone does not mean you have hypothermia.
How do you know if someone has hypothermia? Look for the “umbles”—stumbles, mumbles, fumbles, and grumbles—these show that the cold is a problem. Check for:
Confusion or sleepiness
Slowed, slurred speech, or shallow breathing
Change in behavior or in the way a person looks
A lot of shivering or no shivering; stiffness in the arms or legs
Poor control over body movements or slow reactions
A normal body temperature is 98.6 °F. A few degrees lower, for example, 95 °F, can be dangerous. It may cause an irregular heartbeat leading to heart problems and death.
If you think someone could have hypothermia, use a thermometer to take his or her temperature. Make sure you shake the thermometer so it starts below its lowest point. When you take the temperature, if the reading doesn’t rise above 96 °F, call for emergency help. In many areas, that means calling 911.
While you are waiting for help to arrive, keep the person warm and dry. Try to move him or her to a warmer place. Wrap the person in blankets, towels, coats—whatever is handy. Even your own body warmth will help. Lie close, but be gentle. Give the person something warm to drink, but stay away from alcohol or caffeinated drinks, like regular coffee.
The only way to tell for sure that someone has hypothermia is to use a special thermometer that can read very low body temperatures. Most hospitals have these thermometers. In the emergency room, doctors will warm the person’s body from inside out. For example, they may give the person warm fluids directly by using an IV. Recovery depends on how long the person was exposed to the cold and his or her general health.
How Do I Stay Safe?
Try to stay away from cold places. Changes in your body that come with aging can make it harder for you to be aware of getting cold.
You may not always be able to warm yourself. Pay attention to how cold it is where you are.
Check the weather forecasts for windy and cold weather. Try to stay inside or in a warm place on cold and windy days. If you have to go out, wear warm clothes including a hat and gloves. A waterproof coat or jacket can help you stay warm if it’s cold and snowy.
Wear several layers of loose clothing when it’s cold. The layers will trap warm air between them. Don’t wear tight clothing because it can keep your blood from flowing freely. This can lead to loss of body heat.
Ask your doctor how the medicines you are taking affect body heat. Some medicines used by older people can increase the risk of accidental hypothermia. These include drugs used to treat anxiety, depression, or nausea. Some over-the-counter cold remedies can also cause problems.
When the temperature has dropped, drink alcohol moderately, if at all. Alcoholic drinks can make you lose body heat.
Make sure you eat enough food to keep up your weight. If you don’t eat well, you might have less fat under your skin. Body fat helps you to stay warm.
Some illnesses may make it harder for your body to stay warm. These include problems with your body’s hormone system such as low thyroid hormone (hypothyroidism), health problems that keep blood from flowing normally (like diabetes), and some skin problems where your body loses more heat than normal.
Some health problems may make it hard for you to put on more clothes, use a blanket, or get out of the cold. For example:
Severe arthritis, Parkinson’s disease, or other illnesses that make it tough to move around
Stroke or other illnesses that can leave you paralyzed and may make clear thinking more difficult
A fall or other injury
Staying Warm Inside
Being in a cold building can also cause hypothermia. In fact, hypothermia can happen to someone in a nursing home or group facility if the rooms are not kept warm enough. People who are already sick may have special problems keeping warm. If someone you know is in a group facility, pay attention to the inside temperature and to whether that person is dressed warmly enough.
Even if you keep your temperature between 60 °F and 65 °F, your home or apartment may not be warm enough to keep you safe. For some people, this temperature can contribute to hypothermia. This is a special problem if you live alone because there is no one else to feel the chilliness of the house or notice if you are having symptoms of hypothermia. Set your thermostat for at least 68 °F to 70 °F. If a power outage leaves you without heat, try to stay with a relative or friend.
You may be tempted to warm your room with a space heater. But, some space heaters are fire hazards, and others can cause carbon monoxide poisoning. The Consumer Product Safety Commission has information on the use of space heaters, but here are a few things to keep in mind:
Make sure your space heater has been approved by a recognized testing laboratory.
Choose the right size heater for the space you are heating.
Put the heater on a flat, level surface that will not burn.
Keep children and pets away from the heating element.
Keep things that can catch fire like paint, clothing, bedding, curtains, and papers away from the heating element.
If your heater has a flame, keep a window open at least one inch and doors open to the rest of your home for good air flow.
Turn the heater off when you leave the room or go to bed.
Make sure your smoke alarms are working.
Put a carbon monoxide detector near where people sleep.
Keep an approved fire extinguisher nearby.
Is There Help For My Heating Bills?
If you are having a hard time paying your heating bills, there are some resources that might help. If your home doesn’t have enough insulation, contact your state or local energy agency or the local power or gas company. They may be able to give you information about weatherizing your home. This can help keep the heating bills down. You might also think about only heating the rooms you use in the house. For example, shut the heating vents and doors to any bedrooms not being used. Also, keep the basement door closed.
If you have a limited income, you may qualify for help paying your heating bill. State and local energy agencies, or gas and electric companies, may have special programs. Another possible source of help is the Low Income Home Energy Assistance Program. This program helps some people with limited incomes who need help paying their heating and cooling bills. Your local Area Agency on Aging, senior center, or community action agency may have information on these programs.
Plan ahead for the cold weather. Make sure your furnace is working, and you have a warm coat, hat, and gloves in the closet. If necessary, get help with shoveling the ice or snow. Being prepared will help ensure a safe and warm winter.
For more information:
Eldercare Locator, 1-800-677-1116, www.eldercare.gov.
National Energy Assistance Referral Hotline (NEAR), 1-866-674-6327, www.acf.hhs.gov/programs/ocs/liheap.
National Association of Area Agencies on Aging, 1-202-872-0888, www.n4a.org.
Reprinted with permission from the National Institute on Aging; for more information, click here. |
Dear scientist, please be creative in using the materials of Science on a Table
in ways that suit you, your research, and your visitors.
What’s on the table?
In the center of the table, there is a model of the Sun and the Earth with the major Sun-observing satellites.
A box with data/images belongs to each satellite.
This setting provides the context of current solar observation.
It can be used to demonstrate space weather, for example.
What’s in the boxes?
The images in the boxes have been selected according to three criteria:
- Spectacular and beautiful images > triggering curiosity and imagination
- One solar event observed by all observatories > showing the broad range of data involved in research
- One image per year > showing active and passive phases of the eleven-year solar cycle
However, the images yield much more content for exploration, conversation and learning.
Visitors can discover solar phenomena such as sunspots, magnetic loops, eruptions, solar storms, auroras.
Visitors can compare data from different observatories and trace solar storms.
Not only are the images of the selected solar event compatible (selection criterion 2),
but so are those showing the solar cycle (selection criterion 3).
These images show why it takes different kinds of observations to get an encompassing picture of the Sun and its effects in space.
Visitors can observe phenomena related to the instruments:
– particles of a solar storm hitting SOHO and disturbing the image,
– an image cut in half due to a technical problem,
– an image with nothing on it because the instrument pointed to the wrong spot,
– a disrupted image because the satellite turned to the night side of the Earth,
– or no observation at an Earth based observatory due to poor weather conditions.
These images show some of the technological challenges involved in the observation of the Sun.
Based on these images, many interesting questions related to science and the conditions of research can be discussed.
However, some images are just incredibly beautiful and people may want to take them home.
Please give them the link of this website so they can print any of those they like.
This concept works for open, informal learning situations such as science festivals.
Should you wish to use Science on a Table for school visits, learning needs to be more structured.
We plan to develop a booklet for school activities as soon as there is an opportunity for doing so. |
Computer Processor : What is it?
A computer processor analyzes data and controls data flow. Also called the central processing unit (CPU), it is considered the “brain” of the computer because it performs the actual data processing. It carries out the instructions of a program sequentially to execute basic logical, arithmetic, and input-output operations. The term “computer processor” has been industry jargon since the 1960s. While the design, form, and implementation of processors have changed radically since then, their fundamental operation is much the same.
Kinds of Computer Processors include the following:
Intel® computer processor
Intel has been using a multi-core approach to improve performance and keep up with constant upgrades in the industry. A “multi-core” processor is a chip containing more than one microprocessor core, thereby multiplying potential performance.
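To illustrate why multiple cores matter, here is a minimal Python sketch (the workload and chunk boundaries are invented for the example) that splits a CPU-bound job across all available cores using the standard multiprocessing module:

```python
import multiprocessing as mp

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    cores = mp.cpu_count()  # logical cores; Hyper-Threading can report twice the physical count
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(cores)]
    with mp.Pool(cores) as pool:  # one worker process per core
        total = sum(pool.map(count_primes, chunks))
    print(f"Found {total} primes using {cores} cores")
```

On a quad-core chip with 8-way multitasking, such as the Core i7 described below, cpu_count() would typically report eight logical cores, though the gain from the extra logical cores is usually smaller than from the four physical ones.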
The latest and fastest Intel computer processor for demanding applications is the Core i7, a quad-core processor featuring 8-way multitasking capabilities. This second generation processor boasts the Intel Turbo Boost Technology 2.0 and Intel Hyper Threading Technology, which enables security protocols and applications to run very efficiently in the background without affecting productivity. It also boasts the Intel HD Graphics 2000 Technology, which renders discrete graphics cards non-essential, thereby reducing system cost and power consumption.
Intel Atom, meanwhile, is a line of ultra-low voltage computer processors designed for netbooks, mobile internet devices, and everyday PCs.
AMD® computer processor
The AMD computer processor is exclusively made by Advanced Micro Devices, Inc. (AMD). The latest is Bobcat, an x86 processor core used together with GPU cores in Accelerated Processing Units under the "AMD Fusion" brand. It is designed for low-power devices such as netbooks and similar consumer electronics. AMD is looking to release Bulldozer in the second quarter of 2011.
Quality: Choose a high-end computer processor if you use heavy applications, such as design software and games. It usually costs more, but it provides higher-quality images and graphics.
Model: If you can afford it, choose a fairly recent computer processor so you do not have to upgrade for at least 2 years. If the latest is too expensive, opt for an 8- to 12-month old processor, which can save you hundreds of dollars.
Compatibility: Check the maximum processor your motherboard can support. Check the documentation, or the manufacturer name and model usually appearing on the upper side of the motherboard. It’s easy to find information for newer models online. You can also find datasheets for older models.
Recommended computer processors
Wetlands: Here All Year? Session 1
Conduct this session in the classroom.
- Place the sponge in the shallow dish. Pour water into the dish, and ask students what is happening to the water. Compare the sponge to a wetland, explaining that soil in a wetland soaks up water just as the sponge does. Provide students with a general description of wetlands based on the information in the lesson Background. Include in the description the fact that a wetland may be covered by a shallow layer of water but is not deep enough to be called a pond or lake. Explain that some wetlands are wet all the time and others only part of the time.
- Divide the class into six groups. Each group will be studying one type of wetland: bog, freshwater marsh, saltwater marsh, wet meadow, shrub wetland, and tree swamp. Provide each group with several copies of the information sheet on their assigned wetland.
- Explain that as they read about their wetland, students should take notes on the amount and kind of water (fresh, salt, or brackish) in their wetland. Is it wet all the time? If only some of the time, when? Students should also list plants and animals that can be found in their wetland. Each group will use this information to create a diorama of their wetland.
- When groups have finished taking notes, they will begin preparations for their dioramas. Make available wildlife guides on birds, insects, mammals, reptiles, amphibians, trees, and wildflowers, and instruct students to draw the plants and animals found in the wetland. (Drawings will be cut out and placed in the dioramas, so they should be small enough to fit in the boxes.) Tell students to write a number on each plant type and a letter on each animal type. The dioramas will be assembled in Session 2 of the lesson. |
- Length: 21.3–23.2 in
- Wingspan: 34.6–37.4 in
- Weight: 25.4–57.8 oz
- About the same size as a Mallard; slightly larger than a Gadwall.
- Canard noir (French)
- Ánade sombrio americano (Spanish)
- As soon as their down feathers dry, newly hatched ducklings are able to leave the nest, a depression on the ground lined with plant materials. They follow their mother to rearing areas with a lot of invertebrates to eat and plenty of vegetation for cover.
- Normally found in eastern North America, American Black Ducks occasionally show up on the West Coast, Europe, and even Asia. Some of these birds may be escaped pets, but others are known to be wild ducks: for instance, one female banded in New Brunswick, Canada, turned up later in France.
- Pleistocene fossils of American Black Ducks, at least 11,000 years old, have been unearthed in Florida and Georgia.
- The oldest American Black Duck on record was 26 years, 5 months old.
American Black Ducks breed mostly in freshwater wetlands throughout northeastern North America, including beaver ponds, brooks lined by speckled alder, shallow lakes with reeds and sedges, bogs in boreal forests, and wooded swamps. They may also nest in saltmarshes. They mostly spend the winter in saltwater wetlands, but also in beaver ponds, flooded timber, agricultural fields, and riverine habitats. They often take refuge from hunting and other disturbances by moving to fresh and brackish impoundments on conservation land.
American Black Ducks eat mostly plant matter, with insects added during the breeding season. Plant foods include seeds, roots, tubers, stems and leaves of plants growing in moist soil and underwater. In the breeding season adults and ducklings eat a diet high in animal foods, including aquatic insects (larvae of mayflies, caddisflies, dragonflies, flies and midges, and beetles), crustaceans, mollusks, and sometimes fish. They forage individually or with their mates in the nesting season, and either alone or in small groups during the rest of the year. In shallow water they forage like typical “dabbling ducks” by submerging their heads, or tipping up to reach underwater food. In deeper water they may dive more than 12 feet deep for plant tubers and other food items. On migration they eat seeds, foliage, and tubers of aquatic plants, agricultural grains, seeds and fruits of wild terrestrial plants, invertebrates, and sometimes fish and amphibians. Wintering birds eat mostly plant parts in freshwater habitats, adding foods such as mussels, zooplankton, and small fish in marine habitats.
- Clutch Size
- 6–14 eggs
- Number of Broods
- 1 brood
- Egg Length
- 2.2–2.5 in
- Egg Width
- 1.6–1.8 in
- Incubation Period
- 23–33 days
- Nestling Period
- 1 day
- Egg Description
- White, cream-colored, or pale greenish buff.
- Condition at Hatching
- Well developed and covered with down.
The female builds the nest on her own, digging with her feet and bill in leaf litter or soil to form a basin 7–8 inches across and 1.5 inches deep. While laying eggs, she adds plant material gathered within reach of the nest, including grass, twigs, leaves, stems, and conifer needles. By the fourth or fifth egg she starts adding down feathers plucked from her own body, until the clutch is fully covered at the beginning of incubation.
The female selects a well-concealed site, often on the ground, on wooded or grassy islands, uplands, marshes, cultivated croplands, or cropland borders. She may choose a site in a shrubby area, a brush pile, a hay bale, a patch of grass, or a rock crevice. Nests are sometimes in crotches, hollows, or cavities of large trees.
American Black Ducks are slow, heavy fliers but excellent swimmers, diving to avoid predators and sometimes to find food. Mates are monogamous within each breeding season, and the pairs may stay together in subsequent years. They court and form strong pair bonds in the fall and winter before migrating to breeding grounds. Nesting starts in February in the southern part of their range, but often not until late May in the northern part. The female incubates the eggs while the male defends the territory. Pairs nest near each other fairly peacefully unless one male or pair intrudes; then the territorial male threatens, chases, and fights the intruders. Halfway through incubation, the male becomes less attentive and eventually abandons the nest. The ducklings hatch all within a few hours, and once they are dry the female leads the brood to rearing areas with a lot of invertebrates and plant cover. In early September, after molting, the adult and fledgling ducks join up near breeding areas and begin to migrate south.
American Black Ducks are common, but the North American Breeding Bird Survey recorded a decline of about 84% between 1966 and 2014. Since 2004, declines have slowed down. They are not on the 2014 State of the Birds Watch List. Farming, logging, and urbanization in this species’ breeding and wintering habitats, both inland and on the coast, may have contributed to the fall in numbers. Duck hunters intensively exploited American Black Ducks for decades, shooting an estimated 800,000 per year in the 1960s and 1970s. In 1982 the Humane Society of the U.S. pursued a lawsuit that led to stringent hunting restrictions the following year, namely a 30-day season with a one-bird limit per day. Annual harvest in the 1990s was estimated at 166,000 in the U.S., and today records indicate that figure is about 115,000 per year. American Black Ducks are warier than many other duck species, such as Mallards, and thus less tolerant of disturbance. Mallards may have even contributed to the decline in black ducks, since Mallards thrive under urban conditions and may oust their shyer cousins from the habitat (as well as altering local populations by hybridizing with them). Like other aquatic animals, American Black Ducks are sensitive to pollution and runoff that degrade water quality. In the mid-twentieth century the pesticide DDT contributed to eggshell thinning. American Black Ducks also are vulnerable to lead poisoning when they eat spent lead shot while foraging in wetlands.
- Longcore, J. R., D. G. McAuley, G. R. Hepp, and J. M. Rhymer. 2000. American Black Duck (Anas rubripes). In The Birds of North America, No. 481 (A. Poole and F. Gill, eds.). The Birds of North America Online, Ithaca, New York.
- North American Bird Conservation Initiative, U.S. Committee. 2014. State of the Birds 2014 Report. U.S. Department of Interior, Washington, DC.
- Raftovich, R.V., K.A. Wilkins, S.S. Williams, H.L. Spriggs, and K.D. Richkus. 2011. Migratory bird hunting activity and harvest during the 2009 and 2010 hunting seasons [PDF]. U.S. Fish and Wildlife Service, Laurel, MD.
- U.S. Fish and Wildlife Service. 2012. Harvest Information Program.
- U.S. Department of the Interior, USGS Patuxent Wildlife Research Center. 2015. Longevity records of North American Birds.
- USGS Patuxent Wildlife Research Center. 2014. North American Breeding Bird Survey 1966–2014 Analysis.
Resident or short-distance migrant. Individuals that breed in northwestern Ontario and Quebec migrate the longest distances, 700–800 miles. Individuals in other populations may stay in one place all year or move short distances to avoid freezing water. American Black Ducks migrate at night in small flocks of 12–30, though flocks of several thousand may take off from staging areas in the fall when cold fronts arrive.
Find This Bird
Look for American Black Ducks in both fresh and saltwater in eastern North America, where they will look like female Mallards except with an olive-yellow bill and overall darker, higher-contrast plumage. They prefer protected bodies of water such as saltmarshes and ponds, and frequently mix with other species of ducks, especially Mallards. Among flocks of Mallards, look for a darker, colder-toned duck of similar size; in flight, the white underwings of American Black Ducks form a brighter, more contrasting flash than on a flying Mallard. Because these two species frequently hybridize in eastern North America, be aware that you may see individuals with intermediate characters, such as a dark body and a partially green head. |
The origin of the word "Texas" reflects the state's history of interaction between Native Americans and Spanish explorers, as well as the colonists and settlers who would later call the state home.
Teshas to Texas
The language of the Native American Caddo tribe of southeastern Texas inspired the state name we know today. During the 1540s, when Spanish explorers encountered the tribe, the Caddo attempted to communicate to the newcomers that they would be their friends, or "teyshas." The confused explorers believed the Caddo were actually stating their tribal name, and proceeded to identify them as the "Tejas" or "Teyas." In time, the word "Tejas" was used to describe the land north of the Rio Grande and east of New Mexico. However, English-speaking settlers around the territory often mispronounced the Spanish "Tejas" as "Teksas." When the area eventually became part of the United States, the pronunciation stuck in the form of the word "Texas."
Electron superhighway paves way for quantum computer
Rice University physicists have created a tiny "electron superhighway" that could help scientists design a quantum-based computer at some point in the future.
The device - which is described as a "quantum spin Hall topological insulator" - is one of the essential building blocks needed to create quantum particles capable of storing and manipulating data.
According to Rice physicist Rui-Rui Du, today's computers use binary bits of data that are either ones or zeros. However, quantum-powered computers would use quantum bits, or "qubits," which can be both ones and zeros at the same time, thanks to the quirks of quantum mechanics.
This quirk would provide quantum computers with a huge edge in performing particular types of calculations, including code-breaking, climate modeling and biomedical simulation.
"In principle, we don't need many qubits to create a powerful computer," explained Du. "In terms of information density, a silicon microprocessor with 1 billion transistors would be roughly equal to a quantum processor with 30 qubits."
Perhaps not unsurprisingly, researchers have adopted various approaches to creating qubits. Regardless of the approach, a common problem is making certain that information encoded into qubits isn't lost over time due to quantum fluctuations.
The approach followed by Du and colleague Ivan Knez is known as "topological quantum computing," which is expected to be more fault-tolerant than other types of quantum computers. This is because each qubit in a topological quantum computer will be made from a pair of quantum particles that have a virtually immutable shared identity.
Unfortunately, the catch to the topological method is that physicists have yet to create or observe one of these stable pairs of particles, which are known as "Majorana fermions." The elusive Majorana fermions were first proposed in 1937, although the race to create them in a chip has just begun.
Physicists now believe the particles can be made by "marrying" a two-dimensional topological insulator - like the one created by Du and Knez - to a superconductor. Topological insulators are peculiar materials: although electricity cannot flow through their interior, it can flow around their narrow outer edges.
As such, if a small square of a topological insulator is attached to a superconductor, the elusive Majorana fermions are expected to appear precisely where the materials meet. If this proves true, says Knez, the devices could potentially be used to generate qubits for quantum computing.
Knez spent more than a year refining the techniques to create Rice's topological insulator. The device is made from a commercial-grade semiconductor that's commonly used in designing night-vision goggles.
"We are well-positioned for the next step... Meanwhile, only experiments can tell whether we can find Majorana fermions and whether they are good candidates for creating stable qubits," added Du. |
During the difference engine’s era, gears had to be changed manually to carry out a calculation. All of that changed when electrical signals replaced physical motion in ENIAC, a machine built for the US Government and completed in 1945. ENIAC could also be programmed, although initially this meant rewiring it by hand rather than loading stored instructions.
To make programming faster, two vital concepts that directly influenced programming languages were developed in 1945 by John von Neumann, then at the Institute for Advanced Study. The first, known as the shared-program technique, dictated that the hardware should be simple and need not be hand-wired for every program. Instead, complex instructions controlled the simple hardware, which made reprogramming much quicker.
The second concept, called "conditional control transfer," introduced logical branching and gave birth to blocks of code that could be executed in different orders, the so-called subroutines. With this, the idea of code blocks that could be used and reused was born.
By 1949, the Short Code language appeared, the first computer language for electronic devices. It required the programmer to write statements as 0s and 1s rather than in readable form. In 1951, Grace Hopper developed A-0, an early compiler that translated programmers' statements into the 0s and 1s the computer needed, paving the way for much quicker programming.
FORTRAN (FORmula TRANslating System), introduced in 1957, was the first major language. It was designed at IBM for scientific computation and included GOTO, DO, and IF statements. FORTRAN was good at handling numbers, but it was not suited to business computing.
COBOL was then developed in 1959. It was designed as a businessman's language. A COBOL program was comparable to an essay, with four to five sections composing a whole, which made it easier to study.
The LISP language, developed in 1958 by John McCarthy for artificial-intelligence research, used a notation also known as Cambridge Polish. The language is highly abstract yet expressive, which is why it is still being used today. LISP can store lists and modify them on its own.
In that same year, the Algol language was produced. It became the mother of Pascal, C and C++, and also Java. Algol also had the first formal grammar, called the Backus-Naur form, or BNF. Algol 68, the next version, was harder to use, and this difficulty helped bring Pascal into existence.
Niklaus Wirth introduced the Pascal language in 1968, primarily as a teaching language. It combined features of ALGOL, FORTRAN, and COBOL, and it also improved on the pointer data type. Its downfall was caused by its lack of variable groups. Modula-2 then appeared, but by that point C was already popular among many users.
C, written by Dennis Ritchie in 1972 and used to build Unix, was comparable to Pascal, but its precursors were B and BCPL. It is still used in Windows, Linux, and Mac OS. Object-oriented programming (OOP) was developed through the 1970s and into the '80s and grew into the C++ language in 1983, a language able to handle many kinds of tasks at once. C++ also became, for a time, the language chosen for AP Computer Science courses. In 1987, Perl (Practical Extraction and Reporting Language) was developed.
Java soon followed in 1994. Microsoft has also developed VB, or Visual Basic, which uses widgets; both are now widely used.
The future holds many more developments for computer programming. It may have started with crude methods, but looking at the languages in use today, there have been so many developments that we can only wonder what 'impossibilities' could be made possible very soon.
GHS: Hazard Communication Labelling
The development of a harmonized hazard communication system, including labelling, safety data sheets and easily understandable symbols is one of the major objectives of the GHS.
To that end, the GHS includes appropriate labelling tools to convey information about each of the hazard classes and categories.
Using symbols, signal words or hazard statements, other than those which have been assigned to each of the GHS hazard classes and categories would be contrary to harmonization.
Nevertheless, it is recognized that in some circumstances the demands and rationale of systems may warrant some flexibility in whether to incorporate certain hazard classes and categories for certain target audiences.
The GHS takes into consideration the needs of the target audience that will be the primary end-users of the harmonized communication scheme as well as the manner in which these audiences will receive and use the information.
Additionally, the overlapping needs of target audiences were also taken into consideration. Those audiences include:
- Employers and workers need to know about the specific hazards related to the chemicals used or handled in the workplace, and about specific protective measures required to avoid adverse effects.
- Packaging and storing of a chemical can minimize hazards; however, workers and emergency responders need to know what mitigation measures are appropriate in case of an accident – i.e. they may require information that can be read at a distance.
- In addition to the label, additional information is available to workers through the SDSs and workplace risk management system.
- The label is likely to be the sole source of information readily available to the consumer.
- Consumer education is more difficult and less efficient than education for other audiences.
- The issue of comprehensibility is of particular importance for this target audience since they may rely solely on label information.
- Emergency responders need information on a range of levels, including accurate, detailed and sufficiently clear information to facilitate an immediate response.
- Fire fighters need information that can be seen and understood at a distance, such as graphical and coded information.
- Wide-range of target audiences, especially transport workers and emergency responders.
- Information needed by different transport workers is dependent upon the type of work done and amount of contact they will have with hazardous items.
- i.e. Drivers may need only limited information unless they are also responsible for the loading and unloading of packages and/or filling of tanks.
Comprehensibility / Translation / Standardization
One aim of the GHS is to present information in a manner that the intended audience can easily understand. The following principles are meant to assist the communication process:
a. Information should be conveyed in more than one way
b. Comprehensibility of components of the system should take into account existing evidence from studies, literature and tests
c. Phrases used to indicate degree of severity should be consistent across different hazard types. (This was a hotly debated topic – since it is difficult to directly compare physical hazards. In the end, it was felt that it is possible to help audiences put the degree of hazards into context and convey the same degree of concern)
Consideration was given to the comprehensibility of translated words to ensure they conveyed the same meaning.
For labels, the hazard symbols, signal words and hazard statements have all been standardized and assigned to each of the hazard categories and these should appear on GHS labels as indicated for each hazard class.
The GHS recognizes that other label elements may need to appear that have not been standardized, i.e. precautionary statements.
However, to prevent unnecessary variation, it is recommended that supplementary information be limited to:
a. Further detail that does not contradict or cast doubt on validity of hazard information
b. Information about hazards not yet incorporated into the GHS
The GHS also accepts that the Labeller should have the option of providing supplementary information related to the hazard in the hazard statement rather than in the supplementary information section on the label.
The GHS stipulates that all systems should specify a means of responding to new information and updating labels and SDS information.
Suppliers should respond to new and significant information about a chemical hazard by updating the label and SDS.
This applies to any information that changes GHS classification of the substance or mixture and leads to a change in the information provided on labels and SDSs.
This could include new information on a potential adverse chronic health effect – even if it hasn’t yet triggered a change in classification.
Updating should be carried out promptly on receipt of information that necessitates the revision.
Suppliers should also periodically review information on which labels and SDS for substances are based.
Competent authority may choose to specify a time (3-5 years) from the date of original preparation within which suppliers should review information.
Confidential Business Information
Systems should consider confidential business information (CBI); however, CBI provisions should not compromise the health and safety of workers, and competent authorities should consider:
a. Whether inclusion of chemicals in CBI is appropriate to the needs of the system.
b. What definition of CBI should apply – taking account of factors like accessibility of information to competitors, intellectual property rights, and potential harm disclosure would cause employer or suppliers business.
c. Appropriate procedures for disclosure of CBI where necessary
Provisions for CBI in different systems should be consistent with following principles
a. For information otherwise required on labels or SDS, CBI claims should be limited to the names of the substances and their concentrations in mixtures – all other information should be disclosed on label or SDS as required
b. Where CBI has been withheld should be indicated on label and SDS
c. CBI should be disclosed to competent authority upon request, which should then protect confidentiality of the information.
d. Where a medical emergency exists (as determined by medical professional), timely disclosure of CBI should be assured.
e. For non-emergency situations, CBI should be disclosed to safety or health professional providing medical or other safety and health services.
f. Where non-disclosure of CBI is challenged, competent authority should address such challenges or provide process for challenges. Supplier or employer should be responsible for supporting the assertion that information qualifies for CBI protection.
Training is an integral part of hazard communication and should be appropriate and commensurate with the nature of the work and exposure.
Procedures for preparing labels in the GHS:
a. Allocation of label elements
b. Reproduction of the symbol
c. Reproduction of the hazard pictogram
d. Signal words
e. Hazard statements
f. Precautionary statements and pictograms
g. Product and supplier identification
h. Multiple hazards and precedence of information
i. Arrangements for presenting the GHS label elements
j. Special labelling arrangements
The GHS provides tables for each hazard class. The tables detail the label elements (symbol, signal word, hazard statement) that have been assigned to each of the hazard categories of the GHS.
Following are the hazard symbols which should be used in the GHS.
All of the symbols, aside from the environment symbol, are part of the standard symbols used in the UN recommendations on the transport of Dangerous Goods model regulations.
Pictograms and Reproduction of Hazard Pictograms
Pictogram means a graphical composition that may include a symbol plus other elements, such as a border, background pattern or color that conveys specific information.
All hazard pictograms should be in the shape of a square set on a point (diamond).
For transport, the pictograms prescribed by the UN Model Regulations on the Transport of Dangerous Goods should be used.
Transport pictograms must have minimum dimensions of 100mm by 100mm, with exceptions for smaller pictograms on very small packaging and gas cylinders.
Transport pictograms have symbol on upper half of label. The pictograms should be affixed to a background of contrasting colour.
Pictograms prescribed by the GHS, but not the transport pictograms, should have a black symbol on white background with a red frame sufficiently wide enough to be clearly visible.
Note: The GHS gives the option of using a black border for packages that will not be exported. (OSHA has indicated it plans to require a red-framed border, whether the package is for domestic or international use.)
Competent authorities may choose to use a transport pictogram outside of an area not covered by transport regulations – such as the exclamation point which is used for skin irritant.
Allocation of label elements
On packages covered by the UN Model Regulations on the Transport of Dangerous Goods: where a transport pictogram appears, a GHS pictogram for the same hazard should not appear.
GHS pictograms not required for transport should not be displayed on freight containers, road vehicles or railway cars.
GHS Label Requirements
Information required on a GHS label (a sketch collecting these elements into one structure follows this list):
1. Signal words
a. A word used to indicate the relative level of severity of hazard and alert the reader to a potential hazard on the label. Signal words used in the GHS are “Danger” and “Warning.” Danger is used for the more severe hazard categories, and a signal word is assigned to each hazard category.
2. Hazard Statements
a. A phrase assigned to a hazard class and category that describes the nature of the hazards of a hazardous product, including when appropriate, the degree of the hazard.
b. Hazard statement and code: Hazard statement codes are intended to be used for reference purposes – they are not part of the text and should not be used to replace it.
3. Precautionary statements
a. Phrase (and/or pictogram) that describes the recommended measures that should be taken to minimize or prevent adverse effects resulting from exposure to a hazardous product. GHS label should include appropriate precautionary information, the choice of which belongs to the labeler or competent authority.
b. Precautionary codes are used to uniquely identify precautionary statements and are for reference purposes – they are not part of the precautionary text and should not be used to replace it.
4. Product Identifier
a. Product identifier should be used and it should match product identifier used on the SDS. If mixture is covered by UN Model regulations for transport of Dangerous goods, UN proper shipping name should also appear on package
b. The label for a substance should include the chemical identity of the substance. For mixtures and alloys, the label should include the chemical identities of all ingredients or alloying elements that contribute to acute toxicity, skin corrosion or serious eye damage, germ cell mutagenicity, carcinogenicity, reproductive toxicity, skin or respiratory sensitization, or specific target organ toxicity (STOT), when these hazards appear on the label.
c. Where a substance or mixture is supplied exclusively for workplace use, competent authority may choose to give suppliers discretion to include chemical identities on the SDS, in lieu of including them on labels.
d. The competent authority rules for CBI take priority over the rules product identification and ingredients meeting criteria for CBI do not have to be included on the label
5. Supplier identification
a. Name, address and telephone number of the manufacturer or supplier of the substance or mixture should be provided on the label.
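As an illustration (the field names and example values below are my own, not prescribed by the GHS), the five required elements might be collected in a structure like this:

```python
# Hypothetical container for the five required GHS label elements.
# Field names and example values are illustrative, not mandated by the GHS.
ghs_label = {
    "signal_word": "Danger",                   # "Danger" or "Warning", never both
    "hazard_statements": ["H314"],             # assigned per hazard class and category
    "precautionary_statements": ["P280"],      # chosen by the labeller or competent authority
    "product_identifier": "Sodium hydroxide",  # must match the identifier on the SDS
    "supplier": {                              # name, address, and telephone number
        "name": "Example Chemical Co.",
        "address": "123 Example St, Springfield",
        "telephone": "+1-555-0100",
    },
}
```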
Substances or Mixtures with More than One GHS Hazard
There is a precedence of symbols depending upon the use and audience. For substances and mixtures covered by the UN Model Regulations, the precedence of symbols for physical hazards should follow the rules of the UN Model Regulations. In workplace situations, the competent authority may require all symbols to be used. For health hazards, the following precedence rules apply:
a. If skull and crossbones applies, the exclamation mark should not appear [as used for acute toxicity]
b. If corrosive symbol applies, exclamation mark should not appear as used for skin or eye irritation
c. If health hazard symbol appears for respiratory sensitization, the exclamation mark should not appear where used for skin sensitization or skin or eye irritation.
Precedence for allocation of signal words
a. If signal word Danger appears, Warning should not appear
Precedence for hazard statements
a. All assigned hazard statements should appear on the label – except as provided below. Competent Authority may specify order.
b. To avoid duplication or redundancy, the following rules may be applied (a code sketch of these rules follows the list):
i. If statement H410 “very toxic to aquatic life with long lasting effects” is assigned, statement H400 “very toxic to aquatic life” may be omitted.
ii. If H411 “toxic to aquatic life with long lasting effects” is assigned, H401 “toxic to aquatic life” may be omitted.
iii. If H412 “harmful to aquatic life with long lasting effects” is assigned, H402 “harmful to aquatic life” may be omitted.
iv. If H314 “causes severe skin burns and eye damage” is assigned, H318 “causes serious eye damage” may be omitted.
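To make the redundancy rules concrete, here is a minimal Python sketch (not an official tool; the statement codes come from the list above) that drops the subsumed hazard statements:

```python
# (stronger, weaker) pairs: when the stronger statement is assigned,
# the weaker one may be omitted, per rules i-iv above.
REDUNDANT_PAIRS = [
    ("H410", "H400"),  # very toxic to aquatic life, long lasting effects
    ("H411", "H401"),  # toxic to aquatic life, long lasting effects
    ("H412", "H402"),  # harmful to aquatic life, long lasting effects
    ("H314", "H318"),  # severe skin burns/eye damage covers serious eye damage
]

def prune_hazard_statements(assigned):
    """Remove hazard statements made redundant by a stronger statement."""
    pruned = set(assigned)
    for stronger, weaker in REDUNDANT_PAIRS:
        if stronger in pruned:
            pruned.discard(weaker)
    return pruned

print(sorted(prune_hazard_statements({"H410", "H400", "H314", "H318"})))
# -> ['H314', 'H410']
```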
Arrangements for Presenting the GHS Label Elements
Location of GHS information on the label
GHS hazard pictograms, signal word and hazard statements should be located together on the label. Competent authority may choose to provide specified layout for the presentation of these and for the precautionary information or allow supplier discretion.
Competent Authority may allow supplemental information to be used on label – in such instances they may also choose where it appears – in any case, it should not impede identification of GHS information
Color can be used on other areas of the label besides the pictogram as allowed by competent authorities
Labelling of Small Packages
a. All applicable GHS label elements should appear on the immediate container of a hazardous substance or mixture where possible
b. Where impossible to put all the applicable label elements, other methods of providing the full hazard information should be used in accordance with the definition of Label in GHS – factors influencing this include:
i. Shape, form or size of immediate container
ii. Number of label elements to be included
iii. Need for label elements to appear in more than one official language
c. Where the volume of a hazardous substance or mixture is so low that there is no likelihood of harm to human health and/or the environment, the label may be omitted with the competent authority's approval.
d. Certain label elements may be omitted with permission from competent authority where volume of substance or mixture is below a certain amount
e. Some labelling elements on the immediate container may need to be accessible throughout the life of the product, e.g. for continuous use by workers or consumers
Special labeling arrangements
The competent authority may allow certain hazard information for carcinogens, reproductive toxicity, and specific target organ toxicity through repeated exposure to be communicated on the label and SDS, or on the SDS alone; the same applies to metals and alloys supplied in massive, non-dispersible form.
Products falling within the scope of GHS will carry the GHS label at the point where they are supplied to the workplace, and that label should be maintained on the supplied container in the workplace.
The GHS label or label elements should also be used for workplace containers
However, competent authority can allow employers to use alternative means of giving workers the same information in a different format when appropriate to the workplace and it communicates the information as effectively as the GHS label – for instance info could be displayed in the work area, rather than on individual containers.
Alternative means of communicating hazards are needed usually where hazardous chemicals are transferred from original supplier containers to secondary containers or where chemicals are produced in a workplace but are not packaged in containers for intended for sale or supply.
In many situations it is impractical to produce complete GHS labels and attach it to the container due to size limitations or access to a process container (e.g. containers for laboratory testing, storage vessels, piping or process reaction systems). In such instances, systems should ensure there is clear hazard communication and workers should be trained to understand the specific communication methods used in a workplace.
See GHS Revision 3 for examples of how communications can be handled in such situations.
Consumer product labelling based on the likelihood of injury
Competent authorities may allow labelling based on likelihood of harm, rather than based on hazard for consumer labelling.
Tactile warnings, if used, should conform to ISO 11683:1997, “Tactile warnings of danger: Requirements.”
Learn more about the GHS by clicking on the links below:
GHS Answer Center
10 GHS Facts in 60 Seconds
GHS 101: U.S. Adoption
GHS 101: An Overview
GHS 101: History of the GHS
GHS 101: Classification
GHS 101: Safety Data Sheets
GHS 101: Links to Useful GHS Info
GHS 101: GHS Definitions
5 Great Questions on GHS
GHS Transport Pictograms
Access the UN’s GHS Third Revision by Clicking the Links Below:
- Foreword and table of contents
- Part 1
- Part 2
- Part 3
- Part 4
- Annex 1: Allocation of label elements
- Annex 2: Classification and labelling summary tables
- Annex 3: Codification of hazard statements, codification and use of precautionary statements and examples of precautionary pictograms
- Annex 4: Guidance on the preparation of Safety Data Sheets
- Annex 5: Consumer product labelling based on the likelihood of injury
- Annex 6: Comprehensibility testing methodology
- Annex 7: Examples of arrangements of the GHS label elements
- Annex 8: An example of classification in the Globally Harmonized System
- Annex 9: Guidance on hazards to the aquatic environment
- Annex 10: Guidance on transformation/dissolution of metals and metal compounds
Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change – commonly assumed to be responsible – could not have been the culprit.
There had been some speculation that climate change had led to a collapse of Bronze Age empires with substantial decrease in the human population.
But a much more granular examination of the climate record and human activities shows that wetter weather intruded several generations after the collapse.
So what caused the collapse? New technology.
According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.
The necessary ingredients for a Bronze Age culture required large, complex societies, and the only way humans could effectively sustain these at the time was through hierarchical, authoritarian organization.
Iron, by contrast, could be made almost anywhere with the right technology. Iron ore is found throughout the world, meaning that not only was a stronger metal available, but it was available to a larger, more distributed population, so hierarchy was no longer necessary for the domination of others.
The collapse of Bronze Age hierarchical societies may well have stemmed from their inability to deal with the distributed democracies of iron, resulting in warfare that sealed their fall.
So we see a lull in human activity until Iron Age civilizations rebuilt the hierarchies.
What is a quadratic polynomial? A quadratic polynomial is simply a polynomial of degree 2, and setting one equal to zero gives a quadratic equation. So is there an equation of degree 2? Yes, and it is precisely the quadratic equation.
In a quadratic equation written in the standard form ax^2 + bx + c = 0, the leading coefficient a must be nonzero (otherwise the equation is not quadratic), while the coefficient b and the constant term c may each be positive, negative, or zero. A quadratic equation always has exactly two solutions when counted with multiplicity, and these solutions may be real numbers or complex numbers.
Which kind you get is determined by the discriminant, b^2 - 4ac. If the discriminant is positive, there are two distinct real solutions. If it is zero, there is one repeated real solution. If it is negative, the two solutions are a pair of complex conjugates.
A quadratic equation's solutions are given by the quadratic formula: x = (-b ± √(b^2 - 4ac)) / (2a). The formula is derived by completing the square on ax^2 + bx + c = 0, and it covers every case, real or complex.
Besides the formula, there are other ways to find solutions: factoring the polynomial, completing the square directly, or using a numerical method. So there are many ways to find solutions to a quadratic equation.
For example, x^2 - 5x + 6 = 0 has discriminant 25 - 24 = 1, which is positive, so it has two real solutions, x = 2 and x = 3 (indeed, the polynomial factors as (x - 2)(x - 3)). By contrast, x^2 + 1 = 0 has discriminant -4, so its solutions are the complex pair x = i and x = -i.
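The quadratic formula translates directly into a few lines of code. Here is a minimal Python sketch (the function name is my own) that uses the standard-library cmath module so that negative discriminants and complex roots are handled automatically:

```python
import cmath  # complex math: sqrt of a negative number yields a complex result

def solve_quadratic(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0 (a must be nonzero)."""
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic equation")
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -5, 6))  # ((3+0j), (2+0j)): two real roots
print(solve_quadratic(1, 0, 1))   # (1j, -1j): a complex-conjugate pair
```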
| Lesson 9 || Networking standards |
| Objective || Describe the importance of networking standards in e-Commerce solutions. |
Key networking standards
A protocol is a specific implementation of a subset of the Open Systems Interconnection (OSI)
reference model. Protocols as they apply to networking are usually broken into four categories, including:
- LAN protocols
- WAN protocols
- Routing protocols
- Network protocols
Largely, protocol categories match the subgroups in our networking discussion.
There are seven layers in the OSI model. The higher layers (5 through 7) involve applications and thus are software oriented.
The lower levels involve the physical implementation, and are known as the data transport layers.
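For reference, the seven layers, from bottom to top, are: Physical (1), Data Link (2), Network (3), Transport (4), Session (5), Presentation (6), and Application (7).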
Click the Detail button to learn more about standards related to OSI.
No other area of computer-based technology rests more on standards than networking. In terms of protocols, the most important networking standards for e-Commerce and the Web include TCP/IP and HTTP.
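As a small illustration of those two standards working together, here is a minimal Python sketch that speaks HTTP over a raw TCP/IP connection using only the standard library (the host name is a placeholder):

```python
import socket

HOST = "www.example.com"  # placeholder host; substitute any web server

# HTTP: a plain-text, application-layer request.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# TCP/IP: open a reliable byte stream to port 80 and send the request over it.
with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

print(reply.split(b"\r\n", 1)[0])  # status line, e.g. b'HTTP/1.1 200 OK'
```

The split of duties is the point: TCP/IP delivers the bytes reliably, while HTTP defines what the bytes mean.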
The list of networking standards and protocols, from networking hardware to software, is endless. There is no way, given the breadth of this course, that we can dive in deep enough to provide you with a comprehensive understanding of all the network standards.
We do suggest that the e-Commerce architect possess a solid understanding of most of the networking standards. There are plenty of public sources of education that go into detail about networking standards.
Networking standards organizations
Several organizations participate in the networking standards process, and some of the key networking standards for which they are responsible are included in the Detail section. Click the Standards Detail button to explore some of these resources.
The OSI model:
All networking protocol and software standards, as well as hardware standards, operate within an extremely
important standard, known as the Open Systems Interconnection (OSI) reference model.
Bone spurs, or osteophytes, are bony projections that form along the joints and are often seen in conditions such as arthritis. Bone spurs can cause pain and are largely responsible for limitations in joint motion.
Bone spurs can form as the body responds to an abnormality around a joint. The most common cause is osteoarthritis, a degenerative disease in which the normal cartilage surrounding a joint is gradually worn away.1
As the protective cartilage is depleted and bone becomes increasingly exposed, the body responds with inflammation and changes to the structures around the joints. Ligaments thicken and deposits of calcium create new bone growth—what is known as a bone spur.1
The formation of spurs can be thought of as the body’s effort to increase the surface area of an exposed joint—a protective measure to better distribute any impact or force that may be applied to that joint. Unfortunately, it tends to have the opposite effect, restricting joint mobility while constricting nerves and other tissues servicing that joint.1
Bone spurs are also common in a non-inflammatory disease called diffuse idiopathic skeletal hyperostosis (DISH). While the cause of DISH is unknown, as many as 80% of people diagnosed with the disease will experience pain and stiffness as a result of the formation of spurs along the spine.2
Bone Spur Symptoms
Most bone spurs do not cause significant pain or problems. Even when there is pain, it may not be caused by the spur itself but rather by the underlying condition (arthritis, disease, degeneration).3
Bone spurs that form along the spine can result in an impingement in which a nerve is compressed by the bone overgrowth. In such a case, there can be pain felt in multiple parts of the body depending on which nerve line was affected. It can cause pain in the legs or arms as well as numbness and a prickly, pins-and-needles sensation in the feet or hands.4
The formation of osteophytes on the joints of the fingers (called Heberden's nodes and Bouchard's nodes) not only causes the typical swelling we associate with arthritis but also seriously limits the dexterity of the hands and fingers. Pain most often occurs during the earlier stages of arthritis (generally around middle age) and tends to subside at a later age.5
Although bone spurs themselves are often not problematic, they are indicative of an underlying problem that may need treatment. Changes in bone growth are often documented to help monitor and manage the severity of degenerative diseases such as arthritis. If there is pain, a non-steroidal anti-inflammatory drug (NSAID) like ibuprofen may be prescribed.6
In circumstances in which a bone spur seriously impacts a person’s ability to function, it may be removed. However, the majority of these spurs will return unless the underlying problem is somehow resolved. In cases of osteoarthritis, this may not be possible.6
Sometimes bone spurs around the fingers or toes (such as happens with hallux rigidus of the big toe) can be removed to improve motion and reduce pain.7
Often when people have arthroscopic shoulder surgery, such as with a rotator cuff repair, they will have a bone spur removed from around the rotator cuff, in a procedure known as a subacromial decompression.4
A Word From Verywell
Bone spurs can be a sign of damage or degenerative change within a joint. Bone spurs could also be a source of pain and deformity around the joint.
That said, the management of a bone spur requires management of the underlying condition. Simply removing a bone spur is often only a short-term solution. There are specific situations where your surgeon may remove a bone spur, but it is likely that over time the condition will return.
Vitamin D is a fat-soluble vitamin. Fat-soluble vitamins are stored in the body's fatty tissue.
Vitamin D helps the body absorb calcium. Calcium and phosphate are two minerals that you must have for normal bone formation.
In childhood, your body uses these minerals to build bone. If you do not get enough calcium, or if your body does not absorb enough calcium from your diet, bone production and bone tissue may suffer.
The body makes vitamin D when the skin is directly exposed to the sun. That is why it is often called the "sunshine" vitamin. Most people meet at least some of their vitamin D needs this way.
Very few foods naturally contain vitamin D. As a result, many foods are fortified with vitamin D. Fortified means that vitamins have been added to the food.
Fatty fish (such as tuna, salmon, and mackerel) are among the best sources of vitamin D.
Beef liver, cheese, and egg yolks provide small amounts.
Mushrooms provide some vitamin D. Some mushrooms you buy in the store have higher vitamin D content because they have been exposed to ultraviolet light.
Most milk in the United States is fortified with 400 IU vitamin D per quart. Most of the time, foods made from milk, such as cheese and ice cream, are not fortified.
Vitamin D is added to many breakfast cereals. It is also added to some brands of soy beverages, orange juice, yogurt, and margarine. Check the nutrition fact panel on the food label.
It can be hard to get enough vitamin D from food sources alone. As a result, some people may need to take a vitamin D supplement. Vitamin D found in supplements and fortified foods comes in two different forms:
- D2 (ergocalciferol)
- D3 (cholecalciferol)
Follow a diet that provides the proper amount of calcium and vitamin D. Your provider may recommend higher doses of vitamin D if you have risk factors for osteoporosis or a low level of this vitamin.
Too much vitamin D can make the intestines absorb too much calcium. This may cause high levels of calcium in the blood. High blood calcium can lead to:
- Calcium deposits in soft tissues such as the heart and lungs
- Confusion and disorientation
- Damage to the kidneys
- Kidney stones
- Nausea, vomiting, constipation, poor appetite, weakness, and weight loss
Some experts have suggested that a few minutes of sunlight directly on the skin of your face, arms, back, or legs (without sunscreen) every day can produce the body's requirement of vitamin D. However, the amount of vitamin D produced by sunlight exposure can vary greatly from person to person.
- People who do not live in sunny places may not make enough vitamin D within a limited time in the sun. Cloudy days, shade, and having dark-colored skin also cut down on the amount of vitamin D the skin makes.
- Because exposure to sunlight is a risk for skin cancer, exposure for more than a few minutes without sunscreen is not recommended.
The best measure of your vitamin D status is to look at blood levels of a form known as 25-hydroxyvitamin D. Blood levels are described either as nanograms per milliliter (ng/mL) or nanomoles per liter (nmol/L), where 0.4 ng/mL = 1 nmol/L.
Levels below 30 nmol/L (12 ng/mL) are too low for bone or overall health, and levels above 125 nmol/L (50 ng/mL) are probably too high. Levels of 50 nmol/L or above (20 ng/mL or above) are enough for most people.
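Because the two units are easy to mix up, here is a small Python sketch that applies the conversion and cut-offs quoted above. The function names are ours, and this illustrates the arithmetic only; it is not clinical guidance.

```python
def ng_ml_to_nmol_l(ng_ml):
    # 0.4 ng/mL = 1 nmol/L, so 1 ng/mL = 2.5 nmol/L
    return ng_ml * 2.5

def classify_25_hydroxyvitamin_d(nmol_l):
    """Rough classification using the cut-offs quoted in the text."""
    if nmol_l < 30:
        return "too low for bone or overall health"
    if nmol_l > 125:
        return "probably too high"
    if nmol_l >= 50:
        return "enough for most people"
    return "between the quoted cut-offs (30-49 nmol/L)"

level = ng_ml_to_nmol_l(24)  # 24 ng/mL -> 60.0 nmol/L
print(level, classify_25_hydroxyvitamin_d(level))  # enough for most people
```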
The Recommended Dietary Allowance (RDA) for vitamins reflects how much of each vitamin most people should get on a daily basis.
- The RDA for vitamins may be used as goals for each person.
- How much of each vitamin you need depends on your age and sex. Other factors, such as pregnancy and your health, are also important.
Infants (adequate intake of vitamin D)
- 0 to 6 months: 400 IU (10 micrograms [mcg] per day)
- 7 to 12 months: 400 IU (10 mcg/day)
- 1 to 3 years: 600 IU (15 mcg/day)
- 4 to 8 years: 600 IU (15 mcg/day)
Older children and adults
- 9 to 70 years: 600 IU (15 mcg/day)
- Adults over 70 years: 800 IU (20 mcg/day)
- Pregnancy and breastfeeding: 600 IU (15 mcg/day)
The National Osteoporosis Foundation (NOF) recommends a higher dose for people age 50 and older, 800 to 1,000 IU of vitamin D daily. Ask your health care provider which amount is best for you.
Vitamin D toxicity almost always occurs from using too many supplements. The safe upper limit for vitamin D is:
- 1,000 to 1,500 IU/day for infants (25 to 38 mcg/day)
- 2,500 to 3,000 IU/day for children 1 to 8 years; ages 1 to 3: 63 mcg/day; ages 4 to 8: 75 mcg/day
- 4,000 IU/day for children 9 years and older, adults, and pregnant and breastfeeding teens and women (100 mcg/day)
One microgram of cholecalciferol (D3) is the same as 40 IU of vitamin D.
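As a sketch, that conversion (and the RDA figures above) can be checked in a few lines of Python:

```python
IU_PER_MCG = 40  # 1 mcg of vitamin D = 40 IU

print(600 / IU_PER_MCG)  # the 600 IU adult RDA -> 15.0 mcg
print(20 * IU_PER_MCG)   # 20 mcg -> 800 IU, the RDA for adults over 70
```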
Cholecalciferol; Vitamin D3; Ergocalciferol; Vitamin D2
Mason JB, SL Booth. Vitamins, trace minerals, and other micronutrients. In: Goldman L, Schafer AI, eds. Goldman-Cecil Medicine. 26th ed. Philadelphia, PA: Elsevier; 2020:chap 205.
National Osteoporosis Foundation website. Clinician's guide to prevention and treatment of osteoporosis. cdn.nof.org/wp-content/uploads/2016/01/995.pdf. Accessed November 9, 2020.
Salwen MJ. Vitamins and trace elements. In: McPherson RA, Pincus MR, eds. Henry's Clinical Diagnosis and Management by Laboratory Methods. 23rd ed. St Louis, MO: Elsevier; 2017:chap 26.
Two coins are tossed simultaneously. Find the probability of getting at most 1 head.
When two coins are tossed simultaneously, all possible
outcomes are HH, HT, TH and TT.
Total number of possible outcomes = 4
Let E be the event of getting at most one head.
Then, the favourable outcomes are HT, TH and TT.
Number of favourable outcomes = 3
∴ P(getting at most 1 head) = Number of favourable outcomes / Total number of possible outcomes = 3/4
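The same count can be verified by brute-force enumeration; a short Python sketch:

```python
from itertools import product

outcomes = list(product("HT", repeat=2))                 # HH, HT, TH, TT
favourable = [o for o in outcomes if o.count("H") <= 1]  # at most 1 head

print(len(favourable), "/", len(outcomes))  # 3 / 4
print(len(favourable) / len(outcomes))      # 0.75
```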
Cyber safety – why is it important?
The Internet is a great information tool and can also be a lot of fun. Unfortunately, it can also open you up to potentially risky situations with serious outcomes.
Being cyber safe means acting in a smart, safe and responsible manner on the Internet and other connected environments. Cyber safety is about protecting your personal information, reputation and well-being.
Protecting children and young people
There are many things that young people can do to stay cyber safe:
- never include personal details such as phone numbers or email addresses on online profiles
- never share passwords
- think carefully before sending or posting a photo, as once it's sent it can never be retrieved
- never share personal details with (or meet up with) a stranger
- don't argue online.
However the best way for children and young people to stay cyber safe is for the adults in their lives to be actively involved.
Cyber bullying has a detrimental effect on young people and can cause mental health issues such as anxiety, poor academic achievement, poor relationships with peers, low self-esteem and loneliness. Raising awareness of bullying issues when they occur in adolescence is socially and economically more effective than dealing with enduring problems in adulthood.
Research indicates that bullying peaks twice in childhood—once in primary school and again during the transition to high school, making the implementation of programs in primary schools a key goal.
While a big focus of cyber safety education in Australia is aimed at helping to keep children and young people safe online, evidence shows that, to promote the safety of young people further, it is also important to put the focus on the adults in their lives. Many adults are not confident enough about their knowledge to feel like they can help their kids make the best use of technology. By helping parents to understand how and why their children use technology, they are better able to guide their children’s online interactions.
Cyber safety for seniors
Cyber safety is equally important for senior citizens. Our seniors are increasingly becoming more active online to keep up with family and friends, but this also increases the risk of becoming the victim of online scams and hacking.
Seniors can stay cyber safe by ensuring their computers are protected with security software and never providing personal information to strangers.
Libraries are recognised as an essential community resource for giving users the skills to be smart, safe and responsible users of digital technologies. eSmart Libraries, an Australia-wide behaviour change initiative supported by the Telstra Foundation, helps libraries improve cyber safety and deal with cyber bullying and bullying.
What is the process of breaking down large fat globules into smaller fat globules? The process is called emulsification.
Also, what is the meaning of fat globules?
In human cell biology globules of fat are the individual pieces of intracellular fat inside other cell types than adipocytes (fat cells). Intracellular fat is bound in the globular form by phospholipid membranes, which are hydrophobic. This means that fat globules are insoluble in water.
Additionally, what enzyme breaks down fat in the stomach? Lipase.
Simply so, how are large lipid globules made into smaller globules in the small intestine?
Emulsification is a process in which large lipid globules are broken down into several small lipid globules. Lipases break down the lipids into fatty acids and glycerides. These molecules can pass through the plasma membrane of the cell, entering the epithelial cells of the intestinal lining.
Why is it necessary to emulsify fat globules in the small intestine?
Emulsification is important for the digestion of lipids because lipases can only efficiently act on the lipids when they are broken into small aggregates. Lipases break down the lipids into fatty acids and glycerides. |
If you look at a drop of pond water under a microscope, all the "little creatures" you see swimming around are protists.
All protists have a nucleus and are therefore eukaryotic. Protists are either plant-like, animal-like or fungus-like.
Plant-like protists are autotrophs – they contain chloroplasts and make their own food. Animal-like and fungus-like protists are heterotrophs.
Protozoans are animal-like protists (heterotrophs) grouped according to how they move. The word protozoa means "little animal." They are so named because many species behave like tiny animals—specifically, they hunt and gather other microbes as food.
All protozoa digest their food in stomach-like compartments called vacuoles <vac-you-ohls>. As they chow down, they make and give off nitrogen, which is an element that plants and other higher creatures can use. Protozoa range in size from 1/5,000 to 1/50 of an inch (5 to 500 µm) in diameter. They can be classified into three general groups based on how they move.
The first group is the phylum Rhizopoda. These are amoebae <ah-me-bee>, which can be subdivided into the testate amoebae, which have a shell-like covering, and the naked amoebae, which don't have this covering.
Amoebae ooze along by means of pseudopodia (false feet) engulfing food as they go.
Amoebae live in water or moist places. They have a cell membrane but no cell wall.
The second group is the Flagellates <flah-geh-lets>, of the phylum Zoomastigina. Flagellates are generally the smallest of the protozoa and have one or several long, whip-like projections called flagella poking out of their cells. Flagellates use their flagella to move.
It is a flagellate in the intestines of termites that enables them to digest wood.
The third group of protozoans are the ciliates from the phylum Ciliophora. These are generally the largest protozoa. They are covered with hair-like projections called cilia and they eat the other two types of protozoa as well as bacteria. Ciliates are found in every aquatic habitat.
The last of the protozoans comes from the phylum Sporozoa. These are parasitic and nonmotile. For example…
Plant-like protists are algae. Algae are eukaryotic autotrophs. They, along with other eukaryotic autotrophs, form the foundation of Earth’s food chains. They produce much of Earth’s oxygen.
There are three unicellular phyla of algae: • Phylum Euglenophyta • Phylum Bacillariophyta • Phylum Dinoflagellata
Members of the first phylum of algae, Euglenophyta, are both plant-like and animal-like. • Euglena are autotrophs since they make food from sunlight and • heterotrophs since they ingest food from surrounding water.
The second unicellular algae, Bacillariophyta, are photosynthetic autotrophs. • They have shells of silica. • They make up a large portion of the world’s phytoplankton which is Earth’s largest provider of oxygen.
The third unicellular algae, Dinoflagellata, are a major component of marine phytoplankton. • These algae have at least two flagella set at right angles to each other and thick cell walls made of cellulose plates. • Blooms of dinoflagellates cause “Red Tide.”
Rhodophyta are red seaweeds. • They are found in warm or cold marine environments along coast lines in deeper water. • They absorb green, violet, and blue light waves. These light waves are able to penetrate below 100 meters.
Phylum Phaeophyta is made up of the brown algae. • They are found in cool saltwater along rocky coasts. • Giant Kelp are the largest and most complex brown algae. They have holdfasts and air bladders.
The last of the multicellular algae are the green algae from the Phylum Chlorophyta. • Most green algae are found in freshwater habitats.
A Volvox is a hollow ball composed of hundreds of flagellated cells in a single layer.
Fungus-like protists, Myxomycota and Oomycota, are decomposers. • Phylum Myxomycota is made up of plasmodial slime molds. • Phylum Oomycota is made up of water molds and downy molds.
Slime Molds
Slime molds have traits like both fungi and animals. During good times, they live as independent, amoeba-like cells, dining on fungi and bacteria. But if conditions become uncomfortable—not enough food available, the temperature isn't right, etc.—individual cells begin gathering together to form a single structure. The new communal structure produces a slimy covering and is called a slug because it so closely resembles the animal you sometimes see gliding across sidewalks. The slug oozes toward light. When the communal cells sense that they've come across more food or better conditions, the slug stops.
Water molds from the Phylum Oomycota are classified as protists because they have flagellated reproductive cells. • Downy mildews parasitize plants and are decomposers in freshwater ecosystems.
The Sierra Nevada contains thousands of lakes and ponds. Despite their importance to wildlife and people, until recently they were relatively little-studied. As a consequence, the species inhabiting these water bodies, structure of lake food webs, and impacts of fish introductions were all poorly understood. We’ve sought to remedy this using landscape-scale surveys, detailed observational studies, and whole-lake experiments.
In the late 1990s and early 2000s, we surveyed more than 7,000 lakes and ponds in the central and southern Sierra Nevada. We used data collected in this survey effort to describe these habitats and their species composition, including native amphibians, benthic macroinvertebrates, and zooplankton, as well as non-native fish. This information and subsequent studies allowed us to describe the effect of non-native fish on lake fauna, the recovery of this fauna following fish removal, the impact of changes in lake communities on the adjacent terrestrial ecosystem, and the effect of mountain yellow-legged frog extinctions on lake ecosystems.
The findings from our research provide the foundation for lake recovery efforts being implemented in the Sierra Nevada by the National Park Service, U.S. Forest Service, and California Department of Fish and Wildlife. These efforts will ensure that sufficient numbers of lakes remain in their original fishless condition to support viable populations of native species.
In the almost half century since the Drake Equation was first conceived, a number of profound discoveries have been made that require each of the seven variables of this equation to be reconsidered. The discovery of hydrothermal vents on the ocean floor, for example, as well as the ever more extreme conditions in which life is found on Earth, suggests a much wider range of possible extraterrestrial habitats. The growing consensus that life originated very early in Earth's history also supports this suggestion. The discovery of exoplanets around a wide range of host star types, with attendant habitable zones, suggests that life may be possible in planetary systems with stars quite unlike our Sun. Stellar evolution also plays an important part, in that habitable zones are mobile. The increasing brightness of our Sun over the next few billion years will place the Earth well outside the present habitable zone, which will then encompass Mars, giving rise to the notion that some Drake Equation variables, such as the fraction of planets on which life emerges, may have multiple values.
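For readers who want to experiment with the argument, here is a minimal sketch of the equation itself, N = R* × fp × ne × fl × fi × fc × L. The input values below are placeholders chosen purely for illustration; as the abstract argues, each variable is uncertain and some may take multiple values.

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N: expected number of detectable civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

N = drake(r_star=1.0,    # rate of star formation (stars per year)
          f_p=0.5,       # fraction of stars with planetary systems
          n_e=2,         # habitable planets per such system
          f_l=0.1,       # fraction of those on which life emerges
          f_i=0.01,      # fraction developing intelligent life
          f_c=0.1,       # fraction producing detectable signals
          lifetime=1e4)  # years a civilisation remains detectable
print(N)  # 1.0 with these illustrative inputs
```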
The recycling of aluminium generally produces significant cost savings over the production of new aluminium even when the cost of collection, separation and recycling are taken into account. Over the long term, even larger national savings are made when the reduction in the capital costs associated with landfills; mines and international shipping of raw aluminium are considered.
The environmental benefits of recycling aluminium are also enormous. Only around 5% of the CO2 is produced during the recycling process compared to producing raw aluminium (and an even smaller percentage when considering the complete cycle of mining and transporting the aluminium). Also, open-cut mining is most often used for obtaining aluminium ore, which destroys large sections of the world’s natural land.
One of the reasons why only 31% of scrap aluminium is recycled is that it's cheaper for the aluminium producer to make new aluminium than it is to find, collect, identify, separate, and clean the aluminium parts in old products. Some manufacturers like to paint aluminium solely for aesthetic reasons; this creates problems for recyclers because the paint releases extremely toxic fumes when the aluminium is re-melted. Most of the aluminium that's recycled comes from pre-consumer factory waste.
Aluminium lasts practically forever: 500-year-old aluminium would be just as good as aluminium made 50 years ago, because it doesn't rust or corrode like other metals. Strategic planning may dictate that it's most economical to stockpile scrap aluminium for future use while energy is still relatively cheap. Whatever the case may be, the global supply of easily accessible scrap aluminium is not enough to meet current demand for aluminium.
- Aluminium is the third most abundant element in the earth’s crust and is the earth’s second most used metal.
- The aluminium drink can is the world’s most recycled packaging container – worldwide over 50% of aluminium cans are recycled.
- Nearly 60% of the aluminium used in the UK has been previously recycled
- Recycling aluminium drink cans saves up to 95% of the energy needed to make aluminium from its raw materials.
- Making one aluminium drink can from raw materials uses the same amount of energy that it takes to recycle 20.
- Recycling 1 kg of aluminium saves 8 kg of bauxite, 4 kg of chemical products and 14 kilowatt-hours of electricity (see the arithmetic sketch after this list).
- The energy saved by recycling 1 aluminium drink can is enough to run a television for three hours.
- In 2000 the UK consumed 5 billion aluminium drinks cans, of which 42% were recycled. This is lower than the European leaders – Switzerland and Finland currently have the highest recycling rate at 91%.
- The average person uses 1.3kg of aluminium cans a year – that’s about 84 cans
- The average household uses 3.2kg of aluminium cans a year – that’s about 208 cans.
- An aluminium drink can contains four different aluminium alloys: the can body, can end, ring pull and the rivet attaching the ring pull.
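Several of the figures in this list can be cross-checked with simple arithmetic. A Python sketch follows; the 60 W television rating is our assumption, while the can masses fall out of the list's own numbers:

```python
# Implied mass of one aluminium drink can, from the figures above
print(1.3 * 1000 / 84)   # ~15.5 g per can (per-person figures)
print(3.2 * 1000 / 208)  # ~15.4 g per can (household figures agree)

# Energy saved per can, given 14 kWh saved per kg of recycled aluminium
kwh_per_can = 14 * 15.5 / 1000
print(kwh_per_can)              # ~0.217 kWh
print(kwh_per_can * 1000 / 60)  # ~3.6 hours on a 60 W television,
                                # roughly the "three hours" figure above
```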
Steel is also made from a mined ore. Iron ore is plentiful, but the iron in it is usually combined with oxygen, or sometimes carbon or sulphur. The iron ore is reduced in a blast furnace to pig iron, which can then be used in steel production.
There are currently about 11 million tonnes per annum of iron and steel scrap arisings. About 70% of this scrap is recovered; of the remainder, two-thirds is landfilled.
Facts and Figures
- All steel cans are 100% recyclable. They can be recycled over and over again, to make anything from cars and bicycles to more steel cans, without any loss of quality!
- Steel is the only common metal that will stick to a magnet.
- In the UK, we use 13 billion steel cans every year. Stacked on top of each other, you could make three piles of cans that would reach to the moon.
- Steel is made from one of the earth’s most common natural resources, iron ore, as well as limestone and coal.
- Steel is strong and durable, protecting from water, oxygen and light. These qualities make steel an excellent packaging material for food and drink, and for household, promotional and industrial products.
- Every household uses approximately 600 steel cans a year.
- Steel is the most recycled metal in the UK – and in the world.
- The thinnest part of a steel can wall measures only 0.07mm thick – that’s thinner than a human hair.
- It would take 1087 steel drinks cans stacked end to end to reach the top of the London Eye – or 2818 to reach the top of the Eiffel Tower.
- 70% of all steel packaging is recycled, compared to just over 30% of aluminium packaging.
- Steel cans are becoming lighter. The average weight of a soft drinks can is only 21.4g, compared with 31.2g in 1980.
- Over 3 billion cans are recycled in the UK each year – equivalent to the weight of 18,000 double decker buses.
- All steel cans contain up to 25% recycled steel.
- It’s not just food and drink that come in steel cans. Many paint cans, aerosols, biscuit and sweet tins, and bottle tops are made of steel too.
- Recycling one tonne of steel cans saves 1.5 tonnes of iron ore and 0.5 tonnes of coal, and reduces water usage by 40%.
- Two-thirds of all cans on supermarket shelves are made of steel.
- Recycling seven steel cans saves enough energy to power a 60-watt light bulb for 26 hours (see the sketch after this list).
- Steel in Europe contains 54% recycled steel and is 100% recyclable. |
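One more back-of-the-envelope check, as a sketch: the light-bulb figure above implies the energy saved per recycled steel can.

```python
kwh_saved = 60 * 26 / 1000  # a 60 W bulb for 26 hours = 1.56 kWh
print(kwh_saved / 7)        # ~0.223 kWh saved per steel can recycled
```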
As one of the five senses, hearing is one of the human body’s most extraordinary processes. It is a complex system made up of delicate and synchronous parts. Read this blog to learn more about our ears and how they help us to hear.
Sound begins with a vibration in the atmosphere. When a sound is made (whether from wind, a bell or a voice), it moves the air particles around it. Those air particles, in turn, move the air particles around them, carrying the energy of the vibration through the air as a sound wave. That’s where your ear comes in.
Sound waves are collected by the outer ear and directed along the ear canal to the eardrum. Did you know that the shape of your outer ear is as unique as you are? It plays an important role in how you hear. Called the pinna, its funnel-like shape and curvy design enable you to determine the direction a sound is coming from.
When the sound waves hit the eardrum, the impact creates more vibrations, which cause the three bones of the middle ear to move. The smallest of these bones, the stirrup, fits into the oval window between the middle and inner ear.
When the oval window moves, fluid in the inner ear moves, carrying the energy through a delicate, snail-shaped structure called the cochlea.
In the inner ear, thousands of microscopic hair cells are bent by the wave-like action of the fluid inside the cochlea. The bending of these hairs sets off nerve impulses, which are then passed through the auditory nerve to the hearing center of the brain. This center translates the impulses into sounds the brain can recognize, like words, music or laughter, for instance.
If any part of this delicate system breaks down, hearing loss can be the result. If you have any questions about your hearing or a loved one’s hearing, give us a call today. |
But reality is not always so orderly. If protons accumulate faster than the mitochondria's small molecular waterwheels can put them to use, they seep back through the membrane in other ways. And in skeletal muscle cells, this leakage of protons produces substantial amounts of heat. It is believed to help keep polar animals warm, said Traver Wright, a professor at Texas A&M University and an author of the new study.
To see how much proton leak occurs in sea otters, Dr. Wright and his colleagues placed muscle samples from 21 animals in a special chamber that allowed the researchers to monitor the ins and outs of the mitochondria in the otters' cells. They discovered that sea otter muscle is capable of leaking huge amounts of protons, suggesting a substantial heat-producing capacity. And they were surprised to find that this ability was present in young otters and full-grown adults alike.
In general, an organism's metabolic capacity is related to its level of activity, Dr. Wright said. But young otters, at an age when they still relied on their mothers; adults of all sizes; and even a relatively inactive captive otter all had equally high metabolisms and great proton-leak capacity. In fact, they had higher rates than even Iditarod sled dogs.
“Their leak metabolic rate is nowhere near as high as in sea otters,” Dr. Wright said of the dogs. For otters, he added, “this generation of heat is really the driving force behind their elevated metabolism.”
Sea otters burn calories even without much physical activity, because that energy is converted directly into heat, the findings suggest. Otters are among the only animals studied to date in which proton leak can account for almost all of their high metabolism, Dr. Wright said.