Heartburn, Hiatal Hernia, and Gastroesophageal Reflux Disease (GERD)
What Is GERD?
Gastroesophageal reflux disease, or GERD, occurs when the lower esophageal sphincter (LES) does not close properly and stomach contents leak back, or reflux, into the esophagus. The LES is a ring of muscle at the bottom of the esophagus that acts like a valve between the esophagus and stomach. The esophagus carries food from the mouth to the stomach.
When refluxed stomach acid touches the lining of the esophagus, it causes a burning sensation in the chest or throat called heartburn. The fluid may even be tasted in the back of the mouth, and this is called acid indigestion. Occasional heartburn is common but does not necessarily mean one has GERD. Heartburn that occurs more than twice a week may be considered GERD, and it can eventually lead to more serious health problems.
Anyone, including infants, children, and pregnant women, can have GERD.
What are the symptoms of GERD?
GERD in Children
What causes GERD?
How is GERD treated?
What if symptoms persist?
What are the long-term complications of GERD?
Hope Through Research |
Data recovery is the process of restoring files and data that were lost due to deletion, erasure, system crash, reformatting, or hardware damage.
In order to understand what happens when you scan with a high-quality data recovery program, it’s important to know how your hard drive stores and accesses data. If you are unfamiliar with this topic, you might want to review our article on how hard drives work. As we learned, hard drives store a file as a string of 1s and 0s. But what happens when you delete a file?
When a file is deleted, your ability to access it is gone. There is no icon, and no way to open it. However, the actual information is still recorded, magnetized in binary code on your hard disk. Think of it kind of like a page in a book. You remove the page listing from the table of contents, and you erase the page number, making it impossible to find. But the page is still there, with the same information.
You can also think of it like painting. If an artist paints a scene, he may decide that a tree should be removed. He doesn’t erase the tree; he simply paints over it. In the same way, your file information is not erased, but is prepared to be ‘repainted’. This is the case for hard drives, deleted SD memory card files, deleted files from USB drives, and deleted iPod music.
So when deleting a file, the computer places the corresponding hard drive location in the ‘empty space’ category. The computer ignores this information completely, so there’s no way for you to access it. Next time you need to save something, this space may be used. Using our book example, the page is not erased, but it’s put in the scrap paper pile, to be re-used when necessary.
The same is also true for files and data lost through crashes, reformats, etc. This file is not accessible, but data lingers on the hard drive in its binary form.
But the important question is, how do you get that data back?
Data recovery software performs a scan of your hard drive itself, not of the files which are visible in Windows. Even though you can’t see the file, the data recovery software can. And it can recover your missing file or restore your deleted email.
With advanced scanning technologies, the software directs the read/write head of the hard drive to read all parts of the platters, even areas which are recorded as blank. As we learned, these areas are not really blank, but contain information which has been deleted and is now considered free space.
In this way, the data recovery program reads information and files which the computer (and you) would otherwise think was gone.
The data recovery program can apply various detection methods to identify files, recognizing that a certain sort of information is a picture, one type is an email, another sort is a text document, and so on.
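As a rough illustration of one such detection method – signature (or “magic number”) matching – here is a minimal sketch in C. It simply scans a raw image file for the three-byte JPEG start-of-image marker FF D8 FF; the file name disk.img is only a placeholder, and real recovery tools combine this kind of carving with file-system metadata, footer checks, and much more.

#include <stdio.h>

int main(void)
{
    FILE *img = fopen("disk.img", "rb");    /* raw dump of the area being scanned */
    if (img == NULL) {
        perror("disk.img");
        return 1;
    }

    unsigned char prev2 = 0, prev1 = 0;     /* the two previously read bytes */
    long offset = 0;
    int c;

    while ((c = fgetc(img)) != EOF) {
        /* JPEG files begin with the byte sequence FF D8 FF */
        if (prev2 == 0xFF && prev1 == 0xD8 && (unsigned char)c == 0xFF)
            printf("Possible JPEG header at offset %ld\n", offset - 2);
        prev2 = prev1;
        prev1 = (unsigned char)c;
        offset++;
    }

    fclose(img);
    return 0;
}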
The detected data is then recovered. To do this, the data recovery program makes the computer recognize it as a file, re-interpret it and transform it back into a file we understand. After recovery, the file will have an icon, a file name, and can be accessed just as before. The data on the hard drive is no longer marked as empty, so there is no longer a risk of the files being overwritten.
Since all deleted files linger as ghost information on the hard drive, a data recovery scan can dredge up lots of old files, and it may take some time for a thorough scan. Performing a text scan can simplify the process, and help you locate your file that much faster. |
Many capable children at all grade levels experience frustration and failure in school, not because they lack ability, but because they do not have adequate study skills. Good study habits are important for success in school, to foster feelings of competence, to develop positive attitudes, and to help children realize they can control how well they do in school and in life. Good study habits lay the groundwork for successful work habits as an adult.
There are 24 articles in this section.
Taking good notes while reading can help students improve concentration and actively engage with what they are reading. This excerpt from Homework Made Simple: Tips, Tools and Solutions for Stress-Free Homework describes a number of effective note-taking methods.
Teach your students to avoid the avoidance of writing. Learn how to lead them down the path of enthusiasm and self-confidence about writing through research-proven strategies.
The Coordinated Campaign for Learning Disabilities has compiled a list of strategies that parents can use to help their child develop good organizational skills.
Help your students remember their math facts. Mnemonic instruction is particularly helpful for students with short-term memory problems. Learn how to use three important strategies: key words, pegwords, and letters. |
A recent Time magazine had fusion power highlighted on its cover, with the thought, “It might actually work this time”. The concept is that fusion power, if you can make it work, will provide essentially unlimited power, and that will be totally greenhouse gas free. The usual gaseous product is helium, which has no vibrational spectrum at all because it does not form molecules, at least none that have any lifetime that we know of. The so-called miracle here is that this fusion power will come from small companies, and not from government funding to academics. As an example of the biggest government-funded effort, ITER is being constructed in the south of France at a cost of $20 billion, and will take at least another twelve years to construct, then some unknown amount of time to “debug”.
The problems are reasonably obvious. If you can make fusion work, you require temperatures on the order of a hundred million degrees centigrade. There is no solid material that survives above about 4,000 degrees centigrade, so how do you contain this? The short answer is with magnetic fields. Unfortunately, the material becomes a plasma after a few thousand degrees, and plasmas are weird. They are extremely difficult to control; nevertheless, in principle plasmas can be controlled with magnetic fields. It is also possible to arrange for the plasmas to generate their own containment fields, but this is not generally done. Of course, to get the necessary magnetic field strength you consume an awful lot of electricity, and a lot more to get the material up to reaction temperature, so there is also the problem of getting more energy out than you put in. There have been fusion reactions in the lab, but so far these have always consumed more electricity than they could produce. Of course, they were not designed to make electricity; rather, they were designed to uncover and solve the problems in getting fusion to work usefully.
The reason you get so much more energy out than from any other technology lies in the strength of binding. One of the strongest chemical bonds is in the hydrogen molecule, which has a binding energy of about 4.5 electron volts (or about 436 kilojoules per mole). The binding energy of deuterium (a proton and a neutron) is about 2.3 MeV, i.e., about a million times more. 4He (the usual form of helium, two protons and two neutrons) is bound by a bit over 28 MeV. So, if you react two deuterium nuclei to form helium, you get about 23.4 MeV. The easiest reaction to get working would be to react deuterium with tritium (a proton bound to two neutrons), which reacts to give helium plus a neutron. The problem with this is that the neutrons fly off, hit the walls of the reactor, and react with them, making them radioactive. You have to replace your reactor walls every six months, which is an added expense. To add to your troubles, tritium is not that stable, and you have to make it somewhere. So, what to do?
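Using the rounded figures above, the energy bookkeeping for fusing two deuterons into helium-4 is just a difference in binding energies:

Q = B(4He) − 2 × B(D) ≈ 28 MeV − 2 × 2.3 MeV ≈ 23.4 MeV

per pair of deuterons fused, which is where that figure of about 23.4 MeV comes from.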
One solution that a company called General Fusion (www.generalfusion.com) has come up with is to compress the plasma in a vortex of metal that includes lithium, and lithium captures neutrons and makes tritium. I must confess that as an outsider who is somewhat ignorant of the problems, I like the thinking here. Another reaction is being tried by a company called Tri Alpha https://en.wikipedia.org/wiki/Tri_Alpha_Energy,_Inc.
(they do not seem to have a website that rates on Google!) They appear to have chosen a different reaction still: firing a proton (a hydrogen nucleus) at boron 11 (to get carbon 12, which is a very stable nucleus). All you need to make this go is a billion degrees! That figure can also be misleading. Heat is random kinetic energy, and what you try to do by heating is to get some particles going fast enough that when they hit, the kinetic energy is enough to get over the barrier to reaction. This could be done “cold” if the protons were accelerated fast enough, in which case you have directed kinetic energy.
So, how long is this going to take? Who knows? Not in the immediate future, since at present the problems being solved are still scientific ones, and once these are sorted, there will still be engineering problems, including how to get power from the heat. In my futuristic novel “Troubles” I guessed fusion would be made to work about 2050, and my proposed method of recovering energy was the so-called magnetohydrodynamic effect. There was a power station built in the Soviet Union that worked by taking energy from a plasma, the plasma being made from coal. I gather that while it worked, it generated about 60% of the available energy, which is much better than standard coal-fired plants, and it was limited by the fact that plasmas collapse in the region of about 1500 degrees Centigrade. However, their plasmas were unlikely to exceed 4000 degrees, energy recovery depends on the temperature range (4000 down to about 1800 degrees), and what is thrown out below that is lost. (The plant also probably failed because coal contains silicates, and these would produce a slag. No slag is possible from making helium.) From over a hundred million degrees, the losses due to the second law of thermodynamics applied to plasma collapse are negligible. As an aside, my guess was the use of the deuterium–deuterium reaction, to keep neutrons to a modest amount. (There will be some 3He + n.) Unfortunately, I probably will not live long enough to see whether that guess turns out to be right. |
The New Zealand Curriculum (NZC) is the guiding document for teaching and learning in all NZ schools. Below you will find some valuable information about this guiding document. Should you wish to read more about the NZC you can visit the Ministry of Education website. The link is at the bottom of this page.
At Clevedon School, we aim to implement all aspects of the NZC through a robust and integrated programme. Within each class the learning is organised in a way that works to meet the needs of the individuals within it. For more information about the units of work and the learning taking place in your child's class, please visit the team and class pages. And, as always, we encourage you to speak to your child's teacher about your child's learning if you need any further information.
The key competencies
The New Zealand Curriculum identifies five key competencies:
thinking
using language, symbols, and texts
managing self
relating to others
participating and contributing
People use these competencies to live, learn, work, and contribute as active members of their communities. More complex than skills, the competencies draw also on knowledge, attitudes, and values in ways that lead to action. They are not separate or stand-alone. They are the key to learning in every learning area.
The development of the competencies is both an end in itself (a goal) and the means by which other ends are achieved. Successful learners make use of the competencies in combination with all the other resources available to them. These include personal goals, other people, community knowledge and values, cultural tools (language, symbols, and texts), and the knowledge and skills found in different learning areas. As they develop the competencies, successful learners are also motivated to use them, recognising when and how to do so and why.
Opportunities to develop the competencies occur in social contexts. People adopt and adapt practices that they see used and valued by those closest to them, and they make these practices part of their own identity and expertise.
The competencies continue to develop over time, shaped by interactions with people, places, ideas, and things. Students need to be challenged and supported to develop them in contexts that are increasingly wide-ranging and complex.
Thinking

Thinking is about using creative, critical, and metacognitive processes to make sense of information, experiences, and ideas. These processes can be applied to purposes such as developing understanding, making decisions, shaping actions, or constructing knowledge. Intellectual curiosity is at the heart of this competency.
Students who are competent thinkers and problem-solvers actively seek, use, and create knowledge. They reflect on their own learning, draw on personal knowledge and intuitions, ask questions, and challenge the basis of assumptions and perceptions.
Using language, symbols, and texts
Using language, symbols, and texts is about working with and making meaning of the codes in which knowledge is expressed. Languages and symbols are systems for representing and communicating information, experiences, and ideas. People use languages and symbols to produce texts of all kinds: written, oral/aural, and visual; informative and imaginative; informal and formal; mathematical, scientific, and technological.
Students who are competent users of language, symbols, and texts can interpret and use words, number, images, movement, metaphor, and technologies in a range of contexts. They recognise how choices of language, symbol, or text affect people’s understanding and the ways in which they respond to communications. They confidently use ICT (including, where appropriate, assistive technologies) to access and provide information and to communicate with others.
Managing self

This competency is associated with self-motivation, a "can-do" attitude, and with students seeing themselves as capable learners. It is integral to self-assessment.
Students who manage themselves are enterprising, resourceful, reliable, and resilient. They establish personal goals, make plans, manage projects, and set high standards. They have strategies for meeting challenges. They know when to lead, when to follow, and when and how to act independently.
Relating to others
Relating to others is about interacting effectively with a diverse range of people in a variety of contexts. This competency includes the ability to listen actively, recognise different points of view, negotiate, and share ideas.
Students who relate well to others are open to new learning and able to take different roles in different situations. They are aware of how their words and actions affect others. They know when it is appropriate to compete and when it is appropriate to co-operate. By working effectively together, they can come up with new approaches, ideas, and ways of thinking.
Participating and contributing
This competency is about being actively involved in communities. Communities include family, whānau, and school and those based, for example, on a common interest or culture. They may be drawn together for purposes such as learning, work, celebration, or recreation. They may be local, national, or global. This competency includes a capacity to contribute appropriately as a group member, to make connections with others, and to create opportunities for others in the group.
Students who participate and contribute in communities have a sense of belonging and the confidence to participate within new contexts. They understand the importance of balancing rights, roles, and responsibilities and of contributing to the quality and sustainability of social, cultural, physical, and economic environments.
Learning areas
Important for a broad, general education
The New Zealand Curriculum specifies eight learning areas: English, the arts, health and physical education, learning languages, mathematics and statistics, science, social sciences, and technology.
The learning associated with each area is part of a broad, general education and lays a foundation for later specialisation. Like the key competencies, this learning is both end and means: valuable in itself and valuable for the pathways it opens to other learning.
While the learning areas are presented as distinct, this should not limit the ways in which schools structure the learning experiences offered to students. All learning should make use of the natural connections that exist between learning areas and that link learning areas to the values and key competencies.
Learning areas and language
Each learning area has its own language or languages. As students discover how to use them, they find they are able to think in different ways, access new areas of knowledge, and see their world from new perspectives.
For each area, students need specific help from their teachers as they learn:
the specialist vocabulary associated with that area
how to read and understand its texts
how to communicate knowledge and ideas in appropriate ways
how to listen and read critically, assessing the value of what they hear and read.
In addition to such help, students who are new learners of English or coming into an English-medium environment for the first time need explicit and extensive teaching of English vocabulary, word forms, sentence and text structures, and language uses.
As language is central to learning and English is the medium for most learning in the New Zealand Curriculum, the importance of literacy in English cannot be overstated.
In English, students study, use, and enjoy language and literature communicated orally, visually, or in writing.
In the arts, students explore, refine, and communicate ideas as they connect thinking, imagination, senses, and feelings to create works and respond to the works of others.
In health and physical education, students learn about their own well-being, and that of others and society, in health-related and movement contexts.
In learning languages, students learn to communicate in an additional language, develop their capacity to learn further languages, and explore different world views in relation to their own.
In mathematics and statistics, students explore relationships in quantities, space, and data and learn to express these relationships in ways that help them to make sense of the world around them.
In science, students explore how both the natural physical world and science itself work so that they can participate as critical, informed, and responsible citizens in a society in which science plays a significant role.
In the social sciences, students explore how societies work and how they themselves can participate and take action as critical, informed, and responsible citizens.
In technology, students learn to be innovative developers of products and systems and discerning consumers who will make a difference in the world.
Values
To be encouraged, modelled, and explored
Values are deeply held beliefs about what is important or desirable. They are expressed through the ways in which people think and act.
Every decision relating to curriculum and every interaction that takes place in a school reflects the values of the individuals involved and the collective values of the institution.
The values on the list below enjoy widespread support because it is by holding these values and acting on them that we are able to live together and thrive. The list is neither exhaustive nor exclusive.
Students will be encouraged to value:
excellence, by aiming high and by persevering in the face of difficulties
innovation, inquiry, and curiosity, by thinking critically, creatively, and reflectively
diversity, as found in our different cultures, languages, and heritages
equity, through fairness and social justice
community and participation for the common good
ecological sustainability, which includes care for the environment
integrity, which involves being honest, responsible, and accountable and acting ethically
and to respect themselves, others, and human rights.
The specific ways in which these values find expression in an individual school will be guided by dialogue between the school and its community. They should be evident in the school’s philosophy, structures, curriculum, classrooms, and relationships. When the school community has developed strongly held and clearly articulated values, those values are likely to be expressed in everyday actions and interactions within the school.
Through their learning experiences, students will learn about:
their own values and those of others
different kinds of values, such as moral, social, cultural, aesthetic, and economic values
the values on which New Zealand’s cultural and institutional traditions are based
the values of other groups and cultures.
Through their learning experiences, students will develop their ability to:
express their own values
explore, with empathy, the values of others
critically analyse values and actions based on them
discuss disagreements that arise from differences in values and negotiate solutions
make ethical decisions and act on them.
All the values listed above can be expanded into clusters of related values that collectively suggest their fuller meanings. For example, 'community and participation for the common good' is associated with values and notions such as peace, citizenship, and manaakitanga.
Principles
Foundations of curriculum decision making
The principles set out below embody beliefs about what is important and desirable in school curriculum – nationally and locally. They should underpin all school decision making.
These principles put students at the centre of teaching and learning, asserting that they should experience a curriculum that engages and challenges them, is forward-looking and inclusive, and affirms New Zealand’s unique identity.
Although similar, the principles and the values have different functions. The principles relate to how curriculum is formalised in a school; they are particularly relevant to the processes of planning, prioritising, and review. The values are part of the everyday curriculum – encouraged, modelled, and explored.
All curriculum should be consistent with these eight statements:
High expectations

The curriculum supports and empowers all students to learn and achieve personal excellence, regardless of their individual circumstances.
Treaty of Waitangi
The curriculum acknowledges the principles of the Treaty of Waitangi, and the bicultural foundations of Aotearoa New Zealand. All students have the opportunity to acquire knowledge of te reo Māori me ōna tikanga.
Cultural diversity

The curriculum reflects New Zealand’s cultural diversity and values the histories and traditions of all its people.
Inclusion

The curriculum is non-sexist, non-racist, and non-discriminatory; it ensures that students’ identities, languages, abilities, and talents are recognised and affirmed and that their learning needs are addressed.
Learning to learn
The curriculum encourages all students to reflect on their own learning processes and to learn how to learn.
Community engagement

The curriculum has meaning for students, connects with their wider lives, and engages the support of their families, whānau, and communities.
Coherence

The curriculum offers all students a broad education that makes links within and across learning areas, provides for coherent transitions, and opens up pathways to further learning.
Future focus

The curriculum encourages students to look to the future by exploring such significant future-focused issues as sustainability, citizenship, enterprise, and globalisation. |
What is an interrupt?
An interrupt is an external or internal event that interrupts the microcontroller to inform it that a device needs its service.
Why do we need interrupts?
A single microcontroller can serve several devices in one of two ways:
- Interrupt – Whenever a device needs service, it notifies the microcontroller by sending it an interrupt signal. Upon receiving an interrupt signal, the microcontroller interrupts whatever it is doing and serves the device. The program associated with the interrupt is called the interrupt service routine (ISR) or interrupt handler.
- Polling – The microcontroller continuously monitors the status of a given device. When the condition is met, it performs the service. After that, it moves on to monitor the next device until every device has been serviced.
Advantages of interrupts
– The polling method is not efficient, since it wastes much of the microcontroller’s time polling devices that do not need service. The advantage of interrupts is that the microcontroller can serve many devices, and each device can get the attention of the microcontroller based on its assigned priority. With the polling method it is not possible to assign priorities, since all devices are checked in a round-robin fashion.
How does an interrupt work?
- Whenever a device needs the microcontroller’s service, it notifies the microcontroller by sending it an interrupt signal.
- Upon receiving an interrupt signal, the microcontroller interrupts whatever it is doing and saves the address of the next instruction (the program counter, PC) on the stack.
- It jumps to a fixed location in memory, called the interrupt vector table, that holds the address of the ISR (interrupt service routine). Each interrupt has its own ISR. The microcontroller gets the address of the ISR from the interrupt vector table and jumps to it.
- It executes the interrupt service routine until it reaches the last instruction of the routine, which is RETI (return from interrupt). (RETI is not written explicitly in C code; the compiler inserts it.)
- Upon executing the RETI instruction, the microcontroller returns to the place where it was interrupted. First, it restores the program counter (PC) by popping the top two bytes of the stack into the PC.
- Then it starts executing from that address and continues what it was executing before.
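As a concrete illustration, here is a minimal sketch of an external-interrupt ISR in C for the ATmega32, assuming the avr-gcc toolchain and its avr-libc headers (the register, bit, and vector names below come from those standard headers):

#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t event_count = 0;   /* shared with main(), so declared volatile */

/* ISR for external interrupt INT0; the compiler appends RETI automatically */
ISR(INT0_vect)
{
    event_count++;                  /* keep the handler short */
}

int main(void)
{
    MCUCR |= (1 << ISC01);          /* trigger INT0 on a falling edge */
    GICR  |= (1 << INT0);           /* unmask external interrupt INT0 */
    sei();                          /* set the global interrupt enable bit */

    while (1) {
        /* the main loop does other work; the ISR runs whenever INT0 fires */
    }
}

When INT0 fires, the hardware saves the return address, vectors through the INT0 slot of the vector table, runs the ISR, and the RETI emitted by the compiler restores the program counter – exactly the sequence described in the steps above.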
Interrupt vector table
The interrupt vector table shows the priority of the different interrupts. Upon reset, all interrupts are disabled (masked), meaning that the microcontroller will not respond to any of them if they are activated. There are 21 interrupts in total in the ATmega32 microcontroller.
- Applications – Interrupts are used to provide services to devices efficiently.
|
The tale of how the death of the dinosaurs paved the way for the rise of our ancestors is one known by every schoolchild. Today, scientists announce that it is wrong.
The traditional telling of the apocalyptic story goes as follows: the dinosaurs ruled the Earth for hundreds of millions of years, until an asteroid struck the Yucatan Peninsula of Mexico 65 million years ago, triggering a mass extinction that allowed the ancestors of today’s mammals to thrive and paving the way for the rise of man.
The asteroid part of the story is still true, but a study published today in the journal Nature challenges the oft-held belief that the demise of the dinosaurs played a major role in the rise of our ancient ancestors, suggesting that global warming and the appearance of flowers could have been much more important.
An international team, including members from Imperial College London and the Zoological Society of London, has constructed a complete evolutionary tree tracing the history of all 4,500 mammal species on Earth. The tree puts the major diversification 10–15 million years after the asteroid strike, casting doubt on the role the dinosaur die-off played in the success of our present-day mammals.
The tree, the product of more than a decade of effort, is based on existing fossil records and new molecular analyses.
They show that many of the genetic 'ancestors’ of the mammals we see around us today existed 85 million years ago, and survived the meteor impact that is thought to have killed the dinosaurs.
There was a small pulse of mammalian diversification immediately after the dinosaur die-off. However, most of these groups have since either died out completely, such as Andrewsarchus, a species of mesonychid (an extinct group of aggressive wolf-like cows), or declined in diversity, such as the group containing sloths and armadillos.
The researchers believe that our 'ancestors’, and those of all other mammals on earth now, radiated - diversified into new species - in two pulses. The first was about 30 million years before the dinosaurs died out.
Flowering plants radiated then too, possibly aiding the diversification of mammals by giving them new things to eat.
The second pulse was not until 10 million years after the end of the dinosaurs, around the time of a sudden increase in the temperature of the planet - known as the Cenozoic thermal maximum.
Around 55 million years ago, the mid-latitude mean annual temperatures went up by up to 5 deg C over about 20,000 years. “It was a much bigger increase in temperature than we’ve had so far, but within the range that we might get within the next century (never mind 20,000 years),” said Prof Andy Purvis from Imperial College London.
“Our research has shown that for the first 10 or 15 million years after the dinosaurs were wiped out, present day mammals kept a very low profile, while these other types of mammals were running the show.
It looks like a later bout of 'global warming’ may have kick-started today’s diversity - not the death of the dinosaurs.
“This discovery rewrites our understanding of how we came to evolve on this planet, and the study as a whole gives a much clearer picture than ever before as to our place in nature.”
Dr Kate Jones from the Zoological Society of London added: “Not only does this research show that the extinction of the dinosaurs did not cause the evolution of modern-day mammals, it also provides us with a wealth of other information. Vitally, scientists will be able to use the research to look into the future and identify species that will be at risk of extinction. The benefit to global conservation will be incalculable.”
Another team member, University of Georgia Institute of Ecology Director John Gittleman, said that the fossil record, by its very nature, is patchy.
To fill in the gaps, molecular evolutionary trees are constructed by comparing the DNA of species. Because genetic changes occur at a relatively constant rate, like the ticking of a clock, scientists can estimate the time the species diverged from their common ancestor by counting the number of mutations.
Using radioisotope dating, scientists can also estimate divergence times from the fossil record.
Gittleman’s colleagues combined more than 2,500 partial trees constructed using molecular data and the fossil record to create the first virtually complete mammalian tree.
“The end result is that the mammals we know today are actually quite old and just flew under the radar of everything that was out there, be they dinosaurs or now other 'archaic’ mammals as well, for a lot longer than most people suspected,” said Olaf Bininda-Emonds, lead author of the study and now on a Heisenberg Scholarship at the University of Jena, Germany.
“This is just the first of many insights, if not surprises, about mammalian evolution to be mined with the help of the tree.” |
In this assignment you will outline a persuasive speech. The speech and self-review are due next week.
- Select either Topic A or Topic B for your persuasive speech.
- Topic A: Should Children Under the Age of 10 Own Cell Phones?
- Topic B: Should Self-Driving Cars Be Legal?
- Create an outline or speaking notes in Microsoft Word.
- Download the outline template provided, which gives guidance for the structure of an outline.
- Focus your speech on 2–3 main points so you’ll stay within the 4-minute time limit.
- Submit the completed outline in a Microsoft Word document.
Your assignment will be graded according to the following criteria:
- The outline is complete and on topic.
- The outline provides solid flow for the speech.
- The outline is clear and free from spelling and grammar issues.
- You must incorporate at least two quality resources.
This course requires the use of Strayer Writing Standards. For assistance and information, please refer to the Strayer Writing Standards link in the left-hand menu of your course. Check with your professor for any additional instructions.
The specific course learning outcome associated with this assignment is:
- Outline a speech using a structured flow and proper spelling and grammar.
Week 6 Assignment Overview:
- Additional Outline Template (with citations) |
What Are Hernias?
Hernias happen when part of an organ or tissue in the body (such as a loop of intestine) pushes through an opening or weak spot in a muscle wall. It can push into a space where it doesn't belong. This causes a bulge or lump.
How Do Hernias Happen?
Hernias are fairly common in kids. Babies, especially preemies, can be born with them.
Some babies are born with small openings inside the body that will close at some point. Nearby tissues can squeeze into such openings and become hernias. Unlike hernias seen in adults, these areas are not always considered a weakness in the muscle wall, but a normal area that has not yet closed.
Sometimes tissues can squeeze through muscle wall openings that are only meant for arteries or other tissues. In other cases, strains or injuries create a weak spot in the muscle wall. Then, part of a nearby organ can push into the weak spot so that it bulges and becomes a hernia.
Hernia repair is one of the most common surgeries kids have. It's important to know the signs of a hernia so your child gets the right medical care.
What Are the Types of Hernias?
There are different types of hernias, and each needs different levels of medical care.
Most hernias in kids are either inguinal hernias in the groin area or umbilical hernias in the belly-button area.
An inguinal hernia happens when part of the intestines pushes through an opening in the lower part of the abdomen called the inguinal (IN-gwuh-nul) canal. Instead of closing tightly, the canal leaves a space for the intestines to slide into.
Doctors fix inguinal hernias with surgery.
An umbilical hernia happens when part of a child's intestines bulges through the abdominal wall inside the belly button. It shows up as a bump under the belly button. The hernia isn't painful and most don't cause any problems.
Most umbilical (um-BILL-ih-kul) hernias close up on their own by the time the child turns 4 or 5. If a hernia doesn't go away by then or causes problems, doctors may recommend surgery.
An epigastric hernia is when part of the intestines pushes through the abdominal muscles between the belly button and the chest.
Many epigastric (eh-pih-GAS-trik) hernias are small, cause no symptoms, and don't need treatment. Larger ones that do cause symptoms won't heal on their own, but surgery can fix the problem.
Other types of hernias — like hiatal hernias, femoral hernias, and incisional hernias — usually happen in older people, not kids.
- Umbilical Hernias
- Inguinal Hernias
- Quick Summary: Treating Indirect Inguinal Hernia
- Epigastric Hernias
- When Your Baby's Born Premature
|
The Wicklow Mountains are famous for their granite, which has been quarried for centuries. This section explains how a rock that only forms underground is now exposed throughout the mountains.
Formation of the Wicklow Mountains
500 million years ago Ireland as we know it didn’t exist. The land that would form Ireland lay deep beneath a tropical sea, known as the Iapetus Ocean, a predecessor to the Atlantic Ocean. Landslides and volcanic eruptions deposited mud and sand along the ocean floor, building up huge volumes of material. Immense weight was exerted on the lower levels of these deposits, compressing them into rock and forming mudstones and sandstones (types of sedimentary rock).
420 million years ago, when the continental plates of North America and Europe collided, the land beneath the sea buckled upward. The land rose out of the sea, forming the Wicklow Mountains. During this collision, great heat and stress were exerted upon the mudstones and sandstones, baking them into new rocks: slates, schists and quartzites (types of metamorphic rock). The molten layer of the earth’s crust (magma) was forced upward, cooling slowly beneath the metamorphic rock to form granite (a type of igneous rock). This slow cooling gives granite its typical large crystals, which are visible to the naked eye.
Millennia of tropical storms and ice sheets have eroded away much of the slates, schists and quartzites which covered over the granite. The granite that was once deeply buried became exposed. The erosive forces of ice, wind and water shaped the new mountains over time into the rounded hills we have today. Lugnaquilla and Scarr Mountain are among the few places where some of the original schist remains on top. Schists still surround the central granite core of the mountains on all sides. The geological divide between granite and schist is clearly visible in places like Glendalough where coarse granite boulder scree suddenly gives way to smoother shiny schist.
The Wicklow Mountains are part of a larger granitic mass, which extends all the way from Dun Laoghaire to New Ross, forming the largest mass of granite (batholith) in northwestern Europe.
Formation of Minerals
Mining for lead and other minerals such as zinc and silver was a major industry in the Wicklow Mountains. The formation of these minerals is tied in with the process of granite and schist formation.
The point where the schists and granite were forming side by side would have been a superheated soup of minerals. Quartz rock formed between the granite and schist and minerals gathered in veins within the quartz. In the Wicklow Mountains lead and zinc were the main ores deposited. |
By Sara Pineda, El Centro de Amistad Program Director

After exposure to the news and to world events – including natural disasters and violent tragedies that impact our lives – it’s not uncommon for children and adults to experience some fear, anxiety, or depression. For some parents it may be difficult to talk to their children to address some of the things that are happening around us. For a child who is exposed to a traumatic event such as the one in this tragic story, we may wonder what the potential effects of trauma are on children. (Read more here: A community aims for healing after boy witnesses mother’s fatal stabbing)

What is Trauma?

Trauma is an emotional response to an intense situation that threatens injury, death, or the physical well-being of self or others, and causes terror, horror, or helplessness at the time of the event. Children’s responses to trauma may be influenced by several factors, such as their stage of development, ethnic and cultural factors, any pre-existing child and family challenges, and previous exposure to trauma.

The majority of children and youth express distress after experiencing a traumatic life event. They may display changes in behavior as they try to cope with what they have witnessed or experienced. Some examples of these behaviors are: separation anxiety, new fears, anger, irritability, sadness, physical symptoms like stomach aches, headaches and nausea, changes in sleeping patterns, reduced concentration, and academic decline.

As a therapist at El Centro de Amistad in the San Fernando Valley, I have seen the healing that can occur over time. Children demonstrate resilience. What helps a child’s resiliency? Family, cultural, and community strengths promote a child’s path to a better future. We have seen how social, community, and governmental support networks are very important in recovery and resilience as well. Reaching out to community-based organizations that provide mental health services is critical to getting children the help they need.

Research studies have also shown that some children will return to previous levels of functioning while others may have greater barriers to coping and recovery. Some of these barriers are ongoing life stressors, community stress, prior traumas, prior mental health issues, and ongoing safety issues. When reactions continue to impede the daily functioning of the child, it is important to seek clinical attention.

How can parents, caregivers, and other adults help children cope with a traumatic event?

Here are things that adults can do to help a child cope with a traumatic situation:

– Provide information about trauma and explain common responses to trauma, which can be helpful to the child, but also to the adults around the child. It is important that adults understand the different responses based on the child’s level of development. When adults and community members (teachers, coaches, and clergy) understand responses to trauma, they can better support the child.
– Assist the child by providing reassurance and comfort.
– Answer questions in language a child understands.
– Help children expand their “feelings” vocabulary.
– Re-establish routines to bring back some “normalcy” to a child’s life (bed times, returning to school, and resuming leisure/social activities).
– Provide tangible items, such as a picture of a loved one, a stuffed animal, or a favorite item, that the child can have with them when they struggle with coping.
– Work on helping children solve problems they face due to the trauma.
– Allow a child to continue to have contact with other children or adults the child feels helps them. Children tend to gravitate towards people who provide stability and support.
– Set boundaries and limits with consistency and patience.
– Show love and affection.

When a child is struggling in their efforts to cope, it is essential that clinical attention be considered. Mental health professionals like those at El Centro de Amistad in the San Fernando Valley have the ability to provide trauma-informed treatment and intervention, and methods to target the trauma symptoms the child is struggling with. |
You might know that Excel stores its date information as numbers starting at 1 on January 1st 1900. This makes manipulating date differences by subtraction very easy, shown below, where the difference between two dates is calculated.
And the unformatted values and formula are shown here.
Excel displays the results of date calculations as days by default and this works well when the numbers are relatively small. It’s easy to see that 7 days is a week and 10 days is about a week and a half. You might, however, think differently when the values get bigger or there are a lot of numbers to scan. Now you might need to start thinking in larger time units to make sense of them. Can you quickly calculate in your head how many weeks 111 days is? Or 473 days? You can? Good for you, but please keep reading.
I run into this problem quite frequently when creating reports of baseline vs. actual dates from project schedules and suchlike. I also like my audience to quickly understand what I’m telling them, so I wrote a function to display date differences more clearly. Paste the code below into a module in the VB IDE.
Function FormatYearWeekDay(theNumber As Integer) As String
    'Formats a date difference value as years, weeks and days
    'Assumes 365 days in a year
    Dim Yr As String
    Dim Wk As String
    Dim Dy As String
    Dim Neg As Boolean

    Yr = ""
    Wk = ""
    Dy = ""

    'Test if the number is zero
    If theNumber = 0 Then
        FormatYearWeekDay = "0d"
        Exit Function
    End If

    'Test if the number is positive or negative
    If theNumber < 0 Then
        Neg = True
    Else
        Neg = False
    End If

    'Setting the absolute value means we don't have to worry
    'about negative values until later
    theNumber = Abs(theNumber)

    'Deal with years
    If theNumber >= 365 Then
        Yr = Int(theNumber / 365) & "y "
        theNumber = theNumber Mod 365
    End If

    'Deal with weeks
    If theNumber >= 7 Then
        Wk = Int(theNumber / 7) & "w "
        theNumber = theNumber Mod 7
    End If

    'Deal with days
    Dy = theNumber & "d"

    'Set the function's return value
    If Neg Then
        FormatYearWeekDay = "-" & Yr & Wk & Dy
    Else
        FormatYearWeekDay = Yr & Wk & Dy
    End If
End Function
How it works
- Dimension some variables and set them to empty values
- Test if the number is zero, and if so set the return value to “0d” and exit the function – nothing more to do
- Set the Neg variable to TRUE or FALSE for later so we can use the absolute value of the number to avoid having to write additional code for negative numbers
- Get the absolute value of the number
- Test if the number is >= 365, and if so divide by 365 to give the number of years and store the result in a variable
- Add a ‘y’ indicator
- Use modular arithmetic to get the remaining days without the years
- Test if the number is >= 7, and if so divide by 7 to give the number of weeks and store the result in a variable
- Add a ‘w’ indicator
- Use modular arithmetic to get the remaining days without the weeks
- The remaining value must be the number of days left so store it in another variable
- Add a ‘d’ indicator
- Test if the original number was negative and if so replace the negative sign
This is what you get
I wondered if Excel had anything in-built to create the same functionality and I came across the DateDif function. I learned that this function is no longer supported; certainly not in Excel 2010 and I can’t vouch for earlier or later versions. You can see how it works here.
I also found the VBA function DateDiff (note the double ‘f’) which looks like it might be more useful. You can see how this one works here.
Things you can do
- How would you amend the function to cater for a 5 day working week?
- Explain the apparent anomaly in the second and third pairs of columns on rows 16 and 17 in the above screen shot
- Consider whether leap years would make any difference to the calculations
- Consider whether adding a category for months would be useful
- Examine how the ‘theNumber’ parameter in the function is altered under program control to achieve clear and concise coding
- Experiment with the DateDif and DateDiff functions in the example download and try to replicate and/or improve on my function’s results
- Presentation of data to your audience is fundamentally important
- Dates can be manipulated using simple subtraction
- There is often in-built functionality in Excel, but does it always do what we want?
Over to you
Please leave a comment if you can think of any other uses for this functionality or improvements to the code. I am not a professional programmer and only do it to save me time in the long run. Or, occasionally, because I have no other choice. One day, if I can find the worksheets, I might tell you about the time I successfully used Monte Carlo simulation to create project costing estimates. |
From its vantage point in geostationary orbit, NASA’s GOLD mission – short for Global-scale Observations of the Limb and Disk – has given scientists a new view of dynamics in Earth’s upper atmosphere. Together, three research papers show different ways the upper atmosphere changes unexpectedly, even during relatively mild conditions that aren’t typically thought to trigger such events.
GOLD studies both neutral particles and those that have electric charge – collectively called the ionosphere – which, unlike neutral particles, are guided by electric and magnetic fields. At night, the ionosphere typically features twin bands of dense charged particles. But GOLD’s data revealed previously unseen structures in the nighttime ionosphere’s electrons, described in research published in the Journal of Geophysical Research: Space Physics on Aug. 24, 2020.
While comparing GOLD’s data to maps created with ground-based sensors, scientists spotted a third dense pocket of electrons, in addition to the typical two electron bands near the magnetic equator. Reviewing GOLD data from throughout the mission, they found that the peak appeared several times in October and November of multiple years, suggesting that it might be a seasonal feature.
Though scientists don’t know what exactly creates this extra pocket of dense electrons, it appeared during a period of relatively mild space weather conditions. This was a surprise to scientists, given that big, unpredictable changes in the ionosphere are usually tied to higher levels of space weather activity.
GOLD also saw large drops in the upper atmosphere’s oxygen-to-nitrogen ratio – a measurement typically linked to the electron changes that can cause GPS and radio signal disturbances.
This event was notable to scientists not for what happened, but when: The dips that GOLD saw happened during a relatively calm period in terms of space weather, even though scientists have long associated these events with intense space weather storms. The research was published on Sept. 9, 2020, in Geophysical Research Letters.
During a geomagnetic storm – space weather conditions that disturb Earth’s magnetosphere on a global scale – gases in the upper atmosphere at high latitudes can become heated. As a result, nitrogen-rich air from lower altitudes begins to rise and flow towards the poles. This also creates a wind towards the equator that carries this nitrogen-rich air down towards lower latitudes. Higher nitrogen in the upper atmosphere is linked to drops in electron density in the ionosphere, changing its electrical properties and potentially interfering with signals passing through the region. GOLD observed this effect several times during relatively calm space weather conditions during the day – outside of the disturbed conditions when scientists would normally expect this to happen.
These changes during seemingly calm conditions may point to a space weather system that’s more complicated than previously thought, responding to mild space environment conditions in bigger ways.
“The situation is more complex – the ionosphere is more structured and dynamic than we could have seen before,” said Dr. Sarah Jones, mission scientist for GOLD at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.
Indeed, GOLD’s observations of changing atmospheric composition are already informing scientists’ computer models of these processes. A paper published in Geophysical Research Letters on May 20, 2021, uses GOLD’s data as a reference to show how changes near the poles can influence the ionosphere’s conditions in the mid-latitudes, even during periods of calm space weather activity. GOLD’s broad, two-dimensional view was critical to the finding.
“When you look in two dimensions, a lot of things that look mysterious from one data point become very clear,” said Dr. Alan Burns, a researcher at the High Altitude Observatory in Boulder, Colorado, who worked on the studies.
By Sarah Frazier
NASA’s Goddard Space Flight Center, Greenbelt, Md. |
Written by Simone Haerri
What do you need to know?
The common honey bee (Apis mellifera) has experienced high mortality rates in recent years. This is a major concern for the beekeeping industry and for agricultural crops that rely on honey bees as their main pollinator. Several key factors have been linked to colony mortality, such as exposure to pesticides, viral infections and the ectoparasitic mite Varroa destructor. Whereas we know how each one of those stressors affects the honey bee individually, much less is known about how combinations of the different stressors affect the bees.
In this paper, researchers assessed how the combined effect of exposure to neonicotinoid pesticides and parasitism by Varroa mites affects honey bees during development from the larval stage to adulthood. They found that all health-related honey bee parameters were negatively affected by the presence of the Varroa mites, with some aspects showing a compounded negative effect when both stressors were present at the same time. They also found that the combined effect of Varroa mites and pesticide exposure during the larval stage has a long-term effect, reducing the weight of newly emerged bees and altering the expression of immune- and metabolism-related genes, which in turn might affect the bees’ ability to fight off infections, repair damaged tissue, and metabolize toxins.
Why is this research important?
The honeybee (Apis mellifera) is one of the most important pollinators for agricultural crops, and it also has a long history of management. Recently, honey bees have been experiencing a high level of mortality. Several important factors are linked to this drastic decrease in numbers: the presence of the parasite Varroa destructor, viral infections such as deformed wing virus (DWV), and exposure to pesticides like neonicotinoids.
Varroa mites are tiny arachnids that live on the outside of honey bees and feed on the bees’ fat tissue and hemolymph (blood); if present in high enough numbers, they can kill entire honey bee colonies. They can also transmit viruses, such as the deformed wing virus. Honeybees can, to a certain degree, protect themselves against the mites, for example by grooming themselves.
Whereas each of these stressors alone has negative effects on honey bee health, in combination they can cause even more problems. However, research so far has reported contradictory results when it comes to exposure to multiple stressors. Understanding the complex interplay among the various stressors will help in taking measures to reduce honey bee mortality.
What did the researchers do?
The honeybee larvae used for the experiment were originally taken from colonies kept at our Honey Bee Research Centre (https://honeybee.uoguelph.ca). The larvae were then assigned to different concentrations of a neonicotinoid insecticide, clothianidin, with each concentration replicated twice. To test how exposure to pesticides in combination with Varroa mite parasitism affects the honeybees, Varroa mites were added to half of the pesticide groups. This resulted in eight different treatment groups:
- Group 1: No exposure to pesticides – no Varroa mites
- Group 2: No exposure to pesticides – Varroa mites present
- Group 3, 4 & 5: Exposure to low, medium and high concentration of Clothianidin – each with no Varroa mites
- Group 6, 7 & 8: Exposure to low, medium and high concentration of Clothianidin – each with Varroa mites present
Clothianidin is a pesticide belonging to the neonicotinoid family. The pesticide concentrations used for the experiment mimicked the realistic exposure a honey bee larva would experience during its lifetime by ingesting pollen from plants that were grown from clothianidin-treated seeds. Once the bees completed their development, the following parameters were measured:
- Proportion of emerged bees to test for direct mortality
- Weight of the emerged bee. Weight can be an indicator of bee health
- DWV levels. DWV (deformed wing virus) is a viral infection of honeybees that can be transmitted by the mites
- Number of haemocytes per bee was estimated. Haemocytes are cells used to heal a wound and/or attack a pathogen. Their number is often used as an indicator of the strength of the immune system
- Total gene expression, to gain a better understanding of the effect of the stressors on bee health, based on the biological pathways associated with the affected genes.
What did the researchers find?
The researchers found that different parameters measured reacted quite differently to the applied treatments. The number of larvae that successfully developed into adult honey bees (proportion of bees emerged) and the number of haemocytes were decreased when the Varroa mites were present, but were not affected directly by the exposure to Clothianidin. The same was found true for the “Deformed wing virus” levels that were higher in the presence of the Varroa mites but unaffected by exposure to pesticides. On the other hand, the weight of the emerged bee was most dramatically reduced when exposed to both stressors, the pesticide and the Varroa mites. This indicates that the combined effect of the two stressors can magnify the effect of each individual stressor and significantly reduce honey bee health.
The study found that clothianidin alone affected genes related to metabolism, while Varroa affected genes associated with immune responses. When the stressors were combined, the number of up- and down-regulated genes was higher than for either stressor alone, showing that the interaction between stressors could affect more aspects of bee health.
About the researchers
Dr. Nuria Morfin is a post-doctoral researcher at the School of Environmental Sciences. Dr. Paul H. Goodwin and Dr. Ernesto Guzman-Novoa are faculty members at the School of Environmental Sciences, University of Guelph, ON, Canada.
Honey Bee, Apis mellifera, pollinator, pesticides, neonicotinoids, Varroa mite, parasites, overwinter mortality, honeybee, pollination, beekeeping, deformed wing virus
Morfin, N., Goodwin, P. H., & Guzman-Novoa, E. (2020). Interaction of Varroa destructor and Sublethal Clothianidin Doses during the Larval Stage on Subsequent Adult Honey Bee (Apis mellifera L.) Health, Cellular Immunity, Deformed Wing Virus Levels and Differential Gene Expression. Microorganisms, 8(6), 858. |
When you get a new computer, be it a desktop computer or a laptop, everything runs at lightning speed. All smooth, all fast. After a few months of usage, you observe that its performance is starting to slow. What could have caused it?
Several factors can cause slow computer performance. What your computer is made of and the maintenance you do are major factors. But you have to remember that a computer is a machine: it is made of parts that can eventually wear out. Infiltrating viruses and malware can also cause adverse changes in the system and in your files.
Hard Disks. These store all the files and data on your computer, all of them, and they affect computer efficiency through their read/write speeds.
Your hard disks store all of the data on your computer, from program files to documents or photos. The space available on the hard drive doesn’t have a very large effect on the computer’s performance, but the read and write speeds do. A hard disk that spins at 7200 RPM will be able to process data much more quickly than a hard disk that spins at 5400 RPM. Solid-state drives are a type of flash memory that doesn’t rely on moving parts, making them faster than standard spinning hard disks.
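As a rough illustration of why spindle speed matters, the sketch below (Python; deliberately simplified, as it ignores seek time, caching and transfer rate) computes the average rotational latency at the two spindle speeds mentioned above, i.e. the time the drive spends waiting for the right sector to rotate under the read/write head.

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: half of one full revolution, in milliseconds."""
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2.0) * 1000.0

for rpm in (5400, 7200):
    print(f"{rpm} RPM -> ~{avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")

# Typical output:
#   5400 RPM -> ~5.56 ms average rotational latency
#   7200 RPM -> ~4.17 ms average rotational latency
# A solid-state drive has no rotational latency at all, which is one reason it feels faster.
```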
A computer’s processor determines whether your computer can run fast or not.
The processor determines how fast your machine can run. Faster processors are better for applications like gaming and video editing, because they can quickly get data from your computer hardware to your monitor. Processors with more than one core will also improve a computer’s performance because they can perform multiple tasks at the same time.
The amount and type of memory you have installed in your computer determines how much data can be processed at once. Faster memory and a larger amount of RAM will make your computer perform more quickly. Attempting to use an application that overloads the RAM will give you significantly slower speeds.
Video Processors. Computers differ in their video processing hardware, in particular in where it is located, and this affects graphics quality.
Some computers have the video processing hardware on the motherboard, while others have a standalone video card. Generally, computers with a standalone graphical processing unit (GPU) are better at displaying complex images and video than computers with a GPU on the motherboard. Your video card partially determines which software you can run and how smoothly it works.
Last but not least, maintenance. Every computer needs it, and you shouldn’t go for long periods without updating your computer and programs. Bugs, errors, viruses, temporary files – all of these can slow down your computer. But all you have to do is perform regular updates and your computer will be good to go.
Even a computer made of the latest, best components still requires maintenance. It’s important to install updates, keep the computer free of malware and perform regular upkeep. Updates from the operating system manufacturer and any hardware or software updates can help the computer run better or fix problems. Malware slows your computer down and can expose you to identity theft. An anti-virus program and regular scans will help you avoid those issues. Make sure to defragment your computer, clean up old files and shut it down regularly. Doing these things will help keep your system working quickly.
All components of a computer are crucial, including its operating system and programs. Keeping them updated, repaired (when needed), maintained, and cleaned will boost computer performance. |
1. What is Obsessive-Compulsive Disorder?
People with Obsessive-Compulsive Disorder (OCD) suffer from obsessions and compulsions. Obsessions are repetitive thoughts or images that the person finds intrusive and inappropriate, and that increase levels of anxiety. Compulsions are repetitive rituals (thoughts or actions) designed to counter obsessions and lower anxiety. For example, a person with obsessions about contamination may wash their hands repetitively, while a person with obsessions about possible harm may check things repeatedly.
While washing and checking are easily recognised, many people have more abstract symptoms such as having to pray over and over to get rid of blasphemous thoughts, or suffering from intrusive sexual thoughts, or having to hoard excessively. In addition to obsessions and compulsions, people with OCD may show avoidance behaviours; for example, the person with contamination concerns may simply stay indoors rather than risk going outdoors. Other people with OCD may take an extraordinarily long time to complete routine daily activities – this is a form of OCD known as ‘obsessional slowness’.
In the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), OCD falls under the category of anxiety disorders. OCD often presents with other anxiety and mood disorders; in fact, a cross-national study found that the lifetime prevalence of Major Depressive Disorder among OCD patients ranges from 13 to 60% across seven countries. OCD mostly predates depression, suggesting that depressive symptoms usually occur in response to the distress and functional impairment associated with OCD. Many with this condition will seek treatment for depression or other secondary phenomena of OCD and will not reveal the sources of their problems unless specifically asked. Most people with OCD have good insight into their symptoms; they know that their concerns are excessive, even though they cannot help following through on compulsions to set their minds at ease.
2. What Causes Obsessive-Compulsive Disorder?
No one knows exactly what causes OCD. Although many processes remain unclear, there is increasing evidence that the cause of OCD lies in problems with the circuitry, structure and neurochemistry of the brain. Recent studies have demonstrated that people with OCD have different patterns of brain activity from normal individuals and from those with other psychiatric disorders, adding further evidence to the theory of a biological cause for OCD. Recent research findings have also shown that patients with OCD respond to a particular group of drugs, called the serotonin reuptake inhibitors. The neurotransmitter serotonin, a naturally occurring compound in the brain involved in the transmission of nerve impulses, is thought to be a key factor in this disorder.
In females, a common age of onset is at the time of pregnancy or giving birth; hormonal interactions with brain chemicals are likely to play an important role in these cases. In addition, one subtype of OCD begins after certain infections, typically after a Streptococcal throat infection. This condition is termed PANDAS or Paediatric Autoimmune Neuropsychiatric Disorder Associated with Streptococcus. In such cases the body’s reaction to the Streptococcus bacterium mistakenly attacks a part of the brain called the basal ganglia. We know that the basal ganglia are important in humans with OCD because of several different kinds of studies. Amongst the first of these were studies of neurology patients with various kinds of basal ganglia lesions, many of whom developed OCD symptoms. The exact genes involved in OCD are not yet known, although they are likely to influence brain chemistry (and perhaps the brain chemistry of the basal ganglia in particular) in some way. This is a rapidly advancing area of research, and several candidate genes for OCD have already been proposed (e.g. catechol-O-methyl transferase).
3. Who Gets Obsessive-Compulsive Disorder?
OCD affects both males and females of all ages and ethnic groups. OCD commonly begins in adolescence or childhood, perhaps particularly in males. In females, another common age of onset is at the time of pregnancy or giving birth; hormonal interactions with brain chemicals are likely to play an important role in such cases.
Once regarded as a rare psychiatric disorder unresponsive to treatment, OCD is now recognised to be a common problem affecting some 2 - 3% of the general population. OCD appears to occur at similar rates throughout the world. It usually lasts for many years, during which time a patient’s symptoms may vary in severity and focus.
One subtype of OCD begins after certain infections; typically after a Streptococcal throat infection. In such cases the body’s reaction to the Streptococcal bacterium mistakenly attacks a part of the brain called the basal ganglia, resulting in sudden onset of OCD symptoms and / or tics.
On occasion, OCD is seen in people with various other neurological conditions, typically those that involve basal ganglia lesions. The basal ganglia are a group of structures that lie towards the centre of the brain, and that play a central role in learning and executing sets of motor sequences outside of awareness (e.g. initially learning to ride a bicycle requires conscious awareness, later these motor programs are carried out automatically).
Certain genes appear to play a role in causing OCD, and the condition is therefore somewhat more common in relatives of people with OCD or Tourette’s disorder than in the general population. The exact genes involved in OCD are not yet known, although they are likely to influence brain chemistry (and perhaps the brain chemistry of the basal ganglia in particular) in some way. |
You collect coins, and you’re on the trail of a legend: According to rumor, a manufacturing defect led to one in every thousand 1939 nickels replacing Thomas Jefferson with a Sasquatch (also known as Bigfoot). But all of these weathered nickels now look about the same. How can you tell that you have found your elusive quarry?
Finding something new in particle physics is much the same. We frequently know roughly what a new particle might look like, but this “signature” is often similar to that of other particles. One of the best ways to aid our search is to paint extremely accurate pictures of known particles and then look for exceptions to that rule.
Heavy particles like dark matter candidates, the Higgs boson or particles predicted by supersymmetry share a common signature: They may decay into particles including a “vector boson,” V (a type of particle that transmits the weak force), and a “charmed meson,” D* (a particle made of two quarks, one of which is a charm quark).
CDF physicists performed a search for these V+D* events — the normal nickels — to make certain that our picture of them is accurate. Models of events such as these are known to be accurate at high energies; however, at lower energies, subtleties in the strong force that binds together fundamental particles become more important, and the models may break down.
This study was the first to test V+D* production at lower energies in hadron collisions. The V particle is either the W boson or the Z boson. The full Tevatron Run II data sample was used (9.7 inverse femtobarns).
The figure shows the data when the V particle is the W particle. The experiment measured 634 ± 39 such events. The W particle is found by looking for an energetic lepton (a muon or an electron) and missing transverse energy (neutrino). The D* particle is observed from its decay into the D0 particle and a low-energy pion. The D0 decays into a negative kaon and a positive pion.
Several sources of systematic uncertainty cancel in calculating the ratio of the decay probabilities for these two processes. We found that V+D* production behaves just as predicted. Providing such a stringent test of these models widens the net that we can cast in future studies. This, in turn, betters our chances of fishing out something new and exciting, perhaps previously undiscovered particles or particle decays.
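To make the last point concrete, here is a small numerical sketch in Python (the numbers are invented for illustration and are not the CDF measurement) showing why a systematic uncertainty that scales both yields by the same factor cancels in their ratio, while uncorrelated statistical uncertainties combine in quadrature.

```python
import math

# Invented yields and statistical uncertainties for two processes A and B.
n_a, n_b = 500.0, 1000.0
sigma_a, sigma_b = 30.0, 50.0

# A multiplicative systematic shared by both processes (e.g. a common
# efficiency) scales numerator and denominator alike, so the ratio is unchanged.
for common_scale in (0.9, 1.0, 1.1):
    ratio = (n_a * common_scale) / (n_b * common_scale)
    print(f"shared scale {common_scale:.1f} -> ratio {ratio:.3f}")  # always 0.500

# Uncorrelated (statistical) uncertainties do not cancel; for R = A/B the
# relative uncertainties add in quadrature.
r = n_a / n_b
rel_unc = math.sqrt((sigma_a / n_a) ** 2 + (sigma_b / n_b) ** 2)
print(f"R = {r:.3f} +/- {r * rel_unc:.3f}")
```

The real analysis is of course far more involved, but this is the sense in which shared systematic uncertainties cancel when a ratio of decay probabilities is measured. |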
Mid-summer to mid-fall is a time when our Santa Cruz office and the rest of the Monterey Bay witnesses the arrival of thousands of international visitors. These aren’t sight-seeing tourists or traveling scientists, but rather a type of seabird called the Sooty Shearwater (Ardenna grisea). Sooties are deceptively unassuming birds at first glance – something like a small, drab seagull – but two things make them truly remarkable: distance and numbers. Every year these long-distance athletes undertake a round-trip flight of a staggering 40,000 miles – something that happens when your breeding ground is in one hemisphere and your feeding ground is in another. The birds traverse the entire Pacific Ocean in a figure-eight pattern, starting from their nesting areas near New Zealand, then heading eastward, sometimes as far as Chile, before cutting northwest towards Japan, Alaska, or California to forage, then looping back to the Southern Hemisphere. It represents the longest animal migration ever recorded electronically by scientists.
What makes the California coast a desirable destination for these winged wanderers? The rich waters of the Pacific Ocean are important feeding areas not just for young salmon, but for a host of other animals as well. In particular, the process of upwelling draws nutrient-packed waters of the deep ocean up to the surface, triggering an explosion of food in the California Current that travels southward along the coast. Supported by bountiful blooms of plankton, the ocean food web includes an abundance of krill, squid, jellies, and fish – all of which are shearwater food. Hundreds of thousands of Sooty Shearwaters converge on Monterey Bay each year to take part in this feast, with some estimates as high as a million birds. They can appear like dense, black clouds moving across the water, like smoke billowing over the ocean. Up close, it’s a rush of wings and splashes as the birds plunge beneath the waves as deep as 200 feet, “flying” underwater in pursuit of prey.
Migrations are an awe-inspiring phenomenon in any animal, especially when coupled with sheer abundance. Despite their breathtaking numbers, however, Sooty Shearwaters are actually declining, and in 2018 were placed on the IUCN Red List as “Near Threatened.” Like any migratory species, these birds are vulnerable to threats they might encounter across their entire range, whether invasive species in their nesting areas or declines in prey species due to climate change or overharvest in their feeding grounds. Events like World Migratory Bird Day can help draw attention to the challenges facing migratory birds – and we hope can also help build appreciation for the less visible but no less remarkable migrations of fish (which now have a celebratory migration day of their own). Whether winged or watery, these examples remind us that migrations are worthy of marveling at, but can also be fragile, and are not something to take for granted.
To experience the thrill of the Sooty Shearwater bonanza, watch this video to immerse yourself in a particularly dense flock! |
Speculations about the Effects of Fire and Lava Flows on Human Evolution
Recent research argues that an association with fire, stretching back millions of years, played a central role in human evolution, resulting in many modern human adaptations. Others argue that hominin evolution was driven by the roughness of topographic features that resulted from tectonic activity in the African Rift valley. I combine these hypotheses to propose that, for millions of years, active lava flows in the African Rift provided consistent but isolated sources of fire, providing very specific adaptive pressures and opportunities to small isolated groups of hominins. This allowed these groups of early hominins to develop many fire-specific adaptations such as bipedalism, smaller teeth and mouths, shorter intestines, larger brains, and perhaps a host of social adaptations. By about 1.8 million years ago, Homo erectus emerged as a fire-adapted species and mastered the technology necessary to make fire itself. This technology allowed them to move into the rest of the world, taking a new kind of fire with them that would change ecosystems everywhere they went. This hypothesis is supported by recent geologic work that describes a large lava flow occurring in the region of the Olduvai Gorge during the 200,000-year time period in which we believe Homo erectus emerged in the area. |
♦ Queen’s English (or King’s English) is standard or correct grammatical English spoken or written in the United Kingdom. It may be spoken in any accent. It is used for many forms of written text including newspapers, business letters, essays, text books, fiction books, CVs, and government documents.
When the British monarch is a queen, standard English is Queen’s English; when the British monarch is a king, it is King’s English.
♦ broken English – broken English is English that has incomplete grammar and vocabulary and incorrect pronunciation. This expression usually refers to English spoken by non-native speakers.
1. The short note was written in broken English.
2. In broken English he told the taxi driver where he wanted to go.
3. She only speaks broken English and will need an interpreter for her hospital appointment.
♦ to speak the same language – is to have the same ideas, beliefs and opinions as someone else.
1. We speak the same language when it comes to religion.
2. The opposing political groups don’t speak the same language about the environment.
♦ to pick up a language – to pick up a language is to learn it easily or casually, usually by listening to native speakers and practising it without formal lessons.
1. He picked up Mandarin by listening to his work colleagues.
2. I picked up lots of new English words when I was on holiday in the UK.
3. Children pick up new languages very easily.
♦ Pidgin English – a pidgin starts as a makeshift contact language based on two or more languages, commonly used for communication by and among traders with different native languages. A pidgin has a small vocabulary and simplified grammar. As a pidgin becomes a more complex and stable community language it develops into a creole.
There are many forms of Pidgin English e.g. Hawaii Pidgin English (the Pidgin English spoken in Hawaii) and Nigerian Pidgin English.
♦ a dead language – a dead language is a language that is no longer learned as a native language, but might still be used by scholars and experts.
Latin and ancient Greek are dead languages.
♦ to murder a language – is to speak a language very badly, making many mistakes with grammar, vocabulary and pronunciation.
It’s said you must murder a language before mastering it.
♦ plain English – clear, simple and easily understood spoken or written English.
I wish this contract was written in plain English!
♦ It’s all Greek to me – the expression it’s all Greek to me refers to something that is impossible to understand.
1. Question: Do you understand this mathematical equation?
Answer: No, it’s all Greek to me!
2. This new work contract is all Greek to me.
Is there a similar expression to it’s all Greek to me in your native language?
Can you think of any other language idioms?
Excelsior College Mathematics Future of Cars Graphing Questions
- February 06, 2022/ Homework Tutors
I’m working on an algebra multi-part question and need an explanation and answer to help me learn.
The problem you must solve is:
1. Refer to this report about the future of cars
2. In your own words, and in complete sentences, answer the following questions:
a. How many states considered or are considering legislation to allow self-driving cars?
b. Approximately how many traffic fatalities were there in total in 1973? In 2010? How many fatalities were there per 100 million miles traveled in each of those years?
c. In which year were motor vehicle fatalities the highest?
d. In which year(s) were there approximately 4 fatalities for each 100 million miles traveled?
e. Approximately how many fatalities were there in the 90’s? How many fatalities per 100 million miles traveled?
3. Describe anything that makes the graphs hard to read. What would you do differently?
4. State 3 additional pieces of information that you can read from these data displays.
Don’t just list your answers! Remember to show and describe your work.
Discuss how you got your answers from each graph or chart. For example: Was there a legend to follow? A scale you had to read and convert? How did you identify your answer from the chart?
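One bit of arithmetic that recurs in parts b, d and e is converting a total fatality count and total miles traveled into a rate per 100 million vehicle miles. Below is a small sketch in Python with invented numbers (you still have to read the real values off the report’s graphs) showing that conversion.

```python
def fatalities_per_100_million_miles(total_fatalities: float, total_miles: float) -> float:
    """Convert a raw fatality count into a rate per 100 million miles traveled."""
    return total_fatalities / (total_miles / 100_000_000)

# Invented example values; replace them with numbers read from the graphs.
example_fatalities = 40_000          # hypothetical annual traffic fatalities
example_miles = 1_500_000_000_000    # hypothetical 1.5 trillion vehicle miles traveled

rate = fatalities_per_100_million_miles(example_fatalities, example_miles)
print(f"~{rate:.2f} fatalities per 100 million miles traveled")  # ~2.67 with these inputs
```

Once the correct values are read from the charts, the same one-line formula covers the per-100-million-mile rates asked for in parts b and e. |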
(CNN)-Reclusive, nocturnal, numerous — bats are a possible source of the coronavirus. Yet some scientists concur they are not to blame for the transfer of the disease that’s changing daily life — humans are.
Zoologists and disease experts have told CNN that changes to human behavior — the destruction of natural habitats, coupled with the huge number of fast-moving people now on Earth — have enabled diseases that were once locked away in nature to cross into people fast.
Scientists are still unsure where the virus originated, and will only be able to prove its source if they isolate a live virus in a suspected species — a hard task.
But viruses that are extremely similar to the one that causes Covid-19 have been seen in Chinese horseshoe bats. That has led to urgent questions as to how the disease moved from bat communities — often untouched by humans — to spread across Earth. The answers suggest the need for a complete rethink of how we treat the planet.
Bats are the only mammal that can fly, allowing them to spread in large numbers from one community over a wide area, scientists say. This means they can harbor a large number of pathogens, or diseases. Flying also requires a tremendous amount of activity for bats, which has caused their immune systems to become very specialized.
“When they fly they have a peak body temperature that mimics a fever,” said Andrew Cunningham, Professor of Wildlife Epidemiology at the Zoological Society of London. “It happens at least twice a day with bats — when they fly out to feed and then they return to roost. And so the pathogens that have evolved in bats have evolved to withstand these peaks of body temperature.”
Cunningham said this poses a potential problem when these diseases cross into another species. In humans, for example, a fever is a defense mechanism designed to raise the body temperature to kill a virus. A virus that has evolved in a bat will probably not be affected by a higher body temperature, he warned.
But why does the disease transfer in the first place? That answer seems simpler, says Cunningham, and it involves an alien phrase that we will have to get used to, as it is one that has changed our lives — “zoonotic spillover” or transfer.
“The underlying causes of zoonotic spillover from bats or from other wild species have almost always — always — been shown to be human behavior,” said Cunningham. “Human activities are causing this.”
When a bat is stressed — by being hunted, or having its habitat damaged by deforestation — its immune system is challenged and finds it harder to cope with pathogens it otherwise took in its stride. “We believe that the impact of stress on bats would be very much as it would be on people,” said Cunningham.
“It would allow infections to increase and to be excreted — to be shed. You can think of it like if people are stressed and have the cold sore virus, they will get a cold sore. That is the virus being ‘expressed.’ This can happen in bats too.”
In the likely epicenter of the virus — the so-called wet-markets of Wuhan, China — where wild animals are held captive together and sold as delicacies or pets, a terrifying mix of viruses and species can occur.
“If they are being shipped or held in markets, in close proximity to other animals or humans,” said Cunningham, “then there is a chance those viruses are being shed in large numbers.” He said the other animals in a market like that are also more vulnerable to infection as they too are stressed.
“We are increasing transport of animals — for medicine, for pets, for food — at a scale that we have never done before,” said Kate Jones, Chair of Ecology and Biodiversity at University College London.
“We are also destroying their habitats into landscapes that are more human-dominated. Animals are mixing in weird ways that have never happened before. So in a wet market, you are going to have a load of animals in cages on top of each other.”
Cunningham and Jones both pointed to one factor that means rare instances of zoonotic spillover can turn into global problems in weeks. “Spillovers from wild animals will have occurred historically, but the person who would have been infected would probably have died or recovered before coming into contact with a large number of other people in a town or in a city,” said Cunningham.
“These days with motorized transport and planes you can be in a forest in central Africa one day, and in a city like central London the next.”
Jones agreed. “Any spillover you might have had before is magnified by the fact there is so many of us, and we are so well connected.”
There are two simple lessons, they say, that humanity can learn, and must learn fast.
First, bats are not to blame, and might actually help provide the solution. “It’s easy to point the finger at the host species,” said Cunningham.
“But actually it’s the way we interact with them that has led to the pandemic spread of the pathogen.” He added that their immune systems are poorly understood and may provide important clues. “Understanding how bats cope with these pathogens can teach us how to deal with them, if they spillover to people.”
Ultimately diseases like coronavirus could be here to stay, as humanity grows and spreads into places where it’s previously had no business. Cunningham and Jones agree this will make changing human behavior an easier fix than developing a vastly expensive vaccine for each new virus.
The coronavirus is perhaps humanity’s first clear, indisputable sign that environmental damage can kill humans fast too. And it can also happen again, for the same reasons.
“There are tens of thousands [of viruses] waiting to be discovered,” Cunningham said. “What we really need to do is understand where the critical control points are for zoonotic spillover from wildlife, and to stop it happening at those places. That will be the most cost-effective way to protect humans.”
Jones said viruses “are on the rise more because there are so many of us and we are so connected. The chance of more [spillovers into humans] happening is higher because we are degrading these landscapes. Destroying habitats is the cause, so restoring habitats is a solution.”
The ultimate lesson is that damage to the planet can also damage people more quickly and severely than the generational, gradual shifts of climate change.
“It’s not OK to transform a forest into agriculture without understanding the impact that has on climate, carbon storage, disease emergence and flood risk,” said Jones. “You can’t do those things in isolation without thinking about what that does to humans.”
Source: By Nick Paton Walsh and Vasco Cotovio, CNN |
- Gropper S. et al. (2012). Coordination and regulation of the digestive processes. In: Advanced Nutrition and Human Metabolism, p. 56
- Overview of gastrointestinal hormones. Colorado State University – Fort Collins
- Cholecystokinin (CCK). Encyclopaedia Britannica
- Secretin. Encyclopaedia Britannica
- Somatostatin. Encyclopaedia Britannica
- Motilin. Encyclopaedia Britannica
What is digestion?
Digestion (from Latin digerere = separate) is the breakdown of complex nutrients, such as starch, proteins and fats, into their basic components, such as glucose, amino acids and fatty acids, thus making them available for absorption.
Starch is partly broken down into a disaccharide maltose in the mouth with the help of the enzyme salivary amylase, and partly in the small intestine by pancreatic amylase delivered by pancreatic juice.
Disaccharides are broken down into monosaccharides with the help of enzymes produced in the intestinal lining: sucrose is broken down into glucose and fructose by sucrase, lactose into glucose and galactose by lactase, maltose into two glucose molecules by maltase, and trehalose into two glucose molecules by trehalase.
Oligosaccharides, like fructooligosaccharides (FOS), and certain other soluble dietary fibers cannot be digested by human enzymes, but can be broken down (fermented) by normal large intestinal bacteria into short chain fatty acids (SCFA) and monosaccharides.
Proteins are broken down into short protein fragments, called peptides, in the stomach with the help of hydrochloric acid (HCl) and the enzyme pepsin, and in the small intestine by the enzymes trypsin and chymotrypsin delivered with the pancreatic juice. Peptides are broken down by peptidases in the small intestinal lining. The end products of the protein breakdown are amino acids.
Lipid (Fat) Digestion
Fats (triglycerides and phospholipids) are broken down into fatty acids, glycerol and other components in the small intestine with the help of bile delivered by the gallbladder (and produced in the liver) and the enzyme lipase delivered by pancreatic juice.
Cholesterol and phytosterols, which appear in the food in the form of cholesterol esters, are broken down into cholesterol and fatty acids with the help of cholesterol esterase, delivered by pancreatic juice.
Regulation of Digestion
Sympathetic nerves arising from the thoracic and lumbar part of the spinal cord innervate all parts of the gastrointestinal tract. Increased sympathetic activity inhibits intestinal motility (peristalsis) and constricts the sphincters, which results in slower food transit; this can improve digestion but can also result in constipation [1-p.56].
Parasympathetic nerves, mainly the Vagus nerve (10th cranial nerve), stimulate intestinal motility, relax sphincters and stimulate gastrointestinal reflexes and digestive juice secretions. Increased parasympathetic activity can result in faster food transit, which can impair digestion and result in diarrhea [1-p.56].
The gastrointestinal tract also contains its own enteric nervous system, which either stimulates or inhibits gut motility and digestive juice secretions [1-p.56]. The enteric nervous system is connected to and influenced by the central nervous system (the brain-gut axis). The enteric nervous system mediates several gastrointestinal reflexes:
- Gastroileal reflex. When food enters the stomach, the stomach distends, which stimulates the motility of the ileum.
- Ileogastric reflex. When food reaches the ileum (the distal part of the small intestine), the muscular sphincter at the end of the stomach constricts, thus reducing stomach emptying.
- Gastrocolic reflex. Distension of the stomach by food stimulates colon motility. This is why breakfast can stimulate a bowel movement.
Gastrointestinal Hormones (Neuropeptides)
Gastrointestinal hormones affect gut motility, secretion of digestive juices and nutrient absorption.
Chart 1. Gastrointestinal Hormones
| HORMONE | SITE OF PRODUCTION | STIMULUS | FUNCTION |
|---|---|---|---|
| Gastrin | Stomach, small intestine | Parasympathetic activity; presence of proteins (amino acids) in the stomach | Stimulates gastric acid secretion and intestinal motility |
| Cholecystokinin (CCK) | Small intestine | Presence of fatty acids and amino acids in the small intestine | Stimulates secretion of pancreatic enzymes and gallbladder contraction |
| Secretin | Small intestine | Acidic pH in the small intestine after entrance of food from the stomach | Stimulates secretion of alkaline pancreatic juice into the small intestine, which neutralizes the acidic food from the stomach |
| Glucose-dependent insulinotropic peptide, or gastric inhibitory peptide (GIP) | Small intestine | Presence of fat and glucose in the small intestine | Stimulates insulin secretion; inhibits gastric acid secretion and motility |
| Motilin | Small intestine | – | Stimulates stomach emptying and gut motility between meals |
| Somatostatin | Pancreas | – | Inhibits secretion of gastric (pepsin) and pancreatic enzymes (lipase) and motility of the intestine and gallbladder |
Chart 1 source: [1-p.57; 2]
Certain gastrointestinal disorders can impair digestion. Main symptoms of impaired digestion include abdominal bloating, excessive gas and diarrhea and, long-term, weight loss and eventual protein deficiency.
H. pylori infection of the stomach can cause chronic gastritis with decreased secretion of gastric acid (hypochlorhydria or achlorhydria) and the enzyme pepsin, which can result in slower digestion of proteins.
Gastric bypass or surgical removal of the stomach can result in faster transit of food through the gut and thus in less efficient digestion.
Small Intestinal Disorders
Infection of the small intestine due to food poisoning can cause inflammation of the stomach and small intestine (gastroenteritis), which can impair the digestion, mainly of carbohydrates.
Congenital or acquired deficiencies of disaccharidases (enzymes that break down disaccharides):
- Lactase deficiency results in lactose intolerance.
- Sucrase-isomaltase deficiency (rare) results in decreased ability to digest sucrose, maltose, isomaltose and starch.
Small intestinal bacterial overgrowth (SIBO), celiac disease, tropical sprue, Crohn’s disease, lymphoma and other diseases that affect the small intestinal lining can also affect the digestion of carbohydrates.
Liver and Gallbladder Disorders
Liver inflammation (hepatitis), advanced liver cirrhosis or other severe liver disorders can be associated with decreased bile production and thus impaired digestion of fat.
Bile stones or tumors that block the common bile duct can prevent the delivery of bile in the intestine and thus impair fat absorption. Bile stones that lodge at the end of the common bile duct can block the delivery of both bile and pancreatic enzymes and thus impair the digestion of fats, proteins and carbohydrates.
Pancreatic Disorders
Advanced chronic pancreatitis, pancreatic cancer, cystic fibrosis or other severe pancreatic disorders can be associated with a decreased production of pancreatic digestive enzymes (amylase, trypsin, lipase), which can result in impaired digestion of carbohydrates, proteins and fats. |
As teachers, our goal is to create students who are autonomous (self-directed, independent) learners. We want our students to think for themselves, to ask questions, and to solve them using whatever resources, tools, plans or methods (strategies) they see fit to accomplish a task. As teachers we can offer an assortment of different strategies that can motivate and engage students to learn English. Given any task, autonomous learners will selectively use whichever strategies work most effectively and can explain the process behind their approach.
When providing strategies to students, there are two groups of strategies which overlap and work together:
- Language learning and Communication strategies
- Cognitive and Metacognitive strategies
Language learning and communication strategies are relatively straightforward: they apply to listening, speaking, reading and writing, discussed earlier in 3. Integrating Skills, and they involve cognition, which is the actual learning, knowing and manipulation of what is being taught. Metacognition is the monitoring and awareness of the cognitive process; think of it as the breakdown and understanding of the learning and knowing. Cognitive and metacognitive strategies can be thought of as mostly ‘invisible’ mental processes at work while we practise language learning and communication strategies. We can speak, listen, read and write, but to fully learn and know takes a set of cognitive and metacognitive strategies that oversee, monitor and underpin what is being taught in English.
To help develop and provide strategies to a student, teachers can:
- First identify the strategies that best work for a given student by giving tests, surveys, and interviews.
- After identification, help the student understand and use the strategies that best work for them.
- Develop a set of strategies for the student that work together. Remember that we are selecting strategies from both the language learning and communication group and the cognitive and metacognitive group that work together for the student. There are also social and affective factors that play into learning strategies. So create an encouraging, comfortable classroom environment that draws on the students’ backgrounds and on each other.
As teachers, our ultimate goal is to develop students who are self-directed language learners who possess the following characteristics:
- Eager to communicate in English
- Try to communicate in English without being embarrassed or shy about making mistakes
- Able to recognize language and communication patterns and rules
- Able to guess and predict strategies
- Able to pay attention to meaning
- Able to monitor and self-correct their own speech and writing
- Practice the language whenever they get the chance
- Think and dream in English rather than translate
- Transfer strategies to new learning situations
- Learn language outside the classroom
This module covers a broad task to master: as teachers we have to develop independent, strategic students who are successful language learners. The list of strategies is long, and the strategies overlap and build on one another in combination.
- Strategies such as visualization, verbalization, associations, chunking, questioning, scanning, underlining, accessing cues that can be verbal or visual, mnemonics, sounding out words, self-checking, monitoring.
Students must be taught how and when to use these strategies and many more, so that they develop their own learning schema: an individualized set of strategies that they develop in order to learn. The end goal is to be able to use a set of strategies to think and communicate in English.
Author: Dr. Jean-Paul Rodrigue
1. Rationale and Construction
The St. Lawrence Seaway is one of the world’s most comprehensive inland navigation systems, the outcome of centuries of navigation and waterway developments along the St. Lawrence and Great Lakes system, where many segments are shared hydrological assets between Canada and the United States. By the 19th century, a large number of canals were being built to improve inland accessibility, notably the Erie Canal, completed in 1825, which competed directly with the prominence of the St. Lawrence in accessing the Great Lakes. With the construction of the Lachine Canal in 1825 and the Welland Canal in 1829, in addition to specific canals and locks linking it to Lake Ontario, the St. Lawrence remained a competitive corridor to access the North American Midwest. However, by the late 19th century, rail transportation assumed its prominence, rendering several canal systems uncompetitive, many of which were closed down. The only way that inland navigation could endure was with better economies of scale, which could only be achieved along the St. Lawrence but required substantial infrastructure investments.
The establishment of the International Joint Commission in 1909 to help resolve water development issues between Canada and the United States provided the renewed impetus. After conducting a series of studies about the system, its constraints, and its commercial potential, the commission recommended the construction of the St. Lawrence Seaway. However, two groups lobbied against the project since it was perceived to have a negative impact on their businesses:
- Railway companies that saw the seaway as a direct competitor for their inland freight distribution system in the Midwest. By the early 20th century, rail development had reached a phase of maturity in North America and was beginning a phase of rationalization.
- East coast ports, with the exception of those along St. Lawrence (mainly Montreal), that also saw the seaway as a competitor.
Additionally, World War I and the Great Depression of the 1930s created a negative commercial environment. By the late 1940s, pressures were mounting to improve the waterways, particularly since it also involved the opportunity to build new hydroelectric power plants. Still, the project was being rejected by the US Senate until 1954 when the Canadian government declared that it would unilaterally build the Seaway on its side of the border. Initial construction work thus began in 1954 with the full cooperation from the Canadian and American governments. It was an impressive task, moving 192.5 million cubic meters of earth, pouring 5.7 million cubic meters of concrete, building 72 km of dikes, and digging 110 km of channels. The goal was to replace a 14-foot (4.3 meters) deep waterway with 30 locks with a 27-foot (8.3 meters) deep channel with 15 locks. Each lock has 766 feet (233.5 meters) of usable length, 80 feet (24.4 meters) of usable width, and 30 feet (9.1 meters) of depth.
One of the first challenges was the relocation of the neighboring population of the international rapids, which was to be flooded to provide sufficient depth as well as power pools. In total, 260 square kilometers of land were expropriated. The American side did not present many relocation problems since it was sparsely settled, but the more densely populated Canadian side included several riverside towns such as Iroquois, Morrisburg, Ingleside, and Long Sault. Overall, the flooding of this section involved the relocation of 6,500 residents to new towns built at the expense of the project.
Different sections of the Seaway were subject to different construction works depending on the power generation potential. The International Rapids section was particularly subject to power projects such as the Saunders-Moses Dam and a set of spillway dams (Long Sault Dam) and control dams (Iroquois Dam). Provincial and state governments were mainly responsible for financing and undertaking the power projects (Ontario Hydro and the New York State Power Authority), while the federal governments were concerned with navigation projects.
Navigation work mainly included building locks and dredging channels to the 27 feet (8.3 meters) standard. In the International Rapids section, the United States built and dredged a 16 km long channel with two 800 feet (244 meters) long, 80 feet (24.5 meters) wide, and 30 feet (9.2 meters) deep locks, the Dwight D. Eisenhower and the Bertrand H. Snell Locks. The Thousand Islands sections between Lake Ontario and the International Rapids was also dredged to 27 feet by both Canadian and American Governments. A significant share of the work was undertaken by the Canadian Government with the construction of a lock (Iroquois Lock) to bypass the Iroquois Dam, the enlargement of the Beauharnois Canal (25.7 km long) and two locks (Upper and Lower Beauharnois) and a new 32 km canal to bypass the Lachine Rapids near Montreal and which included two locks (St. Lambert and Cote Ste. Catherine). Lake St. Francois and Lake St. Louis were also dredged as well as the Welland Canal which was deepened to 27 feet.
The St. Lawrence Seaway was opened to commercial navigation on April 25th, 1959. The official opening ceremonies were held three months later on June 26th in presence of Queen Elizabeth II (representing Canada) and President Dwight D. Eisenhower. Overall, the project cost 470 million US dollars of which $336.2 million were paid by Canada and $133.8 million by the United States. Income from the operations of the Seaway is thus shared accordingly between the two federal agencies responsible for its management and upkeep; the Saint Lawrence Seaway Management Corporation (Canada) and the Saint Lawrence Seaway Development Corporation (United States).
2. Sections of the Seaway
The St. Lawrence Seaway is only one part of a greater navigation system and should not be confused with the St. Lawrence River or the Great Lakes. It is overall a relatively small section of the system that begins in Montreal, goes through Lake Ontario and ends at Lake Erie at the outlet of the Welland Canal. Navigation beyond that point is no longer considered part of the seaway. The St. Lawrence Seaway can be divided into four major sections corresponding to specific infrastructures.
- Lachine Section. This 50 km section is the doorway of the St. Lawrence Seaway, which begins around 1 km east of the Jacques Cartier Bridge. Its main purpose is to bypass the Lachine Rapids, the first major natural obstacle along the St. Lawrence. Instead of using the north shore, as the Lachine Canal did, the Seaway passes through the south shore, a much longer route. The main rationale was to avoid passing through the congested Montreal harbor and the St. Mary’s current. Also, the south shore presented fewer impacts over the waterfront as well as better integration with existing transport infrastructure. Two locks provide a 45 feet (13 meters) climb, the St. Lambert Lock, and the Cote Ste. Catherine Lock.
- Beauharnois Section. This 74 km section extends from the end of Lake St. Louis to Cornwall in Ontario. It serves two major purposes, which are navigation and power generation. Two 42-foot (12-meter) lift locks were built, the Upper and Lower Beauharnois locks, permitting the Seaway to cross the Cascades, Split Rock, Cedars, and Coteau Rapids between Lake St. Louis and Lake St. Francois. The second purpose is a power dam taking advantage of an 80-foot (24-meter) drop, the Beauharnois Power Plant. This power plant is supported by a set of dams that regulate the flow along this section.
- International Section. Like the Beauharnois section, the International section has been the object of navigation and power works, but this section is jointly administered by Canada and the United States. It is 71 km long and consists of a set of dams (Long Sault and Iroquois), powerhouses, locks (Iroquois, Dwight D. Eisenhower, and Bertrand H. Snell), channels, and dikes, creating vast power pools. This section climbs 93 feet (28 meters). It can be subdivided between the International Rapids section and the Thousand Islands section.
- Great Lakes Channels. This section is composed of a series of channels and locks linking the Great Lakes together. The Welland Canal is the most significant, with 8 locks climbing 326 feet (99 meters) from Lake Ontario to Lake Erie, where the seaway ends. The channels linking Lake Erie and Lake Huron (St. Clair River, Lake St. Clair and Detroit River), Lake Huron and Lake Michigan (Straits of Mackinac) and Lake Huron and Lake Superior (St. Mary’s River and Soo Locks, a 6-meter climb) are also part of this system, but are not considered to be part of the seaway.
Further, the St. Lawrence is part of the North American system of river and coastal navigation, which is complementing the existing railway and highway systems.
The Seaway is generally open for navigation from late March / early April to mid-December, which is about 275 days. It can accommodate ships up to 766 feet (233.5 meters) long and 80 feet (24.4 meters) wide in the range of 30,000 dwt. The draft of the Seaway was upgraded in 2006 to 26'6″ and there are plans to expand the draft to 26'9″. Each additional inch of draft enables a ship to carry an additional 500 tons. A typical ship designed to use the Seaway, a Laker, can carry about 25,000 tons and is 222 meters long and 23 meters wide. This ship class is also referred to as Seawaymax since it is designed to fit specifically in the Seaway’s locks. It takes 8 to 10 days for a ship to go from Lake Superior to the Atlantic Ocean. On the Welland Canal, the slowest section of the seaway, the average transit time is about 11 hours. For the Montreal-Lake Ontario section, the average transit time is 24 hours upstream and 22 hours downstream. The difference is mainly attributed to the downstream river current. Pleasure boats can also use the Seaway to go from the Great Lakes to the Atlantic Ocean, but priority is obviously given to commercial ships at locks.
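As a back-of-the-envelope illustration of the limits and the rule of thumb quoted above, here is a small sketch in Python (the figures are taken from this text; the function names and the example ship are my own) that checks a ship’s dimensions against the usable lock dimensions and estimates the extra cargo gained from a deeper draft.

```python
# Limits quoted in the text above (usable lock dimensions and current draft limit).
MAX_LENGTH_M = 233.5             # 766 feet of usable lock length
MAX_BEAM_M = 24.4                # 80 feet of usable lock width
MAX_DRAFT_INCHES = 26 * 12 + 6   # the 26'6" draft limit, expressed in inches
EXTRA_TONS_PER_INCH = 500        # rule of thumb quoted above

def fits_seaway_locks(length_m: float, beam_m: float, draft_inches: float) -> bool:
    """Rough check of a ship against the Seaway lock limits quoted in the text."""
    return (length_m <= MAX_LENGTH_M and beam_m <= MAX_BEAM_M
            and draft_inches <= MAX_DRAFT_INCHES)

def extra_cargo_tons(old_draft_inches: float, new_draft_inches: float) -> float:
    """Estimated additional cargo from a draft increase, at roughly 500 tons per inch."""
    return (new_draft_inches - old_draft_inches) * EXTRA_TONS_PER_INCH

# A typical Laker from the text: 222 m long, 23 m wide, drawing the full 26'6".
print(fits_seaway_locks(222, 23, 26 * 12 + 6))       # True
# Going from the current 26'6" to the proposed 26'9" would add roughly:
print(extra_cargo_tons(26 * 12 + 6, 26 * 12 + 9))    # 1500.0 tons
```

By this rough rule, moving from the current 26'6″ limit to the proposed 26'9″ would let a ship load about 1,500 additional tons.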
At the end of the first navigation season on December 3rd, 1959, 6,595 ships had passed through the Seaway, handling a total of 18.7 million metric tons. The tonnage rose to 20 million in 1961, 30 million in 1964, 40 million in 1966, and 50 million in 1973. In 1977 a record was reached, with 57.4 million metric tons handled by the Seaway, and this record remains unsurpassed. On average, 50 million tons of cargo are handled each year (over a period of 8 months) and the majority of the flows are downbound (from the Great Lakes to the Atlantic). Over one billion tons of cargo passed over the Seaway during its first 25 years of operation (1959-84) and by 1997 this number had reached more than two billion tons handled by more than 250,000 vessel trips. Still, the Seaway remains used at about 50% of its design capacity.
The St. Lawrence Seaway generates around 40,000 jobs and 2 billion dollars of annual personal income, but its most significant contribution is related to the cargo it handles, supporting a vast array of industries. The system carries bulk cargo such as grain, iron ore, coal, and petroleum products and general cargo such as containers, steel, and machinery. The first category accounts for 90% of the annual tonnage while the second account for the remaining 10%.
- Grain. It is the most important cargo in terms of volume and accounts for 40% of all the cargo handled. Most of the grain comes from the American and Canadian prairies (mostly Manitoba and Saskatchewan) and is exported to international markets through the Seaway. Wheat accounts for 50% of the total grain, while corn and soybeans take 30%. Barley, oats, rye and other grains account for the 20% that remains. Several ports along the Seaway have grain handling infrastructures.
- Iron Ore. With a third of all the cargo handled, iron ore is the second most important commodity. It is generally shipped from mines in Labrador, Quebec, Ontario, and Minnesota (Mesabi Range) to ports along the St. Lawrence or the Great Lakes and then to steel mills. Pittsburgh was (and is still) one of the most significant steel production centers of the Great Lakes.
- Coal. Coal is either used for steel making or to heat thermal plants for electricity production. The Appalachians are a major coal extraction region of the United States and coal is then shipped from the mines to the ports of Lake Erie and then to other plants of the region or to the international market.
- Steel. With 10% of the total annual tonnage, steel is mainly used by heavy industries such as construction and the automotive industry.
The St. Lawrence Seaway and the Great Lakes are thus mainly used to ship heavy raw materials and limited general cargo traffic occurs past Montreal (a major container port). One of the main reasons behind such a characteristic is that general cargo is now shipped through containers and that the railway system is faster to ship containers to eastern and western seaboard ports than transporting containers through the Seaway. For instance, it takes a little more than 24 hours to transit a container by rail from Chicago to Montreal, while this operation would take around one week through the Great Lakes and the Seaway. Additionally, the fact that the Seaway is closed for about 3 months is incompatible with supply chain management that requires constant flows. Since the peak season for containerized cargo is mostly between April and November, using the Seaway may represent a niche market for import retail cargo where empty containers could be filled with commodities on the return trip.
There were oceanic containership services using the Seaway in the 1960s and 1970s, with ships of about 800 TEUs, mostly serving the UK. These were abandoned as containerships grew bigger and as specialized container terminals started to emerge. In 2009, a container barge service with a weekly rotation between Hamilton and Montreal was inaugurated, but it was canceled the year after because of a lack of demand. In 2014, a new oceangoing service between the Port of Cleveland and Antwerp was inaugurated; it combines breakbulk and container cargoes on specifically designed self-unloading ships. It remains to be seen to what extent the Seaway can be used for container transportation and what role it will play in North American freight transportation. |
Who is William Harvey? (June 26, 2021)
William Harvey, who lived from April 1, 1578 to June 3, 1657, was an English medical doctor, scientist, and thinker who was among the pioneers of science, best known for his studies of the blood circulation and his discoveries concerning the functions of the heart. He is also considered one of the founders of modern physiology.
Who is William Harvey?
William Harvey was born in Folkestone, England. He completed his schooling at Caius College in Cambridge and at the medical school in Padova, and then began his career as a medical doctor.
After completing his education, Harvey began working as a medical doctor at St. Bartholomew’s Hospital and also taught medicine to those who trained and worked there. Harvey, who also served as personal physician to the English kings James I and Charles I, died on June 3, 1657, after a stroke.
William Harvey is known as the first person to accurately describe the circulation of blood starting from the heart. Although the blood circulation was described much earlier by the Spanish doctor Michael Servetus, the discovery of the blood circulation is attributed to Harvey, since he was the first to document it with lasting explanations and published texts.
William Harvey and the Circulation
After many years of research, Harvey described the circulation of the blood in his 72-page work “Exercitatio Anatomica De Motu Cordis Et Sanguinis In Animalibus”, published in Frankfurt in 1628. He saw the heart as a muscular pump and described its movement as follows:
First, the auricle contracts, and during this contraction blood is driven into the ventricle. When the ventricle is filled, the heart expands; after an upward movement its fibers are stretched, and a beat occurs with the contraction of the ventricles. The blood received from the auricles is immediately driven into the arteries: the right ventricle sends its blood to the lungs through the vessel called the vena arteriosa, which is an artery in structure and function, and from there it passes via the left auricle into the left ventricle; the left ventricle in turn sends its blood into the aorta. From there the blood spreads to the whole body through the arteries and returns to the heart through the veins. All these events are repeated over and over in order. Thus the pulsation of the arteries arises from the impulse of the blood sent from the heart.
Harvey conducted experiments with 80 different species of animals to obtain this conclusion.
It was his teacher in Padua, Girolamo Fabricio of Acquapendente (1537-1619), the first to describe the venous valves, who prompted him to investigate the direction of the circulation. Fabricio, however, regarded the valves merely as obstacles that slowed the flow of venous blood from the centre toward the periphery; this explanation seemed unreasonable to Harvey, and he embarked on a series of studies of his own.
Harvey observed that when he ligated the proximal part of a vein, the distal part swelled. From this he concluded that the venous valves are structures that direct the circulation towards the heart and prevent flow in the reverse direction. Harvey thus became the first physician to observe and clearly explain the movements of the blood circulation and the heart.
The image below shows stamps issued in Harvey's honour by the British and Hungarian governments, together with the personal stamp of Harvey's teacher, Girolamo Fabricio of Acquapendente:
Source: Prof. Dr. Teoman ONAT |
Why do we change our clocks?
The idea of moving clocks forwards and backwards came to Britain from America. Benjamin Franklin, an American politician and inventor, first suggested it in 1784. He thought that if people got up earlier, when it was lighter, then it would save on candles.
Back in Britain, a leaflet called The Waste of Daylight was published, encouraging people to get out of bed earlier. In 1908, the government discussed making it law to change the clocks, but the idea wasn't popular. The change finally came into being in 1916, during World War I.
Now, clocks around Britain always go back by one hour on the last Sunday in October and forward by one hour on the last Sunday in March. |
A vet or qualified equine dentist should be called in regularly to thoroughly examine your horse’s teeth and carry out any necessary work. Horses aged 2-5 years should have their teeth checked before commencing work, or at six-monthly intervals. After the age of five years (when the horse has a full set of permanent teeth), all horses should have at least one annual dental check, and more often if the horse is being fed significant amounts of concentrates, because chewing patterns, and therefore tooth wear, differ when eating concentrates. Remember that horses need a very highly fibrous diet for many reasons; correct tooth wear is only one of them.
The horse’s diet, mainly tough fibrous and often abrasive material, requires a lot of chewing and grinding. In normal situations the teeth of the horse are well equipped to cope with this diet. The top surface area of the teeth contains folds that help the horse to chew fibrous material. The teeth continuously erupt throughout the life of the horse in order to cope with what they evolved to eat. By five years old the horse has a full set of very large teeth. The roots of the molars (back) teeth are so large that they are often seen as bumps in the jaw line of the horse (usually the bottom jaw line and sometimes the upper jaw line with certain breeds). These bumps disappear as the teeth begin to wear inside the mouth and the teeth begin erupting on a continual basis.
Well-cared for domestic horses generally live for a lot longer than free living (wild and feral) horses. This means that their teeth have to last them for a lot longer too.
The teeth can develop sharp edges and uneven wear. If the horse’s ability to grind down food sufficiently is compromised for any reason, the enzymes and microbes of the gastrointestinal tract have a hard time continuing the digestive process and one of the results is a drop in condition. Often it is poor condition of the teeth that leads to starvation and premature death in free living horses.
Teeth problems can also cause behavioural problems as the horse attempts to alleviate any pain. Horses need regular dental care if they are to get the maximum benefit from their feed and perform well.
Potential problems include:
- Sharp cheek teeth (molars). This occurs to some extent in all horses, but it is accelerated when horses have a high-grain diet, because the horse chews grain differently to grasses and hay. This causes the teeth to wear sharper edges.
- Imperfect meeting of the teeth, such as parrot mouth (overbite, where the top row of incisors is further forward than the bottom row) and sow mouth (underbite, where the bottom row of incisors is further forward than the top row). These cause problems with grazing, as the horse cannot ‘clip’ the grass properly, and the horse usually develops sharp ‘hooks’ on the last molars at the back of the mouth because they are also out of alignment.
- Wolf teeth, which are a much-reduced vestige of a tooth that was well developed in the ancestor of the horse. They sit in front of the first molar, and because they usually have shallow roots they can be loose. A loose wolf tooth may cause a horse to toss its head or be reluctant to respond to the bit.
- Teething problems. As with human babies, the eruption of teeth in young horses may cause transitory trouble. The horse may also have ‘caps’, which are temporary (milk) teeth that have not fallen out but sit as a cap on top of a newer permanent tooth. These caps can lead to decay as food gets trapped under them.
- Decayed teeth. Decay can lead to the destruction of the tooth, which may in turn lead to infection of the surrounding bone.
Some of the signs of dental problems include:
- Behavioural problems
- Weight loss
- Loss of coat shine
- Irregular chewing patterns
- Quidding (dropping partially chewed food out of the mouth)
- Unresponsiveness to the bit or head tossing
- Excessive salivation
- Bad breath
- Swelling of the face or jaw
- Lack of desire to eat hard food
- Reluctance to drink cold water.
However, some horses show hardly any outward signs even though they are experiencing extreme discomfort, so don’t wait for signs before doing anything. Schedule regular visits from your vet or qualified equine dentist.
For more information please see: https://www.equiculture.net/responsiblehorsecare |
Teachers are experiencing success working in small data teams to promote student growth in the Ritenour School District. By collecting and analyzing student data, teams of teachers work together toward specific curriculum goals. In Ritenour, these teams are known as CLTs (Collaborative Learning Teams) or PLCs (Professional Learning Communities). Working in teams allows teachers to brainstorm, think critically, and collaborate to find the best way to help students meet their goals. For each goal/cycle, Ritenour data teams follow these guidelines:
Data Teams Flow Chart
Examine the expectations. Look at the state standards or frameworks, district power or priority standards, "unwrapped" standards.
Develop a curriculum map. Create a year-long pacing chart/calendar.
Develop a common post-assessment. What must students master as a result of your teaching?
Administer the short-cycle, common formative assessment (pre-instruction). You need to know where students are in their learning before instruction occurs. What data tell you that the lessons you are preparing are the lessons students need?
Follow the Data Teams Process for Results.
Teach students using common instructional strategies.
Administer the common formative assessment (post-instruction).
Score the assessment and submit the data to the Data Team leader.
Meet as a team to determine if the goal was met.
Determine next step for students who did not reach proficiency on the assessment.
Return to step 1.
Begin the process again with the next critical expectation based on the pacing guide.
Data Teams Process for Results
Collect and chart data. Data teams gather and display data from formative assessment results. Through the disaggregation in this step, teams will be able to plan for the acceleration of learning for all students.
Analyze data and prioritize needs. Data Teams identify the strengths and needs of student performance and then form inferences based on the data. Data Teams also prioritize by focusing on the most urgent needs of the learners.
Set, review and revise incremental SMART goals. Teams collaboratively set incremental goals. These short-term goals are reviewed and revised throughout the data cycle.
Select common instructional strategies. Teams collaboratively identify research-based instructional strategies. The determination is based on the analysis in step 2.
Determine results indicators. Data Teams create descriptors of successful strategy implementation as well as improvements to be seen in ongoing student work that would indicate the effectiveness of the selected strategies.
Monitor and evaluate results throughout the entire process.
Case Study: Collaborative Learning Teams at Kratz Elementary
First-grade teachers at Kratz Elementary used their CLT to develop math skills in students, first dividing them into four groups (Proficient, Close to Proficient, Far to Go and Intervention) based on each student's performance on a common pre-test. Over the course of two months, teachers tracked student performance and spent time working with each proficiency level group. Their goal was for each classroom to achieve 59% proficiency in Addition Fluency within 10. By the end of the cycle, all classes met their goal.
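The bookkeeping behind a cycle like this can be illustrated with a short, hypothetical sketch in Python. The four group names and the 59% goal come from the case study above; the score cut-offs and the student scores below are invented purely for illustration and are not Ritenour's actual criteria.

```python
# Hedged sketch: group students into the four bands named above and check a
# class against the 59% proficiency goal. Cut-offs and scores are hypothetical.

def band(score, proficient=80, close=65, far=40):
    # Hypothetical cut-offs, for illustration only.
    if score >= proficient:
        return "Proficient"
    if score >= close:
        return "Close to Proficient"
    if score >= far:
        return "Far to Go"
    return "Intervention"

def summarize(class_scores, goal=0.59):
    counts = {"Proficient": 0, "Close to Proficient": 0, "Far to Go": 0, "Intervention": 0}
    for score in class_scores:
        counts[band(score)] += 1
    proficiency = counts["Proficient"] / len(class_scores)
    status = "met" if proficiency >= goal else "not yet met"
    return counts, proficiency, status

# Example: one classroom's post-assessment scores (made up).
counts, rate, status = summarize([95, 88, 82, 74, 61, 90, 85, 38, 79, 92])
print(counts, f"{rate:.0%} proficient, goal {status}")
```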
Use this video playlist to learn what Collaborative Learning Teams look like for these first-grade teachers at Kratz Elementary, and consider how these same ideas could be utilized across different grade levels and subjects in Ritenour.
Learn more about Data Teams and Data Team Certification on the DESE website.
When it comes to programming languages, there is a plethora to choose from, built for very different applications. The primary function of a computer program is to solve a problem with a set of given instructions (or code). However, if all languages can solve problems, why is there a need for so many? Can’t one language do it all? To answer these questions, let’s look at what it takes to execute a computer program.
Hardware that Computes the Program
Almost all computing of any nature (with a few exceptions) is done in the Central Processing Unit (CPU). The modern CPU contains many microelectronic components and can be further divided into smaller computing units built from transistors, commonly known as logic gates. The primary function of these logic gates is to implement three basic operators: AND, OR, and NOT.
These logical operators work on the binary number system, which uses only 0s and 1s: 0 signifies that no electric current is passing through the transistor, and 1 signifies that current is passing through.
The remarkable thing is that if you combine just these logical operators with binary, you can express all of the logic we know, including arithmetic operations on integers and almost everything else you can imagine. A modern CPU also comes with microcode preinstalled: a set of basic internal instructions used to carry out the more complex instructions exposed to software as machine code.
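To make that claim concrete, here is a minimal sketch in Python (an illustration of the idea, not how a real CPU is wired): XOR is built out of nothing but AND, OR, and NOT, and those gates are then combined into an adder for binary numbers.

```python
# AND, OR and NOT alone are enough to build binary addition.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR built purely from AND, OR and NOT
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists (least significant bit first)."""
    carry = 0
    result = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 6 (binary 110) + 3 (binary 011), written least significant bit first:
print(add_binary([0, 1, 1], [1, 1, 0]))  # -> [1, 0, 0, 1], i.e. 1001 in binary = 9
```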
Levels of Software for the Execution of Code
Now that we know the hardware behind the execution of computer code (there’s much more, but for simplicity I will stick to the parts where the main computation occurs), let’s move on to the levels of software needed to execute a program. The first program needed is the assembler, which converts assembly language into machine code. Assembly language is a lower-level language that acts as a bridge between a high-level programming language (e.g., Python, C++, Java) and the machine code. The layers therefore stack roughly like this: high-level language at the top, assembly language beneath it, and machine code at the bottom.
The next important program is the compiler. The compiler takes a high-level, abstract language and converts it into assembly language, and sometimes directly into machine code. The final program (or package) that brings everything together is the operating system. The operating system is basically a program that starts when your computer starts and runs until the computer is shut down. It is responsible for managing complicated tasks such as file management and input/output, so that every time you want to run a program you don’t have to reimplement those services again and again. It also makes interaction with the machine more intuitive, rather than forcing the user to communicate with the computer in 0s and 1s.
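As a small, hedged illustration of this layering, Python's built-in dis module can show the lower-level instructions that a short high-level function is translated into. Python compiles to bytecode for its virtual machine rather than to native machine code, so this is an analogy for the compiler/assembler step, not the real thing.

```python
import dis

def area(width, height):
    # One line of high-level code...
    return width * height

# ...displayed as the sequence of lower-level virtual-machine instructions
# it is translated into (bytecode, not native machine code).
dis.dis(area)
```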
One Size Does Not Fit All
Now we understand that all the different programming languages go through the same basic process of execution. So one language should fit all purposes, right? Wrong. In an ideal world we would need only one language for everything and anything. All programming languages serve the same broad purpose, whether C++, Java, Python, or another, but each is necessary for different reasons.
What creates the need for different programming languages is that each has different strengths. When a programming language is created, it is designed to be as capable as possible, but no single language can do everything under the sun. With that in mind, let’s take a look at some programming languages and what they are used for:
- Java: This is a general-purpose language primarily used for Android development. It is also used in website development and embedded software, and it is considered one of the more popular programming languages.
- C++: This is the successor to the famous general-purpose programming language C. It is widely used for the development of computer games with high-end graphics, because it offers fine-grained memory management and fast performance at run time. It is also used in developing operating systems and desktop applications.
- Python: Again, a general-purpose language. It is not as fast as some other programming languages, but it is excellent for website development and data handling, and programs can typically be written and got working much more quickly in Python than in many other languages.
- HTML/CSS: These are not general-purpose programming languages per se; they are geared specifically towards website development. Languages like Python, which are used to create the backend of websites (the part that interacts with the database), work by generating HTML that is sent to the user’s browser. HTML provides the structure of a website, while CSS is used for styling and layout.
- PHP: This is used to add functionality to an HTML program. It allows you to retrieve data and generate HTML in useful ways before it’s sent to a user’s browser to be displayed. Companies like Facebook use this language extensively.
- R: This has become quite a popular programming language in recent years. It is used primarily for statistical analysis and closely related work.
As you can see, although each of the programming languages is computing some form of data, each has its own use case and functionality. Also, the choice of a language depends on what each person needs and how comfortable they are with the respective languages. |
A karyotype (from Greek κάρυον karyon, "kernel", "seed", or "nucleus", and τύπος typos, "general form") is the number and appearance of chromosomes in the nucleus of a eukaryotic cell. The term is also used for the complete set of chromosomes in a species, or an individual organism.
Karyotypes describe the chromosome count of an organism, and what these chromosomes look like under a light microscope. Attention is paid to their length, the position of the centromeres, banding pattern, any differences between the sex chromosomes, and any other physical characteristics. The preparation and study of karyotypes is part of cytogenetics.
The study of whole sets of chromosomes is sometimes known as karyology. The chromosomes are depicted (by rearranging a photomicrograph) in a standard format known as a karyogram or idiogram: in pairs, ordered by size and position of centromere for chromosomes of the same size.
The basic number of chromosomes in the somatic cells of an individual or a species is called the somatic number and is designated 2n. Thus, in humans 2n = 46. In the germ-line (the sex cells) the chromosome number is n (humans: n = 23) (White 1973, p. 28).
So, in normal diploid organisms, autosomal chromosomes are present in two copies. There may, or may not, be sex chromosomes. Polyploid cells have multiple copies of chromosomes and haploid cells have single copies.
The study of karyotypes is important for cell biology and genetics, and the results may be used in evolutionary biology (karyosystematics) and medicine. Karyotypes can be used for many purposes; such as to study chromosomal aberrations, cellular function, taxonomic relationships, and to gather information about past evolutionary events.
History of karyotype studies
Chromosomes were first observed in plant cells by Carl Wilhelm von Nägeli in 1842. Their behavior in animal (salamander) cells was described by Walther Flemming, the discoverer of mitosis, in 1882. The name was coined by another German anatomist, Heinrich von Waldeyer in 1888.
The next stage took place after the development of genetics in the early 20th century, when it was appreciated that chromosomes (that can be observed by karyotype) were the carrier of genes. Grygorii Levitsky seems to have been the first person to define the karyotype as the phenotypic appearance of the somatic chromosomes, in contrast to their genic contents. The subsequent history of the concept can be followed in the works of C. D. Darlington and Michael JD White.
Investigation into the human karyotype took many years to settle the most basic question: how many chromosomes does a normal diploid human cell contain? In 1912, Hans von Winiwarter reported 47 chromosomes in spermatogonia and 48 in oogonia, concluding that sex was determined by an XX/XO mechanism. Painter in 1922 was not certain whether the diploid number of humans was 46 or 48, at first favouring 46, but he revised his opinion to 48, and he correctly insisted on humans having an XX/XY system. Considering the techniques of the time, these results were remarkable.
In textbooks, the number of human chromosomes remained at 48 for over thirty years. New techniques were needed to correct this error. Joe Hin Tjio working in Albert Levan's lab was responsible for finding the approach:
- Using cells in tissue culture
- Pretreating cells in a hypotonic solution, which swells them and spreads the chromosomes
- Arresting mitosis in metaphase by a solution of colchicine
- Squashing the preparation on the slide forcing the chromosomes into a single plane
- Cutting up a photomicrograph and arranging the result into an indisputable karyogram.
The work took place in 1955, and was published in 1956. The karyotype of humans includes only 46 chromosomes. Rather interestingly, the great apes have 48 chromosomes. Human chromosome 2 is now known to be a result of an end-to-end fusion of two ancestral ape chromosomes.
Observations on karyotypes
The study of karyotypes is made possible by staining. Usually a suitable dye, such as Giemsa, is applied after cells have been arrested during cell division by a solution of colchicine, typically in metaphase or prometaphase, when the chromosomes are most condensed. In order for the Giemsa stain to adhere correctly, all chromosomal proteins must be digested and removed. For humans, white blood cells are used most frequently because they are easily induced to divide and grow in tissue culture. Sometimes observations may be made on non-dividing (interphase) cells. The sex of an unborn fetus can be determined by observation of interphase cells (see amniocentesis and Barr body).
Six different characteristics of karyotypes are usually observed and compared:
- Differences in absolute sizes of chromosomes. Chromosomes can vary in absolute size by as much as twenty-fold between genera of the same family. For example, the legumes Lotus tenuis and Vicia faba each have six pairs of chromosomes, yet V. faba chromosomes are many times larger. These differences probably reflect different amounts of DNA duplication.
- Differences in the position of centromeres. These differences probably came about through translocations.
- Differences in relative size of chromosomes. These differences probably arose from segmental interchange of unequal lengths.
- Differences in basic number of chromosomes. These differences could have resulted from successive unequal translocations which removed all the essential genetic material from a chromosome, permitting its loss without penalty to the organism (the dislocation hypothesis) or through fusion. Humans have one pair fewer chromosomes than the great apes. Human chromosome 2 appears to have resulted from the fusion of two ancestral chromosomes, and many of the genes of those two original chromosomes have been translocated to other chromosomes.
- Differences in number and position of satellites. Satellites are small bodies attached to a chromosome by a thin thread.
- Differences in degree and distribution of heterochromatic regions. Heterochromatin stains darker than euchromatin because it is packed more tightly. Heterochromatin consists mainly of genetically inactive and repetitive DNA sequences and contains a larger proportion of adenine-thymine pairs. Euchromatin is usually under active transcription and stains much lighter, as it has less affinity for the Giemsa stain; euchromatic regions contain larger amounts of guanine-cytosine pairs. The staining technique using Giemsa is called G-banding and therefore produces the typical "G-bands".
A full account of a karyotype may therefore include the number, type, shape and banding of the chromosomes, as well as other cytogenetic information.
Variation is often found:
- between the sexes,
- between the germ-line and soma (between gametes and the rest of the body),
- between members of a population (chromosome polymorphism),
- in geographic specialization, and
- in mosaics or otherwise abnormal individuals.
The normal human karyotypes contain 22 pairs of autosomal chromosomes and one pair of sex chromosomes (allosomes). Normal karyotypes for females contain two X chromosomes and are denoted 46,XX; males have both an X and a Y chromosome denoted 46,XY. Any variation from the standard karyotype may lead to developmental abnormalities.
Diversity and evolution of karyotypes
Although the replication and transcription of DNA is highly standardized in eukaryotes, the same cannot be said for their karyotypes, which are highly variable. There is variation between species in chromosome number, and in detailed organization, despite their construction from the same macromolecules. This variation provides the basis for a range of studies in evolutionary cytology.
In some cases there is even significant variation within species. In a review, Godfrey and Masters conclude:
- "In our view, it is unlikely that one process or the other can independently account for the wide range of karyotype structures that are observed... But, used in conjunction with other phylogenetic data, karyotypic fissioning may help to explain dramatic differences in diploid numbers between closely related species, which were previously inexplicable.
Although much is known about karyotypes at the descriptive level, and it is clear that changes in karyotype organization have had effects on the evolutionary course of many species, it is quite unclear what the general significance might be.
- "We have a very poor understanding of the causes of karyotype evolution, despite many careful investigations... the general significance of karyotype evolution is obscure." Maynard Smith.
Changes during development
Instead of the usual gene repression, some organisms go in for large-scale elimination of heterochromatin, or other kinds of visible adjustment to the karyotype.
- Chromosome elimination. In some species, as in many sciarid flies, entire chromosomes are eliminated during development.
- Chromatin diminution (founding father: Theodor Boveri). In this process, found in some copepods and roundworms such as Ascaris suum, portions of the chromosomes are cast away in particular cells. This process is a carefully organised genome rearrangement where new telomeres are constructed and certain heterochromatin regions are lost. In A. suum, all the somatic cell precursors undergo chromatin diminution.
- X-inactivation. The inactivation of one X chromosome takes place during the early development of mammals (see Barr body and dosage compensation). In placental mammals the inactivation is random as between the two Xs; thus the mammalian female is a mosaic with respect to her X chromosomes. In marsupials it is always the paternal X which is inactivated. In human females some 15% of somatic cells escape inactivation, and the number of genes affected on the inactivated X chromosome varies between cells: in fibroblast cells up to about 25% of genes on the Barr body escape inactivation.
Number of chromosomes in a set
A spectacular example of variability between closely related species is the muntjac, which was investigated by Kurt Benirschke and his colleague Doris Wurster. The diploid number of the Chinese muntjac, Muntiacus reevesi, was found to be 46, all telocentric. When they looked at the karyotype of the closely related Indian muntjac, Muntiacus muntjak, they were astonished to find it had female = 6, male = 7 chromosomes.
- "They simply could not believe what they saw... They kept quiet for two or three years because they thought something was wrong with their tissue culture... But when they obtained a couple more specimens they confirmed [their findings]" Hsu p73-4
The number of chromosomes in the karyotype between (relatively) unrelated species is hugely variable. The low record is held by the nematode Parascaris univalens, where the haploid n = 1; and an ant: Myrmecia pilosula. The high record would be somewhere amongst the ferns, with the Adder's Tongue Fern Ophioglossum ahead with an average of 1262 chromosomes. Top score for animals might be the shortnose sturgeon Acipenser brevirostrum at 372 chromosomes. The existence of supernumerary or B chromosomes means that chromosome number can vary even within one interbreeding population; and aneuploids are another example, though in this case they would not be regarded as normal members of the population.
The fundamental number, FN, of a karyotype is the number of visible major chromosomal arms per set of chromosomes. Thus FN ≤ 2 x 2n, with the difference depending on how many single-armed (acrocentric or telocentric) chromosomes are present. Humans have FN = 82, due to the presence of five acrocentric chromosome pairs: 13, 14, 15, 21, and 22. As a worked check: 2n = 46 chromosomes give at most 2 x 46 = 92 arms, and the ten acrocentric chromosomes (five pairs) contribute only one visible major arm each, so 92 - 10 = 82. The fundamental autosomal number or autosomal fundamental number, FNa or AN, of a karyotype is the number of visible major chromosomal arms per set of autosomes (non-sex-linked chromosomes).
Ploidy is the number of complete sets of chromosomes in a cell.
- Polyploidy, where there are more than two sets of homologous chromosomes in the cells, occurs mainly in plants. It has been of major significance in plant evolution according to Stebbins. The proportion of flowering plants which are polyploid was estimated by Stebbins to be 30–35%, but in grasses the average is much higher, about 70%. Polyploidy in lower plants (ferns, horsetails and psilotales) is also common, and some species of ferns have reached levels of polyploidy far in excess of the highest levels known in flowering plants.
Polyploidy in animals is much less common, but it has been significant in some groups.
Polyploid series in related species which consist entirely of multiples of a single basic number are known as euploid.
- Haplo-diploidy, where one sex is diploid, and the other haploid. It is a common arrangement in the Hymenoptera, and in some other groups.
- Endopolyploidy occurs when in adult differentiated tissues the cells have ceased to divide by mitosis, but the nuclei contain more than the original somatic number of chromosomes. In the endocycle (endomitosis or endoreduplication) chromosomes in a 'resting' nucleus undergo reduplication, the daughter chromosomes separating from each other inside an intact nuclear membrane.
In many instances, endopolyploid nuclei contain tens of thousands of chromosomes (which cannot be exactly counted). The cells do not always contain exact multiples (powers of two), which is why the simple definition 'an increase in the number of chromosome sets caused by replication without cell division' is not quite accurate.
This process (especially studied in insects and some higher plants such as maize) may be a developmental strategy for increasing the productivity of tissues which are highly active in biosynthesis.
The phenomenon occurs sporadically throughout the eukaryote kingdom from protozoa to man; it is diverse and complex, and serves differentiation and morphogenesis in many ways.
- See palaeopolyploidy for the investigation of ancient karyotype duplications.
Aneuploidy is the condition in which the chromosome number in the cells is not the typical number for the species. This would give rise to a chromosome abnormality such as an extra chromosome or one or more chromosomes lost. Abnormalities in chromosome number usually cause a defect in development. Down syndrome and Turner syndrome are examples of this.
Aneuploidy may also occur within a group of closely related species. Classic examples in plants are the genus Crepis, where the gametic (= haploid) numbers form the series x = 3, 4, 5, 6, and 7; and Crocus, where every number from x = 3 to x = 15 is represented by at least one species. Evidence of various kinds shows that trends of evolution have gone in different directions in different groups. Closer to home, the great apes have 24x2 chromosomes whereas humans have 23x2. Human chromosome 2 was formed by a merger of ancestral chromosomes, reducing the number.
Some species are polymorphic for different chromosome structural forms. The structural variation may be associated with different numbers of chromosomes in different individuals, as occurs in the ladybird beetle Chilocorus stigma, some mantids of the genus Ameles, and the European shrew Sorex araneus. There is some evidence, from the case of the mollusc Thais lapillus (the dog whelk) on the Brittany coast, that the two chromosome morphs are adapted to different habitats.
The detailed study of chromosome banding in insects with polytene chromosomes can reveal relationships between closely related species: the classic example is the study of chromosome banding in Hawaiian drosophilids by Hampton L. Carson.
In about 6,500 sq mi (17,000 km2), the Hawaiian Islands have the most diverse collection of drosophilid flies in the world, living from rainforests to subalpine meadows. These roughly 800 Hawaiian drosophilid species are usually assigned to two genera, Drosophila and Scaptomyza, in the family Drosophilidae.
The polytene banding of the 'picture wing' group, the best-studied group of Hawaiian drosophilids, enabled Carson to work out the evolutionary tree long before genome analysis was practicable. In a sense, gene arrangements are visible in the banding patterns of each chromosome. Chromosome rearrangements, especially inversions, make it possible to see which species are closely related.
The results are clear. The inversions, when plotted in tree form (and independent of all other information), show a clear "flow" of species from older to newer islands. There are also cases of colonization back to older islands, and skipping of islands, but these are much less frequent. Using K-Ar dating, the present islands date from 0.4 million years ago (mya) (Mauna Kea) to 10mya (Necker). The oldest member of the Hawaiian archipelago still above the sea is Kure Atoll, which can be dated to 30 mya. The archipelago itself (produced by the Pacific plate moving over a hot spot) has existed for far longer, at least into the Cretaceous. Previous islands now beneath the sea (guyots) form the Emperor Seamount Chain.
All of the native Drosophila and Scaptomyza species in Hawaiʻi have apparently descended from a single ancestral species that colonized the islands, probably 20 million years ago. The subsequent adaptive radiation was spurred by a lack of competition and a wide variety of niches. Although it would be possible for a single gravid female to colonise an island, it is more likely to have been a group from the same species.
Chromosomes display a banded pattern when treated with some stains. Bands are alternating light and dark stripes that appear along the lengths of chromosomes. Unique banding patterns are used to identify chromosomes and to diagnose chromosomal aberrations, including chromosome breakage, loss, duplication, translocation or inverted segments. A range of different chromosome treatments produce a range of banding patterns: G-bands, R-bands, C-bands, Q-bands, T-bands and NOR-bands.
Depiction of karyotypes
Types of banding
- G-banding is obtained with Giemsa stain following digestion of chromosomes with trypsin. It yields a series of lightly and darkly stained bands — the dark regions tend to be heterochromatic, late-replicating and AT rich. The light regions tend to be euchromatic, early-replicating and GC rich. This method will normally produce 300–400 bands in a normal, human genome.
- R-banding is the reverse of G-banding (the R stands for "reverse"). The dark regions are euchromatic (guanine-cytosine rich regions) and the bright regions are heterochromatic (thymine-adenine rich regions).
- C-banding: Giemsa binds to constitutive heterochromatin, so it stains centromeres. The name is derived from centromeric or constitutive heterochromatin. The preparations undergo alkaline denaturation prior to staining, leading to an almost complete depurination of the DNA. After washing the preparation, the remaining DNA is renatured and stained with Giemsa solution consisting of methylene azure, methylene violet, methylene blue, and eosin. Heterochromatin binds a lot of the dye, while the rest of the chromosomes absorb only a little of it. C-banding proved to be especially well suited for the characterization of plant chromosomes.
- Q-banding is a fluorescent pattern obtained using quinacrine for staining. The pattern of bands is very similar to that seen in G-banding. The bands can be recognized by a yellow fluorescence of differing intensity. Most of the stained DNA is heterochromatin. Quinacrine (atebrin) binds both regions rich in AT and regions rich in GC, but only the AT-quinacrine complex fluoresces. Since regions rich in AT are more common in heterochromatin than in euchromatin, these regions are labelled preferentially, and the different intensities of the single bands mirror their different AT content. Other fluorochromes, such as DAPI or Hoechst 33258, also lead to characteristic, reproducible patterns, each producing its own specific pattern. In other words, the properties of the bands and the specificity of the fluorochromes are not based exclusively on their affinity for AT-rich regions; rather, the distribution of AT and its association with other molecules, such as histones, also influences the binding properties of the fluorochromes.
- T-banding: visualizes telomeres.
- Silver staining: Silver nitrate stains the nucleolar organization region-associated protein. This yields a dark region where the silver is deposited, denoting the activity of rRNA genes within the NOR.
Classic karyotype cytogenetics
In the "classic" (depicted) karyotype, a dye, often Giemsa (G-banding), less frequently Quinacrine, is used to stain bands on the chromosomes. Giemsa is specific for the phosphate groups of DNA. Quinacrine binds to the adenine-thymine-rich regions. Each chromosome has a characteristic banding pattern that helps to identify them; both chromosomes in a pair will have the same banding pattern.
Karyotypes are arranged with the short arm of the chromosome on top, and the long arm on the bottom. Some karyotypes call the short and long arms p and q, respectively. In addition, the differently stained regions and sub-regions are given numerical designations from proximal to distal on the chromosome arms. For example, Cri du chat syndrome involves a deletion on the short arm of chromosome 5. It is written as 46,XX,5p-. The critical region for this syndrome is deletion of p15.2 (the locus on the chromosome), which is written as 46,XX,del(5)(p15.2).
Spectral karyotype (SKY technique)
Spectral karyotyping is a molecular cytogenetic technique used to simultaneously visualize all the pairs of chromosomes in an organism in different colors. Fluorescently labeled probes for each chromosome are made by labeling chromosome-specific DNA with different fluorophores. Because there are a limited number of spectrally distinct fluorophores, a combinatorial labeling method is used to generate many different colors. Spectral differences generated by combinatorial labeling are captured and analyzed by using an interferometer attached to a fluorescence microscope. Image processing software then assigns a pseudo color to each spectrally different combination, allowing the visualization of the individually colored chromosomes.
This technique is used to identify structural chromosome aberrations in cancer cells and other disease conditions when Giemsa banding or other techniques are not accurate enough.
Digital karyotyping is a technique used to quantify the DNA copy number on a genomic scale. Short sequences of DNA from specific loci all over the genome are isolated and enumerated. This method is also known as virtual karyotyping.
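The counting idea behind digital karyotyping can be sketched with a toy example. This is not the published laboratory protocol; it simply shows, with made-up region names and tag counts, how enumerating short sequence tags per genomic region and comparing them to an expected count suggests gains or losses in copy number.

```python
from collections import Counter

# Hypothetical tags, each labelled with the genomic region it maps to.
observed_tags = ["chr1_bin1"] * 48 + ["chr2_bin7"] * 103 + ["chr5_bin3"] * 22

# Expected tag count per region for a normal two-copy genome in this sample.
expected_per_region = {"chr1_bin1": 50, "chr2_bin7": 50, "chr5_bin3": 50}

counts = Counter(observed_tags)
for region, expected in expected_per_region.items():
    ratio = counts[region] / expected      # about 1.0 means the normal two copies
    estimated_copies = round(2 * ratio)    # crude copy-number estimate
    print(f"{region}: {counts[region]} tags, estimated copy number {estimated_copies}")
```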
Chromosome abnormalities can be numerical, as in the presence of extra or missing chromosomes, or structural, as in derivative chromosome, translocations, inversions, large-scale deletions or duplications. Numerical abnormalities, also known as aneuploidy, often occur as a result of nondisjunction during meiosis in the formation of a gamete; trisomies, in which three copies of a chromosome are present instead of the usual two, are common numerical abnormalities. Structural abnormalities often arise from errors in homologous recombination. Both types of abnormalities can occur in gametes and therefore will be present in all cells of an affected person's body, or they can occur during mitosis and give rise to a genetic mosaic individual who has some normal and some abnormal cells.
Chromosomal abnormalities that lead to disease in humans include
- Turner syndrome results from a single X chromosome (45, X or 45, X0).
- Klinefelter syndrome, the most common male chromosomal disease, otherwise known as 47, XXY is caused by an extra X chromosome.
- Edwards syndrome is caused by trisomy (three copies) of chromosome 18.
- Down syndrome, a common chromosomal disease, is caused by trisomy of chromosome 21.
- Patau syndrome is caused by trisomy of chromosome 13.
- Trisomy 9, believed to be the fourth most common trisomy, has many long-lived affected individuals, but only in a form other than full trisomy, such as trisomy 9p syndrome or mosaic trisomy 9. Affected individuals often function quite well, but tend to have trouble with speech.
- Trisomy 8 and trisomy 16 are also documented, although affected fetuses generally do not survive to birth.
Some disorders arise from loss of just a piece of one chromosome, including
- Cri du chat (cry of the cat), from a truncated short arm on chromosome 5. The name comes from the babies' distinctive cry, caused by abnormal formation of the larynx.
- 1p36 Deletion syndrome, from the loss of part of the short arm of chromosome 1.
- Angelman syndrome – 50% of cases have a segment of the long arm of chromosome 15 missing; a deletion of the maternal genes, example of imprinting disorder.
- Prader-Willi syndrome – 50% of cases have a segment of the long arm of chromosome 15 missing; a deletion of the paternal genes, example of imprinting disorder.
Chromosomal abnormalities can also occur in cancerous cells of an otherwise genetically normal individual; one well-documented example is the Philadelphia chromosome, a translocation mutation commonly associated with chronic myelogenous leukemia and less often with acute lymphoblastic leukemia.
- Concise Oxford Dictionary
- White 1973, p. 28
- Stebbins, G.L. (1950). "Chapter XII: The Karyotype". Variation and evolution in plants. Columbia University Press.
- King, R.C.; Stansfield, W.D.; Mulligan, P.K. (2006). A dictionary of genetics (7th ed.). Oxford University Press. p. 242.
- Levitsky G.A. 1924. The material basis of heredity. State Publication Office of the Ukraine, Kiev. [in Russian]
- Levitsky G.A. (1931). "The morphology of chromosomes". Bull. Applied Bot. Genet. Plant Breed 27: 19–174.
- Darlington C.D. 1939. Evolution of genetic systems. Cambridge University Press. 2nd ed, revised and enlarged, 1958. Oliver & Boyd, Edinburgh.
- White M.J.D. 1973. Animal cytology and evolution. 3rd ed, Cambridge University Press.
- Kottler MJ (1974). "From 48 to 46: cytological technique, preconception, and the counting of human chromosomes". Bull Hist Med 48 (4): 465–502. PMID 4618149.
- von Winiwarter H. (1912). "Etudes sur la spermatogenese humaine". Arch. Biologie 27 (93): 147–9.
- Painter T.S. (1922). "The spermatogenesis of man". Anat. Res. 23: 129.
- Painter T.S. (1923). "Studies in mammalian spermatogenesis II". J. Expt. Zoology 37 (3): 291–336. doi:10.1002/jez.1400370303.
- Wright, Pearce (11 December 2001). "Joe Hin Tjio The man who cracked the chromosome count". The Guardian.
- Tjio J.H, Levan A. (1956). "The chromosome number of man". Hereditas 42: 1–6. doi:10.1111/j.1601-5223.1956.tb03010.x.
- Hsu T.C. 1979. Human and mammalian cytogenetics: a historical perspective. Springer-Verlag, NY.
- Human chromosome 2 is a fusion of two ancestral chromosomes Alec MacAndrew; accessed 18 May 2006.
- Evidence of common ancestry: human chromosome 2 (video) 2007
- A preparation which includes the dyes Methylene Blue, Eosin Y and Azure-A,B,C
- Gustashaw K.M. 1991. Chromosome stains. In The ACT Cytogenetics Laboratory Manual 2nd ed, ed. M.J. Barch. The Association of Cytogenetic Technologists, Raven Press, New York.
- Stebbins, G.L. (1971). Chromosomal evolution in higher plants. London: Arnold. pp. 85–86.
- Thompson & Thompson Genetics in Medicine 7th Ed
- Godfrey LR, Masters JC (August 2000). "Kinetochore reproduction theory may explain rapid chromosome evolution". Proc. Natl. Acad. Sci. U.S.A. 97 (18): 9821–3. Bibcode:2000PNAS...97.9821G. doi:10.1073/pnas.97.18.9821. PMC 34032. PMID 10963652.
- Maynard Smith J. 1998. Evolutionary genetics. 2nd ed, Oxford. p218-9
- Goday C, Esteban MR (March 2001). "Chromosome elimination in sciarid flies". BioEssays 23 (3): 242–50. doi:10.1002/1521-1878(200103)23:3<242::AID-BIES1034>3.0.CO;2-P. PMID 11223881.
- Müller F, Bernard V, Tobler H (February 1996). "Chromatin diminution in nematodes". BioEssays 18 (2): 133–8. doi:10.1002/bies.950180209. PMID 8851046.
- Wyngaard GA, Gregory TR (December 2001). "Temporal control of DNA replication and the adaptive value of chromatin diminution in copepods". J. Exp. Zool. 291 (4): 310–6. doi:10.1002/jez.1131. PMID 11754011.
- Gilbert S.F. 2006. Developmental biology. Sinauer Associates, Stamford CT. 8th ed, Chapter 9
- King, Stansfield & Mulligan 2006
- Carrel L, Willard H (2005). "X-inactivation profile reveals extensive variability in X-linked gene expression in females". Nature 434 (7031): 400–404. doi:10.1038/nature03479. PMID 15772666.
- Wurster DH, Benirschke K (June 1970). "Indian muntjac, Muntiacus muntjak: a deer with a low diploid chromosome number". Science 168 (3937): 1364–6. Bibcode:1970Sci...168.1364W. doi:10.1126/science.168.3937.1364. PMID 5444269.
- Crosland M.W.J. & Crozier, R.H. (1986). "Myrmecia pilosula, an ant with only one pair of chromosomes". Science 231 (4743): 1278. Bibcode:1986Sci...231.1278C. doi:10.1126/science.231.4743.1278. PMID 17839565.
- Khandelwal S. (1990). "Chromosome evolution in the genus Ophioglossum L". Botanical Journal of the Linnean Society 102 (3): 205–217. doi:10.1111/j.1095-8339.1990.tb01876.x.
- Kim, D.S.; Nam, Y.K.; Noh, J.K.; Park, C.H.; Chapman, F.A. (2005). "Karyotype of North American shortnose sturgeon Acipenser brevirostrum with the highest chromosome number in the Acipenseriformes" (PDF). Ichthyological Research 52 (1): 94–97. doi:10.1007/s10228-004-0257-z.
- Matthey, R. (1945-05-15). "L'evolution de la formule chromosomiale chez les vertébrés". Experientia (Basel) 1 (2): 50–56. doi:10.1007/BF02153623. Retrieved 2011-03-16.
- de Oliveira, R.R.; Feldberg, E.; dos Anjos, M.B.; Zuanon, J. (July–September 2007). "Karyotype characterization and ZZ/ZW sex chromosome heteromorphism in two species of the catfish genus Ancistrus Kner, 1854 (Siluriformes: Loricariidae) from the Amazon basin". Neotropical Ichthyology (Sociedade Brasileira de Ictiologia) 5 (3): 301–6. doi:10.1590/S1679-62252007000300010.
- Pellicciari, C.; Formenti, D.; Redi, C.A.; Manfredi, M.G.; Romanini (February 1982). "DNA content variability in primates". Journal of Human Evolution 11 (2): 131–141. doi:10.1016/S0047-2484(82)80045-6.
- Souza, A.L.G.; de O. Corrêa, M.M.; de Aguilar, C.T.; Pessôa, L.M. (February 2011). "A new karyotype of Wiedomys pyrrhorhinus (Rodentia: Sigmodontinae) from Chapada Diamantina, northeastern Brazil" (PDF). Zoologia 28 (1): 92–96. doi:10.1590/S1984-46702011000100013.
- Weksler, M.; Bonvicino, C.R. (2005-01-03). "Taxonomy of pygmy rice rats genus Oligoryzomys Bangs, 1900 (Rodentia, Sigmodontinae) of the Brazilian Cerrado, with the description of two new species" (PDF). Arquivos do Museu Nacional, Rio de Janeiro 63 (1): 113–130. ISSN 0365-4508.
- Stebbins, G.L. (1940). "The significance of polyploidy in plant evolution". The American Naturalist 74 (750): 54–66. doi:10.1086/280872.
- Stebbins 1950
- Comai L (November 2005). "The advantages and disadvantages of being polyploid". Nat. Rev. Genet. 6 (11): 836–46. doi:10.1038/nrg1711. PMID 16304599.
- Adams KL, Wendel JF (April 2005). "Polyploidy and genome evolution in plants". Curr. Opin. Plant Biol. 8 (2): 135–41. doi:10.1016/j.pbi.2005.01.001. PMID 15752992.
- Stebbins 1971
- Gregory, T.R.; Mable, B.K. (2011). "Ch. 8: Polyploidy in animals". In Gregory, T. Ryan. The Evolution of the Genome. Academic Press. pp. 427–517. ISBN 978-0-08-047052-8.
- White, M.J.D. (1973). The chromosomes (6th ed.). London: Chapman & Hall. p. 45.
- Lilly M.A., Duronio R.J. (2005). "New insights into cell cycle control from the Drosophila endocycle". Oncogene 24 (17): 2765–75. doi:10.1038/sj.onc.1208610. PMID 15838513.
- Edgar BA, Orr-Weaver TL (May 2001). "Endoreplication cell cycles: more for less". Cell 105 (3): 297–306. doi:10.1016/S0092-8674(01)00334-8. PMID 11348589.
- Nagl W. 1978. Endopolyploidy and polyteny in differentiation and evolution: towards an understanding of quantitative and qualitative variation of nuclear DNA in ontogeny and phylogeny. Elsevier, New York.
- Stebbins, G. Ledley, Jr. 1972. Chromosomal evolution in higher plants. Nelson, London. p18
- IJdo JW, Baldini A, Ward DC, Reeders ST, Wells RA (October 1991). "Origin of human chromosome 2: an ancestral telomere-telomere fusion". Proc. Natl. Acad. Sci. U.S.A. 88 (20): 9051–5. Bibcode:1991PNAS...88.9051I. doi:10.1073/pnas.88.20.9051. PMC 52649. PMID 1924367.
- Rieger, R.; Michaelis, A.; Green, M.M. (1968). A glossary of genetics and cytogenetics: Classical and molecular. New York: Springer-Verlag. ISBN 9780387076683.
- White 1973, p. 169
- Clague, D.A.; Dalrymple, G.B. (1987). "The Hawaiian-Emperor volcanic chain, Part I. Geologic evolution". In Decker, R.W.; Wright, T.L.; Stauffer, P.H. Volcanism in Hawaii (PDF) 1. pp. 5–54. U.S. Geological Survey Professional Paper 1350.
- Carson HL (June 1970). "Chromosome tracers of the origin of species". Science 168 (3938): 1414–8. Bibcode:1970Sci...168.1414C. doi:10.1126/science.168.3938.1414. PMID 5445927.
- Carson HL (March 1983). "Chromosomal sequences and interisland colonizations in Hawaiian Drosophila". Genetics 103 (3): 465–82. PMC 1202034. PMID 17246115.
- Carson H.L. (1992). "Inversions in Hawaiian Drosophila". In Krimbas, C.B.; Powell, J.R. Drosophila inversion polymorphism. Boca Raton FL: CRC Press. pp. 407–439. ISBN 0849365473.
- Kaneshiro, K.Y.; Gillespie, R.G.; Carson, H.L. (1995). "Chromosomes and male genitalia of Hawaiian Drosophila: tools for interpreting phylogeny and geography". In Wagner, W.L.; Funk, E. Hawaiian biogeography: evolution on a hot spot archipelago. Washington DC: Smithsonian Institution Press. pp. 57–71.
- Craddock E.M. (2000). "Speciation processes in the adaptive radiation of Hawaiian plants and animals". Evolutionary Biology 31: 1–43. doi:10.1007/978-1-4615-4185-1_1. ISBN 978-1-4613-6877-9.
- Ziegler, Alan C. (2002). Hawaiian natural history, ecology, and evolution. University of Hawaii Press. ISBN 978-0-8248-2190-6.
- Lisa G. Shaffer, Niels Tommerup, ed. (2005). ISCN 2005: An International System for Human Cytogenetic Nomenclature. Switzerland: S. Karger AG. ISBN 3-8055-8019-3.
- Schröck E, du Manoir S, Veldman T, et al. (July 1996). "Multicolor spectral karyotyping of human chromosomes". Science 273 (5274): 494–7. Bibcode:1996Sci...273..494S. doi:10.1126/science.273.5274.494. PMID 8662537.
- Wang TL, Maierhofer C, Speicher MR, et al. (December 2002). "Digital karyotyping". Proc. Natl. Acad. Sci. U.S.A. 99 (25): 16156–61. Bibcode:2002PNAS...9916156W. doi:10.1073/pnas.202610899. PMC 138581. PMID 12461184.
- Media related to Karyotypes at Wikimedia Commons
- Making a karyotype, an online activity from the University of Utah's Genetic Science Learning Center.
- Karyotyping activity with case histories from the University of Arizona's Biology Project.
- Printable karyotype project from Biology Corner, a resource site for biology and science teachers.
- Chromosome Staining and Banding Techniques
- Bjorn Biosystems for Karyotyping and FISH |
Einstein was born during the imperial era in Germany in 1879. He died 76 years later in Princeton, New Jersey exactly one decade after the defeat of Nazi Germany and the dropping of the atomic bombs on Japan. He thus witnessed the two world wars, the high point and demise of the old European order, and the rise of industrialization and new technologies such as telephones, automobiles, X-rays, and radioactivity. But Einstein himself inaugurated some of the most fundamental transformations of his age, including the rise of theoretical physics, the extension of Newtonian mechanics to the submicroscopic realm of atoms and nuclei, and the birth of relativity theory. Einstein was thus both a product and a shaper of the scientific and cultural context in which he lived and worked.
Einstein grew up during the years following the unification of Germany in 1871, a time of widespread growth in European industrial power, strong militaristic nationalism, and imperialist expansion. Technological advances led to a renewed faith in material progress, especially with the replacement of the old steam- and mechanically powered world with the new modern "electropolis." The rise of electric power challenged the reigning nineteenth-century mechanical worldview, which holds that all matter obeys Newton's laws of motion and that all natural phenomena arise from the interactions of moving matter. New advances in electromagnetic theory by nineteenth-century scientists such as Michael Faraday and James Clerk Maxwell could not be explained in terms of the old mechanical picture, and physicists in Einstein's day were confronted with the challenge of finding a complete mechanical account of electrodynamic theory that was consistent with the Newtonian paradigm.
Einstein grew up as a Jew in time of rising anti-Semitism. The reverberations of the Dreyfus Affair in France spread across Europe in the 1890s and inspired early Zionist thinkers such as Theodore Herzl to work towards the creation of a Jewish state. In 1911, the headquarters of the Zionist movement relocated to Berlin, where Einstein was teaching. Thus in spite of his own disavowal of traditional religious rituals and traditions, Einstein became involved in one of the greatest movements in Jewish history. Einstein lived just long enough to witness the creation of the Jewish state of Israel in 1948; he was even asked to be the president of the new nation in 1952, an offer he graciously declined.
Einstein's support of the Zionist movement was partially a response to the rampant anti-Semitism that spread across Germany with the rise of the National Socialist (Nazi) Party in January 1933. Under the infamous Law for the Restoration of the Career Civil Service of April 1, 1933, the Nazis excluded Jews from all state posts, including universities and other research institutions. Physics was one of the disciplines most devastatingly affected by this new law, suffering a loss of at least 25% of its 1932-33 personnel. Yet even before the 1930s, many academicians were increasingly suspicious of the high rate of Jewish participation in medicine and the natural sciences. This anti-Semitic sentiment was combined with a more general suspicion of the materialism and commercialism associated with science as a field. Hitler held mathematics and the physical sciences in low regard in comparison to those disciplines that promoted Kultur, man's humanistic achievements in society. Einstein, as a Jew and as a physicist, was one of the first targets of Nazi propaganda.
In contrast, in America, science enjoyed enormous prestige in the 1920s and 1930s; thus when Einstein arrived on a tour of the country in 1922, he was hailed as a hero. The 1920s witnessed the rapid growth of the physics community in America, including a rise in the numbers of Jews in the sciences, since science was one of the few fields that offered American Jews the opportunity for professional status in the gentile world. The 1920s and 1930s were also years of mass popularization and politicization of science. Thus, the arrival of refugees from Europe (such as Einstein) in the years immediately preceding World War II only served to strengthen what was already one of the strongest and most vigorous branches of the world physics community at the time. |
What are leukoplakia and erythroplakia?
Leukoplakia and erythroplakia are lesions observed most frequently on the mucosa of the mouth, but also on occasion in the throat and on the vocal folds. They are commonly seen in smokers, individuals exposed to toxic irritants, and patients with nutritional deficiencies - but they also occur in the absence of such factors.
Leukoplakia and erythroplakia are generally considered precancerous lesions, although their potential to turn into cancer cannot be assessed by their appearance. This risk is assessed by examining samples under a microscope for a characteristic of cells called dysplasia. Having dysplasia does not guarantee that cells are cancerous, but it places patients at a higher risk level than the general population for eventual cancer development. Even so, the majority of leukoplakia and erythroplakia lesions, including those with dysplasia, will never lead to cancer formation.
What are the symptoms of leukoplakia and erythroplakia?
Leukoplakia and erythroplakia can go unnoticed when they occur in the mouth or the throat, and are sometimes found during a routine head and neck examination or voice screening. When they affect the vocal folds, they generally cause voice changes that can range from simple hoarseness to complete loss of voice. Symptoms typically evolve slowly.
What do leukoplakia and erythroplakia look like?
Leukoplakia means “white patch” and erythroplakia means “red patch”, which are self-explanatory descriptions of the appearance of these lesions. Frequently, the patches appear somewhat thickened as a result of excess keratin formation (keratosis), similar to what is seen in corns and calluses. A combination of white and red areas can be seen in the same lesion - this is known as erythroleukoplakia.
How are leukoplakia and erythroplakia treated?
Once leukoplakia and/or erythroplakia are identified, a biopsy and specialized examination of the specimen under a microscope are usually needed in order to determine whether the lesions harbor dysplasia. Subsequent management will depend on the location of the lesion and the nature of the biopsy findings. The options can include a combination of simple observation, laser treatments in the office, or surgical removal (microlaryngoscopy). Smoking cessation and avoidance of other irritants should always be part of the treatment strategy.
When leukoplakia affects the vocal folds, surgical treatment may sometimes be necessary to improve voice function, even in the absence of dysplasia or other concerning features on biopsy.
What does it really say?
CSE Briefing Paper 1
To understand the Kyoto Protocol (KP), adopted in Kyoto on December 11, 1997, it is important to read it together with the Framework Convention on Climate Change (FCCC). The following paragraphs explain what the Kyoto Protocol says and, where necessary, the relevant articles of the FCCC have been quoted. And, wherever possible, the implications of the various clauses of the Kyoto Protocol have been drawn out, especially from the point of view of developing countries.
The preamble to the Kyoto Protocol states that the Protocol has been developed to meet the ‘ultimate objective’ of the FCCC as stated in its Article 2. The objective of the FCCC is "to achieve ... stabilisation of greenhouse gas concentrations in the atmosphere at a level that would prevent anthropogenic interference with the climate system. Such a level should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner."
The preamble further states that the development of the KP has been guided by Article 3 of the FCCC, which lists the following guiding principles, among others:
Nations which have become a party to the FCCC will take action to protect the climate system keeping in mind the following:
the benefit of present and future generations; equity; and the common but differentiated responsibilities and respective capabilities of nations. Developed countries are expected to take the lead in combating climate change and its adverse effects.
The special needs and circumstances of developing countries will be given full consideration, especially the needs of those that are particularly vulnerable to the adverse effects of climate change, and those that would have to bear a disproportionate or abnormal burden under the FCCC.
Signatories to the FCCC will take precautionary measures keeping the following in mind:
anticipate, prevent or minimise the causes of climate change and mitigate its adverse effects;
where there are threats of serious or irreversible damage, lack of full scientific certainty will not be used as a reason for postponing such measures;
policies and measures to deal with climate change will be cost-effective so as to ensure global benefits at the lowest possible cost; and,
these policies and measures should take into account different socio-economic contexts, be comprehensive, cover all relevant sources, sinks and reservoirs of greenhouse gases and adaptation, and cover all economic sectors.
Interested nations can cooperate amongst themselves to address climate change.
Nations have a right to, and should, promote sustainable development. Policies and measures should be appropriate for the specific conditions of each nation and should be integrated with national development programmes, taking into account that economic development is essential for adopting measures to address climate change.
Nations should cooperate to promote a supportive and open international economic system so that there is sustainable economic growth in all nations, especially developing countries, which will enable them to address climate change in a better way. Measures taken to combat climate change, including unilateral ones, should not lead to arbitrary or unjustifiable discrimination or a disguised restriction on international trade.
Equity and common but differentiated responsibilities are, therefore, important guiding principles of the Kyoto Protocol.
Article 1: Definitions
This article defines conference of parties, convention, Intergovernmental Panel on Climate Change, Montreal Protocol, parties present and voting, party, and party included in Annex I.
A party included in Annex I means a nation included in Annex I to the FCCC. These are developed countries and other nations that have committed themselves to the actions specifically provided under Article 4(2) of the FCCC. These actions, described under Article 4, sub-paragraphs 2(a) and 2(b), are:
adoption of national policies and measures and implementation of corresponding measures to mitigate climate change by limiting their anthropogenic emissions of greenhouse gases and protecting their sinks and reservoirs; and,
keeping other nations informed about their policies and measures so that this information can be reviewed by the Conference of Parties (CoP).
Sub-paragraph 4(2)(d) says that the CoP will, in the light of the best available scientific information, review the commitments made under sub-paragraphs 4(2)(a) and 4(2)(b) and adopt amendments to these commitments.
Sub-paragraph 4(2)(f) says that the CoP, no later than December 31, 1998, will review Annexes I and II and take decisions to amend them with the approval of the nation concerned. This means that CoP-4, to be held in Buenos Aires, could amend Annexes I and II. Sub-paragraph 4(2)(g) says that any nation can notify, at any time, that it is prepared to be bound by sub-paragraphs 4(2)(a) and 4(2)(b).
This means that nations can take on voluntary commitments and join Annex I countries. |
Technology integration is a buzzword in education right now, and has been for a few years. The way we live our lives is changing due to technology, and so is the way we teach. Here are a few benefits of technology use in the classroom.
One benefit of using technology in the classroom is that it allows students to construct their own knowledge as they take problem solving into their own hands. Rather than being passive listeners, students are actively using their 21st century skills of accessing, evaluating, and applying information. Whether they are using tablets, computers, or cell phones, students are learning to use these tools as consumers and producers of information, an important lifelong skill for many professions today.
As educators, we know how individual the learning process can be for students. Technology provides students and teachers with more resources and tools to meet diverse needs. Using technology resources in the classroom also helps diverse learners succeed by providing multiple means of representation, expression, and engagement, helping teachers align their lessons with the principles of Universal Design for Learning.
As we know, it is important to have strong communication and problem solving skills. There are many technology resources that provide students with the opportunity to learn and share online. Google Drive allows students to work on the same document simultaneously while chatting about ideas and feedback. The website epals.com connects classrooms around the country as they complete research projects together. Collaborative learning websites foster skills that help prepare students for college and career.
As we can see, there are great benefits to using technology in the classroom. However, it is important to remember that technology is constantly evolving and changing. Thankfully, there are excellent resources available to teachers online, including websites like edutopia.org and iste.org that provide information from researchers, teachers, and writers in the field of educational technology.
Pop Art was a visual art movement that began in the 1950s and was influenced by popular mass culture drawn from television, movies, advertisements and comic books. The consumer boom of the 1950s and the general sense of optimism throughout the culture influenced the work of pop artists. As more products were mass-marketed and advertised, artists began creating art from the symbols and images found in the media.
Pop Art emerged as a reaction to the Abstract Expressionist movement, which attempted to convey feelings and emotions through large, rapidly painted gestural works. According to artist Charles Moffat, Jasper Johns and Robert Rauschenberg led the American Pop Art movement, employing images of popular culture that targeted a broader audience. These images often emphasized kitsch items recognizable to the masses. The work was characterized by clear lines, sharp paintwork and the clear representation of symbols, people and objects. This art movement coincided with the globalization of pop music and a youthful culture, drawing upon musical artists, such as Elvis and The Beatles, and representing them in art.
Jasper Johns was inspired by many ideas from the Dada movement. He took from artist Marcel Duchamp his idea of readymades, or found objects. Rather than found objects, Johns used found images, such as flags, targets, letters and numbers. Johns found these familiar subjects to be immediately recognizable to the audience but neutral enough that he could explore the visual and physical qualities of his medium on many levels.
Robert Rauschenberg combined found images with other real objects. He worked in collage form, combining materials he found in his neighborhood with painting. He developed the process of combining oil painting with silkscreen, which allowed him to experiment with images he found in newspapers, magazines, television and film. Rauschenberg was then able to reproduce these images in different sizes and colors and as elements on canvases or in print. His work emphasized the mass media, production and consumerism that bombarded the public on a daily basis through advertising and marketing, relating them to one's own individual experience and understanding.
Using advanced techniques at the Canadian Light Source (CLS) at the University of Saskatchewan, scientists have created three-dimensional images of the complex interior anatomy of the human ear, information that is key to improving the design and placement of cochlear implants.
“With the images, we can now see the relationship between the cochlear implant electrode and the soft tissue, and we can design electrodes to better fit the cochlea,” said Dr. Helge Rask-Andersen, senior professor at Uppsala University in Sweden.
“The technique is fantastic and we can now assess the human inner ear in a very detailed way.”
The cochlea is the part of the inner ear that looks like a snail shell and receives sound in the form of vibrations. In cases of hearing loss, cochlear implants are used to bypass damaged parts of the ear and directly stimulate the auditory nerve. The implant generates signals that travel via the auditory nerve to the brain and are recognized as sound.
By imaging the soft and bony structures of the inner ear with implant electrodes in place, Rask-Andersen said the researchers were able to discover what the auditory nerve looks like in three dimensions, and to learn how cochlear implant electrodes behave inside the cochlea. This is very important when cochlear implants are considered for people with limited hearing.
Image: the inner ear |
Native observations of change in the marine environment of the Bering Strait region
Special Advisor on Native Affairs
Marine Mammal Commission
P.O. Box 217
Kotzebue, AK 99752, USA
Since the late 1970s, Alaska Natives in communities along the coast of the northern Bering and Chukchi Seas have noticed substantial changes in the ocean and the animals that live there. While we are used to changes from year-to-year in weather, hunting conditions, ice patterns, and animal populations, the past two decades have seen clear trends in many environmental factors. If these trends continue, we can expect major, perhaps irreversible, impacts to our communities. With these concerns in mind, we believe this workshop will be a vital opportunity to discuss our concerns and observations with scientists who are working on similar issues in the same area, and to work together to figure out what can be done.
Beginning in the late 1970s, the patterns of wind, temperature, ice, and currents in the northern Bering and Chukchi Seas have changed. The winds are stronger, commonly 15-25 mph, and there are fewer calm days. The wind may shift in direction, but remains strong for long periods. In spring, the winds change the distribution of the sea ice and combine with warm temperatures to speed up the melting of ice and snow. When the ice melts or moves away early, many marine mammals go with it, taking them too far away to hunt. Near some villages (such as Savoonga, Diomede, and Shishmaref), depending on the geography of the coast, the wind may force the pack ice into shore, making it impossible to get boats to open water to go hunting or to move boats through if they are already out. The high winds also make it difficult to travel in boats for hunting (even winds of 10-12 mph from the wrong direction can create waves 2-3 feet high, stopping small boats), reducing the number of days that hunters can go out. For all these reasons, access to animals during the spring hunting period is lower now than it was before.
From mid-July to September, there has been more wind from the south, making for a wetter season. With less sea ice and more open water, fall storms have become more destructive to the coastline. Erosion has increased in many areas, including the locations of some villages, such as Shishmaref and Kivalina, threatening houses and perhaps the entire community. Wave action has changed some sandy beaches into rocky ones, as the sand washes away. There have been no new sandy beaches, but there are many new rocky ones.
The south shore of St. Lawrence Island has also been affected a great deal by erosion in recent years. Some shallow spits that used to be above water are now underwater, due perhaps to a combination of higher water and erosion. The storms and high waves (up to 30 feet) also change the sea bed near shore. After storms, kelp and other bottom-dwelling plants and animals such as clams can be found washed up on the beach. These disturbances to the bottom affect shallow feeders such as eiders.
The formation of sea ice in fall has been late in many recent years, due largely to warmer winters, though winds play a role as well. In such years, the ice, when it does form, is thinner than usual, which contributes to early break-up in spring. Another aspect of late freeze-up is the way in which sea ice forms. Under normal conditions, the water is cold in fall, and permafrost under the water and near the shoreline helps create ice crystals on the sea floor. When they are large enough, these crystals float to the top, bringing with them sediments. The sediments have nutrients used by algae growing in the ice, thus stimulating the food chain in and near the ice. When the ice melts in spring, the sediments are released, providing nutrients in the melt water. In years with warm summers and late freeze-up, on the other hand, the water is warm and freezes first from the top as it is cooled by cold winds in late fall or early winter. Less ice is brought up from the bottom, and fewer nutrients are available in the ice and in the melt water the following spring, and overall productivity is lower.
Precipitation patterns have also changed. In the last two years, there has been little snow in fall and most of the winter, but substantial snowfall in late winter and early spring. In the winter of 1998-99, the weather was cold so that the ice was thick, but there was no snow. The lack of snow makes it difficult for polar bears and ringed seals to make dens for giving birth or, in the case of male polar bears, to seek protection from the weather. The lack of ringed seal dens may affect the numbers and condition of polar bears, which prey on ringed seals and often seek out the dens. Hungry polar bears may be more likely to approach villages and encounter people.
Other marine mammals have been affected to greater or lesser degrees by the changes in sea ice, wind, and temperature. The physical condition of walrus was generally poor in 1996-98, as the animals were skinny and their productivity was low. One cause was the reduced sea ice, which forced the walrus to swim farther between feeding areas in relatively shallow water and resting areas on the distant ice. This is the pattern for females and young in summer, and when the ice retreated far to the north in the Chukchi Sea, the animals suffered. Males typically haul out on land, and may have eaten most of the food near the haulouts, forcing them to go farther in search of clams. Due to wave action and sedimentation, the productivity of the sea bed may have declined, too, making it harder for walrus to find food. In the spring of 1999, however, the walrus were in good condition following a cold winter with good ice formation in the Bering Sea. When the winter ice forms late and is too thin, walrus cannot haul out and rest the way they need to, and they will be in poor condition the following spring.
Most seals seem to be doing fairly well. Hunters have been having more success hunting bearded seals lately. The seals are in good condition, and it may be that there are more of them or that they are concentrated in hunting areas for some reason. Spotted seals, on the other hand, seem to have declined from the late 1960s/early 1970s to the present. In 1996 and 1997, in which spring break-up came early, there were more strandings of baby ringed seals on the beach. These weanlings were probably left on their own too early. The mothers train their young on the shorefast ice where they den, but if the ice melts, the seals must abandon their dens early. Ringed seals seem to need more time to train their young, and are greatly affected if spring is early. There are fewer seals in the Nome area these days, perhaps as a result of less shore ice for ringed seal dens.
There are many other biological changes and effects in the region, such as:
- In spring, bird migrations are early. Geese and songbirds have been arriving in late April, earlier than ever before. Sudden cold snaps at this time of year can harm the birds. Snipe seem to be affected most, perhaps because they need unfrozen ground to feed, and many die in such cold spells.
- In August of 1996 and 1997, there were large die-offs of kittiwakes and murres, though other birds seem to be doing reasonably well.
- In the warm summers, especially if they are also dry, many different kinds of insects appear on the tundra. These include lots of caterpillars on bushes, and then butterflies. Other bugs that haven't been seen before have appeared, though mosquitoes are still the same.
- Chum salmon in Norton Sound crashed in the early 1990s, and have been down ever since.
- The treeline has moved westward across the Seward and Baldwin Peninsulas (i.e., into formerly treeless areas). Bushes are getting bigger and taller. Willows are now like trees, taller than houses, whereas in the 1970s they were small and scrubby.
- Mild winters with little snow have been good for ptarmigan, which are healthy and abundant. This may also be a result of low hare populations, leaving little competition for the ptarmigan.
There is no record of this type of extended change. In the 1880s, during the time of the Great Famine in western Alaska, there were very cold winters for a long period. The main factor in the famine was the decimation of walrus and whale populations due to the commercial harvest by Yankee whalers, but lots of ice and the long, cold winters did not make things easier.
As we think about the future and where these trends may lead us, we wonder what alternatives are available to Native villages in Alaska and elsewhere in the Arctic. If marine mammal populations are no longer available or accessible to our communities, what can replace them? In the Great Famine, there were no alternatives to the food provided by hunting and fishing. Today, there are stores with food and other resources that can be harvested. A gradual change might give us time to adjust, but a sudden shift might catch us unprepared and cause great hardship. As managers, we need to think about the overall effects on marine mammals and other resources. Some may adjust, but others will not. The polar bear and walrus are likely to be the most affected. With these thoughts in mind, we need to consider the potential emergencies facing villages that depend so heavily on marine mammals. How can we prepare ourselves, and how much can be done to prevent hardship?
Our ancestors taught us that the Arctic environment is not constant, and that some years are harder than others. But they also taught us that hard years are followed by times of greater abundance and celebration. As we have found with other aspects of our culture's ancestral wisdom, modern changes, not of our doing, make us wonder when the good years will return.
Unaided and under the darkest conditions, the human eye can see only about 9,000 stars around Earth. The Large Synoptic Survey Telescope (LSST)—looking at only half of the night sky—is expected to detect an estimated 17 billion stars and discover much more over the course of a 10-year mission. However, without the crucial roles played by Lawrence Livermore, this major new telescope, now being constructed in Chile, might still be science fiction rather than on the verge of delivering game-changing science.
From icy comets to stealthy planetoids, exploding stars, newborn galaxies, and everything in between, billions of objects are expected to be discovered by LSST during its survey of the universe. What also thrills stargazers and researchers alike is its great potential to yield a myriad of unexpected discoveries. LSST’s core science areas are investigating the nature of dark matter and dark energy; cataloging moving bodies in the solar system, including hazardous asteroids; exploring the changing sky; and further understanding the formation and structure of the Milky Way.
Ground- and space-based telescopes are typically optimized to address one or two of these areas, resulting in designs that inhibit study in the others. However, the ingenuity of LSST’s design, its grand size, and its use of groundbreaking technology promise to open the universe to exploration leaps ahead of what the telescopes of today can do. LSST director Steve Kahn, a physicist at Stanford University and the SLAC National Accelerator Laboratory, says, “Livermore has played a very significant technical role in the camera and a historically important role in the telescope design.” Livermore’s researchers made essential contributions to LSST’s optical design, lens and mirror fabrication, the way LSST will survey the sky, how it compensates for atmospheric turbulence and gravity, and more. Kahn adds, “Livermore also plays a substantial role in the science of dark energy.”
Livermore joins forces with a team of hundreds representing dozens of domestic institutions and international contributors from more than twenty countries. The consortium also relies on industry experts to fabricate components that have designs pushing well beyond current boundaries. LSST Corporation formed in 2002 and began privately funding the early development of LSST, including mirror fabrication and survey operations. The Department of Energy (DOE) funds camera fabrication, while the National Science Foundation funds the remaining telescope fabrication, facility construction, data management, and education and outreach efforts.
The telescope is scheduled to begin full scientific operations in 2023. The first stone of the LSST Summit Facility was laid in April 2015 on El Peñón, a peak 2,682 meters high along the Cerro Pachón ridge in the Andes Mountains and located 354 kilometers north of Santiago, the capital of Chile. Locations around the world were scrutinized to find the most suitable site. Chile—already a world leader as a site for modern mountaintop telescopes—won out by offering the best combination of high altitude, for less atmosphere to peer through; desolation, for less light pollution; dry environment, for fewer cloudy days; stable air, for less turbulence; and the infrastructure necessary for construction and operation.
Every night for 10 years, LSST will conduct a wide, deep, and fast survey comprising roughly 1,000 “visits” that together will canvass a third of the sky above the Southern Hemisphere. In each visit, the telescope will capture a pair of 15-second exposures before moving on to a new location. Taking back-to-back exposures of a single patch of sky will help eliminate erroneous detections, such as when cosmic rays strike the camera’s detector. After finishing a visit, a neighboring part of the sky typically will be chosen to minimize slew time—the approximately 5 seconds that LSST will need to reposition itself and for vibrations to settle down to a level that will not impede image resolution. However, LSST can respond to sudden changes in surrounding conditions, such as clouds appearing, by optimizing its survey pattern on the fly and rapidly switching to a filter that provides more favorable viewing. Furthermore, for every spot visited, LSST will return an hour later to take another pair of exposures. Thus every three days LSST will take four images of every single patch of sky in the observable Southern Hemisphere.
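As a rough sanity check on this cadence, the per-visit figures quoted here (two 15-second exposures, about 5 seconds to slew and settle, and the roughly 2-second readout mentioned later in the article) imply that 1,000 visits consume on the order of a full night of observing. The sketch below is only back-of-the-envelope arithmetic using those approximate figures, not an official time budget.

```python
# Back-of-the-envelope time budget for one night of the survey cadence
# described above. All figures are approximations taken from this article:
# two 15-second exposures per visit, ~2 s readout, ~5 s slew-and-settle.
EXPOSURE_S = 15.0
READOUT_S = 2.0
SLEW_S = 5.0
VISITS_PER_NIGHT = 1000

seconds_per_visit = 2 * EXPOSURE_S + READOUT_S + SLEW_S      # ~37 s
hours_per_night = VISITS_PER_NIGHT * seconds_per_visit / 3600.0

print(f"~{seconds_per_visit:.0f} s per visit")
print(f"~{hours_per_night:.1f} hours of observing for 1,000 visits")  # roughly 10 hours
```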
In the early days of LSST, now-retired Livermore physicist Kem Cook helped develop the operations simulator, which schedules where the telescope will point for every exposure over the 10 years. Kahn says, “Cook’s scientific interests were in time domain astronomy—looking at how things changed with time. How we sample the sky is key to the time history of all these exposures.” Over 10 years, the telescope will generate 5.5 million individual images. When stitched together like individual movie frames, the images will yield what LSST’s builders call “the first motion picture of the universe.” In fact, the total image data produced will be the equivalent of 15,500 feature-length motion pictures on 35-millimeter film. This moving picture prowess will evolve astronomy from traditional static views of the universe to stellar time-lapse photography, like studying how a bird flies in real time instead of only looking at individual photographs.
In a galaxy far, far away is the preamble to a well-known science-fiction epic, but 20 billion very far-away galaxies is what LSST expects to view. Instead of the “dark side,” the telescope will investigate the mysteries of dark matter and dark energy. A cosmic accounting of all the solids, liquids, gases, and other identifiable forms of matter does not even come close to estimates of the grand total of all matter in the universe. “Dark matter is about 95 percent of the total mass that we can infer exists in the universe,” explains Michael Schneider, the physicist leading Livermore’s LSST science efforts, which are funded primarily by the Laboratory Directed Research and Development (LDRD) Program and DOE’s Office of Science. “Furthermore, all mass in the universe—dark matter and normal matter—accounts for only about a third of the total energy density. The other two-thirds is dubbed dark energy.” Do dark matter and dark energy exist only in deep space? Or do they also exist in our solar system and even around Earth, as well? Do they help comprise our planet and our physical bodies? Such are the questions that researchers will seek to answer with LSST.
In addition, light from bright galaxies can be distorted by the gravitational force from dark matter caught in the line of sight. This gravitational lensing can be understood by measuring the shapes of galaxies. Schneider says, “LSST may be the final ground-based survey instrument built in our lifetimes to measure that effect with subpercent precision, map the mass in the universe, and thereby locate dark matter and dark energy and help figure out what they are.”
Although mystery currently shrouds the composition of dark matter and dark energy, scientists understand all too well the compositions of hidden “space invaders”—objects on an intercept course with Earth. The wide, fast, and deep features of LSST make it unquestionably the most efficient way of identifying near-Earth objects (NEOs) whose orbits cross Earth’s and so could one day strike our planet. The search for NEOs is the primary reason for the hourly revisits programmed into LSST’s survey cadence. The time between the first and second pairs of images is enough to show differences revealing an NEO, which can then be tracked and its orbit determined. If LSST’s 10-year mission is extended by a couple of years, the telescope could detect 90 percent of all potentially hazardous NEOs larger than 140 meters in diameter. LSST could also provide 1 to 3 months of warning for those as small as 45 meters. Even an NEO with a diameter of less than 100 meters could impact Earth with the force of a nuclear bomb. Advance notice offers the time needed to defeat these otherworldly threats. (See S&TR, December 2016, Making an Impact on Asteroid Deflection.)
An important parameter of a telescope is its étendue, or “grasp,” defined as the field of view multiplied by the effective collecting area of the primary mirror. At 319 square meters times square degrees (m²·deg²), LSST’s étendue exceeds that of any current facility by more than a factor of 10. Such an enormous increase will lead to science opportunities never before possible. Images taken with such great breadth, speed, and depth will capture fleeting astronomical events, possibly recording up to 4 million supernovas, for instance. By better recording objects in the solar system and the rest of the Milky Way and how they move, scientists will more clearly understand how Earth’s solar system and galaxy formed. Putting these varied puzzle pieces in place could unravel the mysteries surrounding the universe, including Earth’s cosmic origins.
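To make the étendue figure concrete, here is a minimal sketch of the multiplication it describes. The 3.5-degree field-of-view diameter is quoted later in the article; the effective collecting area of roughly 33 square meters is an assumed value (the 8.4-meter primary is partly taken up by its central hole and obscured by the camera and secondary mirror), so treat the result as an order-of-magnitude check rather than an official specification.

```python
import math

# Etendue ~= field of view (square degrees) x effective collecting area (m^2).
# The 3.5-degree field-of-view diameter comes from this article; the ~33 m^2
# effective collecting area is an assumption (the 8.4 m primary is partly
# obscured and shares its glass with the tertiary mirror).
fov_diameter_deg = 3.5
fov_area_deg2 = math.pi * (fov_diameter_deg / 2.0) ** 2      # ~9.6 square degrees

effective_area_m2 = 33.0                                     # assumed value

etendue = fov_area_deg2 * effective_area_m2
print(f"{fov_area_deg2:.1f} deg^2 x {effective_area_m2:.0f} m^2 ~= {etendue:.0f} m^2 deg^2")
# -> roughly 318 m^2 deg^2, close to the 319 quoted above
```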
LSST must move like a racecar among telescopes to accomplish such science goals—a great challenge considering that the camera by itself weighs 3,060 kilograms. The entire moveable structure of the telescope, including the camera, tips the scales at 272,000 kilograms. To achieve its speedy performance, LSST’s design needed to be as compact as possible. This contradicts the conventional wisdom that to see wider and deeper into space, a telescope and its optics must be built as big as possible. Materials and fabrication limitations alone would prohibit the type of telescope—a long tube capped by a lens at each end—that has been widely used ever since Galileo used one to first identify Jupiter’s moons. Moreover, the machinery to wield such a telescope would be colossal. Moving such a behemoth between locations and waiting for the vibrations to subside would be measured in hours instead of seconds, rendering the telescope useless for any science dependent on rapid scanning. In short, a breakthrough in telescope design was needed.
The breakthrough began in 1998, a year after the first paper positing the existence of dark energy was published, when Roger Angel of the University of Arizona proposed a three-mirror design with a primary-mirror diameter of roughly 6 meters. As the potential for additional capabilities became more apparent, this diameter was increased to 8.4 meters. Kahn notes, “A lot of refinements radically changed the early design concept, and Livermore physicist Lynn Seppala played a very important role. A lot of his ingenuity went into it.” Seppala, now retired, was tasked with evaluating Angel’s optical design to determine whether the sizes, shapes, and placements of the optical elements—lenses, mirrors, and filters—would meet the demands of LSST. He determined the design would theoretically work but would be essentially unbuildable because of its complexity, size, and lack of robustness. More iterations followed. “I wanted to make everything as foolproof as possible,” says Seppala. He felt ease of manufacture was essential to LSST’s optical design. “How are you going to test it? And how are you going to certify it? My strategy was to carry out, with each design iteration, a set of simple fabrication tests for all of the lenses, the three-mirror telescope without the camera, and the assembled three-lens camera corrector. Simplifying these tests would increase reliability both during fabrication and in assembling the camera and telescope.”
A groundbreaking approach that further simplified the optics and drastically reduced the length of the telescope came when Seppala helped optimize a design where the primary mirror (M1) and the tertiary mirror (M3) were combined into the same piece of glass, eliminating a set of support and alignment structures. The weight-savings appeal of this dual optical surface eventually inspired new designs, which have been patented by the Laboratory. Miniature versions of this technology can be found in tiny CubeSat satellites. (See S&TR, April/May 2012, Launching Traffic Cameras into Space.) Seppala also emphasized keeping the 3.4-meter-diameter secondary mirror (M2) as spherical as possible for ease of manufacture without sacrificing performance. The enormous size of this mirror can be understood by noting that the secondary mirror of any existing telescope could fit easily inside the center hole of M2. LSST’s giant camera also conveniently fits inside the center hole, greatly simplifying its mounting.
Because the camera is so sensitive, its detector is sealed inside a vacuum chamber, and three lenses correct the path of inbound light from the mirrors before striking the detector. At a diameter of 1.55 meters, the first lens through which the light passes is also the largest high-performance optical glass lens ever built. The third lens (L3) also acts as the vacuum chamber window. Between the second lens and L3 will fit one of six filters, which can be switched in and out by the armature of a fast-acting carousel, similar to the mechanism that changes records in an old-fashioned jukebox. Each filter has an individualized coating that allows through only light in a specific wavelength range. By uniformly scanning the sky with each filter, LSST will permit multicolor analyses.
Whereas the mirror coatings must reflect as much light as possible and the filter coatings must exclude all but specific bands of light, LSST’s lenses have antireflection coatings to maximize the amount of light passing through them. Fused silica, an amorphous form of quartz, is the glass used to make M2 and all the lenses and filters. A special spin-casting process was used to create a single piece of honeycomb-backed borosilicate glass, which was then polished into the shapes of M1 and M3. This approach reduced overall weight by 90 percent over conventional methods, making the telescope lighter and speedier to maneuver. This lightweight, compact design allows the mirrors and camera to be more easily and safely removed, minimizing down time for maintenance.
A telescope’s field of view determines how much of the sky it can see. The entire sky from horizon to horizon encompasses a 180-degree field of view. A full Moon is only 0.5 degrees wide, but ground- and space-based telescopes typically have a field of view that is only a fraction of this lunar width. LSST, in contrast, boasts a gigantic 3.5-degree field of view, capturing an image area equivalent to 49 Moons. Needing fewer images to capture the entire sky speeds up the survey. LSST will also collect more light in less time than other telescopes, enabling viewing of objects as faint as 24 on the astronomical magnitude scale—roughly 10 million times dimmer than what is detectable by the unaided human eye. By combining images, LSST can reach even deeper, to a magnitude of 27. By comparison, relatively bright Saturn registers 1 on this scale, the stars of the Big Dipper 2, and the moons of Jupiter 5. The limit for the Hubble Space Telescope is a magnitude of 31, partly because of its orbit in space. Although able to detect dimmer objects than LSST can, Hubble has a very narrow field of view, equivalent to only a fraction of a full Moon.
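The magnitude comparisons in this paragraph follow from the standard astronomical magnitude scale, in which a difference of 5 magnitudes corresponds to a factor of 100 in brightness. A short sketch of that relation is below; the naked-eye limit of about magnitude 6 used in the last line is a commonly assumed figure, not one taken from this article.

```python
def flux_ratio(delta_mag):
    """Brightness ratio for a magnitude difference: 5 magnitudes = factor of 100."""
    return 10 ** (0.4 * delta_mag)

# Combined-image depth (27) versus single-visit depth (24):
print(f"27 vs 24: ~{flux_ratio(27 - 24):.0f}x fainter")      # ~16x

# Hubble's limit (31) versus LSST's combined depth (27):
print(f"31 vs 27: ~{flux_ratio(31 - 27):.0f}x fainter")      # ~40x

# Single-visit depth (24) versus an assumed naked-eye limit of ~6:
print(f"24 vs 6:  ~{flux_ratio(24 - 6):.1e}x fainter")       # ~1.6e7, order 10 million
```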
Although Hubble does not have to contend with Earth’s atmosphere, LSST faces atmospheric disturbances that threaten to drastically reduce image quality, as well as changes in environmental conditions and the shifting pull of gravity as the massive mirrors change positions. One way Livermore strove to minimize such effects was by drawing on its experience in adaptive optics (AO), a technique the Laboratory helped pioneer. (See S&TR, September 2014, Giant Steps for Adaptive Optics.) Another optics innovation incorporated into LSST is an active optics system that Livermore researchers developed in collaboration with the National Optical Astronomy Observatory. In this system, the reflective surface of all three mirrors is finely tuned by networks of actuators mounted on the backs of the mirrors. By slightly changing a mirror’s shape, the actuators compensate for distortions in light caused by the small deflections of mirror surfaces that arise from changes in temperature or gravitational pull. In short, AO and active optics are key to LSST achieving its goals.
The Laboratory is lending its optics expertise in other areas, as well. Livermore engineer Scott Winters, the optics subsystems manager for the LSST camera, states, “Livermore has a history of building complex optical systems, the latest being the National Ignition Facility (NIF). From fabrication, coatings, assemblies, and precision cleaning to other aspects, we’re able to harvest this knowledge and apply it directly to the camera’s optical systems. In short, LSST is getting years and years of experience and lessons learned from the Laboratory.”
Livermore personnel led the procurement and delivery of the camera’s optical assemblies, which include the three lenses, and six filters, all in their final mechanical mount. Livermore focused on the design and then delegated fabrication to industry vendors, although the filters will be placed into the carousel interface mount at the Laboratory before being shipped off to SLAC for final integration into the camera. Partnering with industry has been the approach taken to build NIF and the Laboratory’s other large laser systems. Winters states, “LSST is all about leading-edge technology, including the world’s largest camera, so it’s an exciting project. We’re able to do amazing things by engaging various people in cutting-edge work. This is a great win–win for everyone.”
SLAC is managing subcomponent integration and final assembly of the $168 million camera, which is currently over 60 percent complete and due to be finished by 2020. Livermore engineer Vincent Riot, interim project manager of the camera, says, “Many challenges come with making the largest camera in the world.” The camera’s detector is a bee’s-eye mosaic of 189 ultrahigh-purity silicon sensors, each 100 micrometers thick. Each sensor captures an image 4,096 by 4,096 pixels in size, or 16.8 megapixels. The entire detector thus delivers a combined pixel count of 3.2 gigapixels. In each corner of the detector are a wavefront sensor and two guide sensors, which ensure image quality by monitoring surrounding conditions and feeding back data that drive corrective measures, such as with the active optics system. Riot explains, “The wavefront and guide sensors must be sensitive to a very broad range of wavelengths, from deep ultraviolet to infrared, and must have very low noise and be very flat.”
After light circuits its way through the optics, an image forms on the detector’s 63.4-centimeter-diameter focal plane. The compactness of the telescope’s optics makes the focus very unforgiving. A blurry image could result if any part of the detector surface is misaligned to the incoming light by more than 11 micrometers—approximately one-fifth the diameter of a human hair. The camera’s sensors are charge-coupled devices, which create images by converting the incoming light (photons) into electrons. For this reason, the vacuum vessel that houses the camera is cryogenically cooled to an operating temperature of –100 to –80 degrees Celsius. This cooling has twin benefits—preventing overexposure defects, which tend to occur when such sensors are operated at warmer temperatures, and reducing signal noise from various sources. This improvement allows electrons to be more accurately counted and therefore produce the best images possible.
Each 3.2-gigapixel image will be saved as an image file roughly 6 gigabytes in size. Because LSST needs only 2 seconds to read out raw data between exposures, the telescope will amass roughly 15,000 gigabytes of image data each night. Compounded over 10 years of operation, that yields a total of 60 petabytes (10^15 bytes). The images compiled in a single visit will be immediately compared, and if a difference is found, suggesting some event, an alert will be automatically issued within 60 seconds. Each night, LSST is expected to detect about 10 million such events. Single images and catalogs of images will be frequently streamed online, and the LSST computational system will do more advanced processing, such as time-lapse movies. Much of this imagery will be made available for free so that the public can learn about discoveries in near real-time and participate in “citizen science” opportunities.
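The data volumes quoted above can be cross-checked with simple arithmetic. The sketch below assumes 16-bit (2-byte) raw pixels and decimal units (1 petabyte = 10^6 gigabytes); both are reasonable readings of the figures in the article rather than published specifications.

```python
# Rough cross-check of the image and survey data volumes quoted above.
BYTES_PER_PIXEL = 2          # assumption: 16-bit raw pixels
PIXELS_PER_IMAGE = 3.2e9     # 3.2 gigapixels

image_gb = PIXELS_PER_IMAGE * BYTES_PER_PIXEL / 1e9
print(f"~{image_gb:.1f} GB per image")               # ~6.4 GB ("roughly 6 gigabytes")

nightly_gb = 15_000                                  # figure from the article
total_pb = nightly_gb * 365 * 10 / 1e6               # 10 years, 1 PB = 1e6 GB
print(f"~{total_pb:.0f} PB over 10 years")           # ~55 PB (article: 60 petabytes)
```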
Schneider explains, “The data science aspect of LSST is highly important. Rather than study one object at a time, which was the astronomy model of the past, LSST is actually statistical analysis conducted on a large, complicated dataset, and to do the really big science requires thinking about algorithms and computing in a different way than astronomers are used to doing.” LSST’s immense repository of data will not only drive research in new ways but also necessitate all-new studies on how to better analyze tremendous amounts of data. Livermore’s wide-ranging contributions have elevated LSST to the forefront of science and technology. Soon LSST will deliver the promises of world-changing astronomy and a clearer understanding of how we fit into this wide, deep, fast universe of ours. Who knows where this journey will take us?
Key Words: active optics, adaptive optics (AO), dark energy, dark matter, data science, dual optical surface, étendue, field of view, gigapixel camera, Large Synoptic Survey Telescope (LSST), National Ignition Facility (NIF), near-Earth object (NEO), optical design, spin-cast mirror, telescope, ultrahigh-purity silicon sensor.
For further information contact Michael Schneider (925) 422-4287 ([email protected]). |
INTRODUCTION TO BLOCKS - CREATING AND INSERTING
In this lesson you will be introduced to blocks. By definition, a block is a collection of objects (lines, arcs, circles, text, etc.) that form a more complex entity that normally represents an object in the real world, e.g. a door, a chair, a window, a computer.
Blocks are a single entity. This means that you can modify (move, copy, rotate) a block by selecting only one object in it.
You can build up a library of blocks consisting of the parts that you require many times in your workday. These blocks can be stored in a separate folder and even on a network so that all drafters have access to them. Think of plumbing parts, valves, elbows, etc as well as electrical symbols or furniture.
Using blocks can help keep your file size down. AutoCAD stores block definitions in its database. When you insert a block, AutoCAD only stores the name of the block, its location (insertion point), scale and rotation. This can be very noticeable in large drawings.
If you need to change something, you can redefine a block. For example, you draw a chair and turn it into a block. Later, you're told that the size of the chair has changed. Since you used a block you can redefine the block and all of your chairs are updated automatically. If you had drawn (or copied) 100 chairs in your drawing, you would have to manually change each one.
Blocks can also contain non-graphical information. These are text objects called attributes. For example, you have made blocks of different chairs. You can add information to the block such as manufacturer, cost, weight, etc. This information stays with the block, but can also be extracted to a database or spreadsheet. This would be useful for things such as a bill of materials. Attributes can also be visible or invisible in your drawing. Another good use of attributes could be a title block.
You can even easily add internet hyperlinks to blocks so you can connect a block to a page on a supplier's online catalog.
There are two types of blocks you can create: blocks that are internal to your current drawing, and those that are external, or saved as a separate file. To create the different types, different commands are used. Many companies use a template that will include a number of blocks for use in the project.
Here are the commands that you will need for using blocks in this lesson:
Command / Alias | Ribbon | Result
Bmake / B | Home > Block > Create | Creates a block from separate entities (internal to the current drawing)
Wblock / W | | Creates a block and writes it to a file (external)
Insert / I | Home > Block > Insert | Inserts a block (internal or external)
Explode / X | Home > Modify > Explode | Explodes a block or other compound object into its component parts
For this assignment, you will be using any one of the floor plans you drew earlier in Lesson 2-1.
Open the drawing.
Zoom in to one section of the room close to a desk (draw a rectangle to represent a desk if you don't have any in your drawing).
Create a new layer called COMPUTERS and make the color #73 (remember LA invokes the Layer Properties Manager).
Make the Zero Layer current.
ZERO LAYER:
Zero Layer has special properties. When creating blocks, if the objects in the block are drawn on Zero layer, they will assume the properties of the current layer when they are inserted. Example: If you draw the computer below on Zero layer, and insert it on the 'COMPUTERS' layer, it will assume the color, linetype and lineweight of the Computers layer. If you drew and created it on the 'DOOR' layer, and inserted it in the Computers layer, it would retain the properties of the Door layer. For this reason, blocks are drawn on the Zero layer - you need them to assume the layer's properties, whether it is in your template, or a client's.
Draw the following objects to create what you need for the computer block (top view of keyboard and monitor). You do not need to dimension the computer.
Start the BLOCK command by either typing B or using the pull-down menu or the icon. You will see the Block Definition dialog box; it may look slightly different depending on your AutoCAD version, but the information is entered the same way. Remember to approach all new dialog boxes from the top and work your way down.
1 : The first thing that you want to do is give your block a name. Type COMPUTER in the edit box beside Name. Some names may need to be more descriptive, such as a part number or size.
2 : Now you need to select an insertion/base point. Pick the Pick Point button and then pick the midpoint of the bottom line of the keyboard. Make sure that the Retain button is selected; this will keep your objects on the screen as individual objects. (You will see in a moment that selecting the Pick Point for blocks is very important when you later insert them into the drawing - always pick a point that will allow you to place the block easily.) If you don't select a base point, your block will default to 0,0,0 and you will insert all your blocks at the same location - the origin. (This is where many students find their blocks when they tell me "It didn't work!")
3 : Next you want to select the objects for your block. Pick the Select Objects button and then select all the parts of your computer and press <ENTER>. Be careful not to select any other objects, or you'll just get to do it over again.
4 : Select the drawing units you used to create the original objects.
5 : This is optional, but you can add a description here. This is good if you are creating specific parts, like maybe a motor, and want to add some quick specifications. It's also great so that co-workers know what the block is used for (more information = better).
6: It's usually a good idea to give a block a short description and it's a great idea if you think other people will be using this block. Remember - more information is always better. Your job as a CAD drafter is to convey information to other people.
7: Uncheck the "Open in Block Editor" checkbox if it is currently checked.
8: Pick the OK Button and the dialog box closes. It will look like nothing happened, but the drawing file now has a "Block Definition" for a Computer in it. Congratulations, you have created your first block.
Now select any object in the new block and you will see that all of the objects are selected and the base point you picked is highlighted as its Insertion Point osnap.
Now that you have created a block, it's time to add another one to your drawing by inserting it.
Change to the COMPUTERS layer. Start the Insert command by typing I <ENTER>. You will see the Insert dialog box on the screen:
By default, all the options you need are pre-selected. Since you only have one block in your drawing, its name is displayed (in some drawings, you might have a long list of block names to choose from).
Make sure that the Insertion Point - Specify On-screen box is checked, and the Explode button is not checked. The Scale - Specify On-screen should not be checked. Then press the OK button.
You now have the block attached to your cursor and it is ready to be placed in the drawing. Click on one of the desks in your drawing. Notice how the block that you drew on the Zero Layer has taken on the color of the COMPUTERS layer.
Now insert a computer on every desk in your drawing. You can also copy the block instead of re-inserting it each time, but make sure you know how to insert.
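If you have many identical insertions to make, block placement can also be scripted instead of repeating the INSERT command by hand. The sketch below is one possible way to do it under some assumptions: it uses the third-party pyautocad package, which drives a running AutoCAD session on Windows through its COM automation interface, and the desk coordinates are made up for illustration. It is an optional shortcut, not part of this lesson's workflow.

```python
# A minimal sketch of scripted block insertion (assumes the third-party
# pyautocad package and a running AutoCAD session on Windows via COM).
# The block name matches the COMPUTER block created in this lesson; the
# desk coordinates below are hypothetical.
from pyautocad import Autocad, APoint

acad = Autocad()                                    # attach to the open drawing

desk_points = [APoint(24, 36), APoint(96, 36), APoint(168, 36)]

for pt in desk_points:
    # InsertBlock(insertion point, block name, X/Y/Z scale, rotation in radians)
    acad.model.InsertBlock(pt, "COMPUTER", 1, 1, 1, 0)
```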
Extra Practice: Find something in the room where you are and measure it (approximate is fine), then make a block out of it.
BREAK TIME: Take a moment to think about what you just did. You drew a bunch of lines and turned them into a digital computer. It's an easy process, but the power of blocks is immense. In any line of CAD work, you'll need to be fluent with blocks.
In AutoCAD 2006, Dynamic Blocks were introduced. These are parametric blocks which can be easily modified by the user. In this exercise, you will create and insert a simple Dynamic Block. The goal is to create a Chair block that can be easily rotated to orientate it to a desk.
Draw the Chair below on the Zero layer.
Start the same Block command you used last time, only this time make sure that the "Open in Block Editor" checkbox is checked. This time when you select OK, you will be magically taken to the Block Editor (see below). If you didn't select the box, you can just select the block, then right-click and choose "Block Editor".
Now that you are in the block editor, you can edit the objects in your block or, in this exercise, add dynamic interaction to your block. As mentioned earlier, our goal is to create a block that we can easily rotate. First you will need to add a parameter to the block, followed by an action.
Click on the "Rotation Parameter" icon in the palette. Then check the command line for prompts (as usual). Use the entries below:
Command: _BParameter Rotation
Specify base point or [Name/Label/Chain/Description/Palette/Value set]:<select midpoint of the seat>
Specify radius of parameter: 9
Specify default rotation angle or [Base angle] <0>:<ENTER>
What you have done is select the location of the parameter, then set the radius of the parameter (in this case, where you will have a 'grip' to rotate it), and finally the default setting for the parameter - 0° means the block will look like you drew it when inserted.
After defining the parameter, you then need to apply an action to it. Select the tab on the palette that says "Actions". You will need to select the "Rotate Action" (makes sense). You will be asked to select the parameter that you want to apply the action to (select the parameter you just drew). Finally, select the location of the action (once again, the center of the seat), and then the objects that you want the action to act on (in this case, all of them). Your block should now look like this:
Now that the Block is complete, you can select the "Close Block Editor" at the top of the drawing screen. You will be returned to the regular drawing screen, and your block will be created. Click on it, and you should see all the objects highlight, and the grip for the block's Pick Point. You will also see a grip for the block's dynamic rotation parameter.
Now that your Dynamic Block is in your drawing, pick the rotation parameter and move it around. You'll see that you can rotate the chair without using the rotate command. Features like this can be a real time saver. Think of other blocks that could benefit from parametric abilities: windows, doors, ceiling lights - the list is endless. Also, when you work on a new drawing from someone else, be aware that Dynamic Blocks could be present.
This exercise has shown the basic steps that are used in creating a Dynamic Block. There are many other parameters and uses, and they can make your CAD life much easier. Check out the next video to see some more options in how these are created.
This time you will be creating an external block using the WBLOCK command. The difference here is that the block will become a separate, external drawing file for use in other CAD drawings.
In the Write Block dialog box, you will see that you have almost all the same options. Instead of giving the block a name like you did before, you give it a filename in a specific folder.
After creating the other blocks above, you should be able to work your way through this dialog box. Make sure you put the block in a logical path and give it a good, descriptive name.
When you insert an external block, use the same INSERT command that you did above and use the Browse button to navigate to the folder where you stored your block. Insert it like you did before. Put some chairs in front of the desks in your drawings, and rotate them if needed.
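External blocks can be placed by automation as well, since AutoCAD's InsertBlock call accepts a full .dwg path in addition to an internal block name, mirroring what the Browse button does in the Insert dialog. This is again a hedged sketch that assumes the third-party pyautocad package and a running AutoCAD session; the file path is hypothetical, so point it at wherever you actually saved your chair block.

```python
# Inserting an external (WBLOCKed) block by file path - same assumptions as
# the earlier sketch (pyautocad + a running AutoCAD session on Windows).
# The path below is hypothetical; use the folder where you saved CHAIR.dwg.
from pyautocad import Autocad, APoint

acad = Autocad()
acad.model.InsertBlock(APoint(30.0, 12.0), r"C:\CAD\Blocks\CHAIR.dwg", 1, 1, 1, 0)
```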
NOTE: For some reason, Autodesk won't allow us to use the write block command to create Dynamic Blocks. Perhaps this will be added in a later version. You can always create a drawing with just the block you want separate and then save the file.
Now you have created three blocks. The process is the same for any other block that you need to create from drawing objects.
If you want more practice, draw more objects and create blocks from their geometry.
Now that you know how useful blocks are, you should know that there are times when you need to explode a block. The EXPLODE command works on blocks, rectangles and other objects that are not the basic lines, arcs, circles, etc. If you have a block on your screen, type X <ENTER> to start the EXPLODE command. Select the block that you want to explode and press <ENTER>. Now you have all of the components that made up the block as individual objects.
If you want to see how to update a block, explode and modify the computer block by stretching the monitor out by one inch on each side. Create a new block using the same COMPUTER name. After re-defining the block, you will see this box pop up to warn that you are about to update the block definition:
Select Yes, and all your 'computer' blocks in the drawing will update to the new definition.
This is a good example of how blocks save you time. In a typical work situation, the original specifications for the computer could have changed from large CRT monitors to flat screens and you would need to change them all. Using blocks is much easier than changing each individual object one at a time.
Conclusion: Remember that blocks are powerful tools for the reasons listed above. In any discipline of CAD drafting, you will use them a lot. Usually, you will insert the block first and then copy it to other locations. They are powerful, yet easy to work with. Remember: create the block, then insert it - it's that simple. (In most cases, you will keep your common blocks in a template to save time.) When creating a block, your choice is whether you want an external, internal or dynamic block. That choice depends entirely on how that block will be used.
There are times when you might name a block and decide later that you want to change it. Maybe you designed a bracket and now that the design is finalized, it is given a part number (or maybe you noticed a spelling mistake). Now you want to change the name of the bracket from "BRACKET-01" to "BRACKET-BR734". The easiest way to do this is to use the RENAME command. This will open a dialog box that allows you to rename a number of AutoCAD objects.
This command is simple to use. Select the type of object first in the left column, and it will show you all the different items in the right column. Make sure that the 'Old Name' field is filled, and then type the new name in the box below it. Check your edits and press OK.
This command can be used for changing layer names, linetypes and most other AutoCAD objects.
Here you have learned about one of AutoCAD's most powerful and commonly used tools. Do you realize how much time and effort you can save by using blocks? I've given a few examples in the tutorial - but try to think of more.
If you are not fully comfortable with how blocks work after going through this tutorial, I ask you to work through it again. Working with blocks should be as easy as drawing a line.
Having information can make choices easier. It can also mean being an equal partner in making decisions. This in turn can help to reduce fear and uncertainty, and may help a person regain a sense of control.
Different people need different information, sometimes given in different ways at different times.
This may depend on:
- Who they are
- Their outlook on life and on serious illness
- Their culture or upbringing.
It is important to have an opportunity to find out any information that is required. Patients and family carers can ask health professionals for information.
For example, when someone is ill, their need for information may differ slightly from that of their family carer. Other important people in their life may also need information. In some cultures the family of the person who is ill expects to receive information about them.
Surfactant is a complex substance containing phospholipids and a number of apoproteins. This essential fluid is produced by the Type II alveolar cells, and lines the alveoli and smallest bronchioles. Surfactant reduces surface tension throughout the lung, thereby contributing to its general compliance. It is also important because it stabilizes the alveoli. Laplace’s Law tells us that the pressure within a spherical structure with surface tension, such as the alveolus, is inversely proportional to the radius of the sphere (P=4T/r for a sphere with two liquid-gas interfaces, like a soap bubble, and P=2T/r for a sphere with one liquid-gas interface, like an alveolus: P=pressure, T=surface tension, and r=radius). That is, at a constant surface tension, small alveoli will generate bigger pressures within them than will large alveoli. Smaller alveoli would therefore be expected to empty into larger alveoli as lung volume decreases. This does not occur, however, because surfactant differentially reduces surface tension, more at lower volumes and less at higher volumes, leading to alveolar stability and reducing the likelihood of alveolar collapse.
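To make Laplace’s Law concrete, here is a small, purely illustrative calculation (a sketch in Python; the radii and surface-tension values are assumed round numbers, not physiological measurements):

```python
# Laplace's law for a sphere with one liquid-gas interface (an alveolus):
#   P = 2T / r
# Illustrative numbers only: T in N/m, r in m, P in Pa.

def alveolar_pressure(surface_tension, radius):
    """Collapse pressure (Pa) for a given surface tension (N/m) and radius (m)."""
    return 2.0 * surface_tension / radius

T = 0.025          # N/m, held constant for the comparison
r_small = 50e-6    # m, a smaller alveolus
r_large = 100e-6   # m, a larger alveolus

print(alveolar_pressure(T, r_small))   # 1000.0 Pa
print(alveolar_pressure(T, r_large))   # 500.0 Pa -> the smaller alveolus would empty into the larger

# If surfactant lowers T more in the smaller (lower-volume) alveolus, the pressures even out:
print(alveolar_pressure(0.0125, r_small))   # 500.0 Pa
```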
Surfactant is formed relatively late in fetal life; thus premature infants born without adequate amounts experience respiratory distress and may die. |
Tension, Compression and Bending
To investigate the concept of Tension, Compression and Bending.
Definition and Theory
Applied forces are forces, such as a push or a pull, that act on the outside of an object. The ability of an object to resist these externally applied forces and remain static (not moving or breaking) is the result of the object's internal structure resisting these forces and, in some cases, external anchoring to a larger body. Three basic types of internal forces, or stresses, that keep a structure static are compression, tension and bending. Large structures such as towers, cranes and bridges are composed of many small internal structural members. These small internal structural members are primarily designed to translate loads into compression and tension and to avoid bending. This is because bending generally uses more material (costing more) than tension or compression for a similar load and length.
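As a rough illustration of that last point, the sketch below compares the peak stress produced by the same load applied in pure tension and in three-point bending on the balsa stick used in this activity; the 10 N load is an assumed value chosen only for the comparison, and balsa's actual strength is not modeled.

```python
# Compare peak stress for the same load carried in tension vs. in bending.
# Cross-section and span are taken from the activity (1/8" x 1/8", 11.5" span);
# the 10 N load is an assumed illustrative value.

F = 10.0                    # N, applied load (assumed)
b = h = 0.125 * 0.0254      # m, cross-section width and height (1/8 inch)
L = 11.5 * 0.0254           # m, span between supports (11.5 inches)

# Tension: stress is just force over area.
A = b * h
sigma_tension = F / A

# Bending: simply supported beam, load at mid-span.
I = b * h**3 / 12.0         # second moment of area of the rectangle
M = F * L / 4.0             # maximum bending moment at mid-span
sigma_bending = M * (h / 2.0) / I   # stress at the outer fiber

print(f"tension:  {sigma_tension / 1e6:.1f} MPa")   # ~1 MPa
print(f"bending: {sigma_bending / 1e6:.1f} MPa")    # ~140 MPa -> far more demanding
```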
- Using a 1/8" x 1/8" x 12" piece of balsa wood, conceptually test tension, compression and bending as shown in the diagrams above.
- Tension: Place both hands on each end of the piece and pull. DO NOT BREAK THE PIECE.
- Compression: Place both hands on each end of the piece and push until the wood bows. DO NOT BREAK THE PIECE.
- Bending: Place the piece over an 11.5" span and push down in the middle until the piece bends. DO NOT BREAK THE PIECE.
- Record your observations.
- Look up the formal definitions of tension, compression and bending in an encyclopedia or an engineering mechanics book.
- Sketch several common objects in your classroom or school. Label the kinds of stresses that may occur in each part when loads are applied for proper use.
- Answer the questions in the lab. |
1 Athens, then a walled city, was temporarily abandoned by her people before the battle of Salamis, and destroyed by the troops of Xerxes. After the Persian Wars, she became the head of the Confederacy of Delos. See Isoc. 6.42 ff., and Isoc. 4.71-72.
2 At the end of the Peloponnesian War, Athens was at the mercy of Sparta and the Spartan allies. The latter proposed that Athens be utterly destroyed and her citizens sold into slavery, but the Spartans refused to allow the city “which had done a great service to Hellas” to be reduced to slavery. Xen. Hell. 2.2.19-20. Cf. Isoc. 8.78, 105; Isoc. 14.32; Isoc. 15.319. |
Bullying is different from conflict.
Bullying is done with a goal to hurt, harm, or humiliate. With bullying, there is often a power imbalance between the students involved, with power defined as elevated social status, being physically larger, or as part of a group against an individual. Students who bully perceive their target as vulnerable in some way and often find satisfaction in harming them.
In normal conflict, students self-monitor their behavior. They read cues to know if lines are crossed, and then modify their behavior in response. Those guided by empathy usually realize they have hurt someone and will want to stop their negative behavior. On the other hand, students intending to cause harm and whose behavior goes beyond normal conflict will continue their behavior even when they know it’s hurting someone.
Sometimes people think that bullying and conflict are the same thing, but they aren’t. In one way or another, conflict is a part of everyday experience, in which we navigate the complexities of how we interact. Typically minor conflicts don’t make someone feel unsafe or threatened. Bullying, on the other hand, is a behavior with intention to hurt, harm or humiliate and the person targeted is not able to make it stop.
Teen author from the “Ask Jamie” column shares her thoughts on how conflict is different from bullying.
In one way or another, conflict is a part of everyday experience. Even if it is something small, which it typically is, there is the constant navigation of the complexities of human relationships. This is normal, and minor conflicts typically don’t make someone feel unsafe or threatened.
The questions to ask yourself when you are unsure about the tone of a certain conversation or encounter to determine if it is bullying include:
Sometimes, it can be easy to minimize a bullying situation because you don’t really want to deal with the realities of what is happening to you. It is easy to get into a pattern of qualifying bullying as conflict in order to avoid facing the actual problem, when really it is something that you don’t deserve and something that requires outside intervention. It can be helpful to ask these questions to yourself, as it can help you sort out the reality of your particular situation.
Teens answer a few of the most frequently asked questions including “What’s the difference between bullying and conflict?” and “How does peer pressure impact bullying behavior?”
A question from Zoey, an elementary school student, who wants to know more about how to be kind. |
Historical Geology/Geological column
In this article, I shall explain what the geological column is, how it is constructed, and what relationship it bears to the geological record. I shall also provide a rough description of the geological column summarizing some of the major trends observed in it.
Construction of the geological column
By using the principles explained in the previous articles (superposition, faunal succession, the use of index fossils) it is possible to produce an account of the order of deposition of the organisms found in the fossil record, noting that one was deposited before the other, that the deposition of such-and-such a group starts after the deposition of some other group ceases, and so forth.
Prolog to a sketch of the geological column
Below, I sketch out the major geological systems from the Vendian onwards. Note that it is written from the bottom upwards, so that the earliest-deposited fossils are at the bottom; the reader may therefore find it tells a more coherent story if read from the bottom upwards.
It is no more than a sketch: it records the appearance and disappearance of major groups, rather than individual species; and it has been divided into the large stratigraphic units known as systems, which geologists would divide into series, which they would further divide into stages, which they would then further subdivide into zones. I am, then, only giving the broadest outline of the geological column; those who require the finer details must look elsewhere.
I have not attached any dates to the geological systems discussed here, because as we have not yet reached our discussion of absolute dating, it would be premature to do so. All that our study of fossils and their faunal succession tells us is the order of deposition. (It is for this reason that I have used the increasingly obsolete term "geological column" rather than "geological timeline"; it is not a timeline until we get round to attaching dates to it.)
I have also avoided using terms such as "evolution" and "extinction". From a biological standpoint, it is obvious that these are the underlying cause of the patterns in the fossil record; but as with the evolutionary explanation of the principle of faunal succession this biological explanation is irrelevant to the practice of geology. For the purposes of doing stratigraphy it doesn't really matter if dinosaurs appear in the geological column because they evolved from more basal archosaurs or because they parachuted out of the sky from an alien spaceship, and it doesn't matter if they disappear from the geological column because they went extinct or because they all went to live in cities on the Moon; what matters is that we can find out where their fossils come in the sequence of deposition.
A sketch of the geological column
The Quaternary is marked by the existence and spread of modern humans and the decline and disappearance of many groups of large fauna extant in the Neogene.
The Neogene contains recognizable horses, canids, beaver, deer, and other modern mammal groups. It also contains many large mammalian fauna no longer extant: glyptodonts, ground sloths, saber-toothed tigers, chalicotheres, etc. The first hominids are found in Africa.
The Paleogene is marked by the diversification of mammals and birds. Among the mammals we see the first that can be easily identified with modern mammalian orders: primates, bats, whales, et cetera. Similarly, representatives of many modern bird types are identifiable in the Paleogene, including pigeons, hawks, owls, ducks, etc. Now-extinct groups of birds found in the Paleogene include the giant carnivorous birds known colloquially as "terror birds".
In the Cretaceous we see the diversification of angiosperms (flowering plants) from beginnings around the Jurassic-Cretaceous boundary; representatives of modern groups of trees such as plane trees, fig trees, and magnolias can be identified in the Cretaceous. Here also we see the first bees, ants, termites, grasshoppers, and lepidopterans. Dinosaurs reach their maximum diversity; some of the best known dinosaurs such as Triceratops and Tyrannosaurus are found in the Cretaceous. Mosasaurs appear near the end of the Cretaceous, only to disappear at the Cretaceous-Paleogene boundary, which also sees the last of the dinosaurs (excluding birds, which biologists classify as dinosaurs) and the last pterosaurs, plesiosaurs, ichthyosaurs, ammonites, rudists, and a host of other groups.
The Jurassic system is notable for the diversification of dinosaurs. It has the first short-necked plesiosaurs (pliosaurs); first birds; first rudists and belemnites. Mammals are certainly present, but tend to be small and insignificant by comparison with reptile groups. The first placental mammals are known from the Upper Jurassic.
The Triassic contains the first crocodiles, pterosaurs, dinosaurs, lizards, frogs, snakes, plesiosaurs, ichthyosaurs, and primitive turtles. Whether or not there were mammals in the Upper Triassic depends on what exactly one classifies as a mammal. The Triassic-Jurassic boundary sees the loss of many groups, including the last of the conodonts, most of the large amphibians, and all the marine reptiles except plesiosaurs and ichthyosaurs.
The Permian system is noted for the diversification of reptiles: the first therapsids (mammal-like reptiles) and the first archosaurs (the group including crocodiles and dinosaurs). It also has the first metamorphic insects, including the first beetles. It has the first trees identifiable with modern groups: conifers, ginkgos and cycads. Many species and larger groups come to an end at or shortly before the Permian-Triassic boundary, including blastoids, trilobites, eurypterids, hederellids, and acanthodian fish.
The Carboniferous system contains the first winged insects. Amphibious vertebrates diversify and specialize. The Carboniferous has the first reptiles, including, in the Upper Carboniferous, the first sauropsid, diapsid, and synapsid reptiles. Foraminifera become common. All modern classes of fungi are present by the Upper Carboniferous.
The Devonian has the first (wingless) insects; the first ammonites; the first ray-finned and lobe-finned fish; the first amphibious vertebrates; the first forests. Terrestrial fungi become common. The first seed-bearing plants appear in the Upper Devonian. The last placoderms are found at the Devonian-Carboniferous boundary. Almost all groups of trilobite have disappeared by the Devonian-Carboniferous boundary, but one group (Proetida) survives until the Permian-Triassic boundary.
In the Silurian, coral reefs are widespread; fish with jaws are common; it has the first freshwater fish; first placoderms (armour-plated fish); the first hederellids; the first known leeches. Diversification of land plants is seen.
In the Ordovician system we see the first primitive vascular plants on land; jawless fishes; some fragmentary evidence of early jawed fishes. Graptolites are common, and the first planktonic graptolites appear. Bivalves become common. The first corals appear. Nautiloids diversify and become the top marine predators. Trilobites diversify in form and habitat. The first eurypterids ("sea scorpions") appear in the Upper Ordovician. Trilobite forms such as Trinucleoidea and Agnostoidea disappear at the Ordovician-Silurian boundary, as do many groups of graptolites.
The Cambrian system sees the first animals with hard parts (shells, armor, teeth, etc). Trace fossils reveal the origin of the first burrowing animals. Trilobites are common; chordates exist but are primitive. Archaeocyathids are common reef-forming organisms in the Lower Cambrian and then almost completely vanish by the Middle Cambrian. Conodonts are first found in the Upper Cambrian. Many groups of nautiloids and trilobites disappear at the top of the Cambrian, but some groups survive to diversify again in the Ordovician.
The Vendian system contains the first complex life, including sponges, cnidarians, and bilaterians.
The geological column and the geological record
We should distinguish between the geological record and the geological column. The geological record is a thing: it is the actual rocks. The geological column is not a thing, it is a table of the sort given above. To ask questions such as "where can I go to see the geological column?" or "how thick is it?" is therefore a category error along the lines of asking how many people can be seated around the Periodic Table.
The relationship between the geological column and the geological record is this: when we look at a series of strata in the geological record and use the principle of superposition and way-up structures to discover the order deposition of the fossils in it, then if we find that A < B in the strata, this will correspond to B being shown above A in the geological column. The geological column is therefore a particularly simple and neat way of recording what relationships we do and don't find in the geological record.
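As a toy illustration of that bookkeeping, the sketch below merges a handful of pairwise "deposited before" observations into a single ordering using a topological sort; the fossil names and the observations themselves are invented placeholders, not real data.

```python
# Merge local "X was deposited before Y" observations into one column ordering.
# The graph maps each fossil group to the groups observed below it somewhere
# in the record (placeholder data for illustration only).
from graphlib import TopologicalSorter   # Python 3.9+

deposited_below = {
    "trilobites": set(),
    "ammonites": {"trilobites"},
    "dinosaurs": {"ammonites"},
    "horses": {"dinosaurs", "ammonites"},
}

column = list(TopologicalSorter(deposited_below).static_order())
print(column)   # oldest (lowest in the column) first:
                # ['trilobites', 'ammonites', 'dinosaurs', 'horses']
```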
However, the geological column is not a picture of what we find in the geological record. There are three reasons for this.
First, as we know, the geological record is folded and faulted in some places. Recall that when we write A < B we are talking about the original order of deposition of fossils, as reconstructed by using the principle of superposition and way-up structures: it does not necessarily mean that A is actually below B; whereas the geological column is always depicted as a vertical column with A below B when A < B.
Second, by using index fossils geologists produce a single time-line for the entire planet; but clearly any particular location will only have local fossils: the column, if written out in full, would show exclusively South American Cretaceous dinosaurs above exclusively North American Jurassic dinosaurs, but these will not in fact be found in the same assemblage of strata.
The third reason is that deposition will typically not happen continuously in one place: sediment is deposited in low-lying areas; it would not be deposited on top of a mountain. What's more, an elevated area will typically undergo erosion: not only will fresh sediment not be deposited, but existing sedimentary rocks and their fossils will be destroyed. Also, marine sediment will be destroyed by subduction, so the sediment of the oceanic crust will be no older than the ocean that it's in, and even then only at the edges — it will be considerably younger near the mid-ocean rifts.
Consequently, the meaning of the geological column is not that any location in the geological record will look like the geological column: the column is merely an elegant way of representing the facts about faunal succession.
The geological column: how do we know?
As was explained at the start of this article, the geological column is constructed using ideas introduced in previous articles: the principle of superposition, the principle of faunal succession, and the use of index fossils.
Note that the geological column does not except in the weakest sense constitute a scientific theory. It does resemble one, because there is a sense in which it suggests what we are likely to observe, which is the role of a theory; but it is essentially descriptive in nature. That is, it does not really predict the sequence of fossils that we will find, it is determined by, and summarizes, the sequences of fossils that we have found. Since it is likely that what we will find tomorrow will be similar to what we have been finding for the past couple of hundred years, the column is in that sense predictive, but its predictive power goes no further than that.
So if tomorrow we found that some trilobites were deposited above the Permian system, we should simply amend the geological column to reflect this, and it would be surprising not because it contradicted the geological column as such, but because in centuries of paleontology no-one has yet made such a discovery.
Compare this with how we would feel if we consistently found violations of the principle of faunal succession. This would present a difficulty in theory, and would require us to give up on the principle of faunal succession (and to give up on using it to construct a geological column, something that would then become impossible). But finding something that contradicts the geological column as it stands is merely unlikely in practice, not in theory, and would only require us to revise the geological column in one particular detail (i.e. to take the new discovery into account) without requiring us to rethink any fundamental ideas.
So the geological column is trustworthy simply because it is no less, but no more, than an up-to-date summary of our knowledge, and so it can be taken as such. To which we might add that after all these years of looking at the fossil record it is extremely unlikely that we'll find anything so unusual as to require any major revision of the column. |
Learning the ukulele fretboard unlocks a whole new way of experiencing the excitement and joy of making music on the ukulele.
When you understand the relationship of notes across the fretboard, you can do things like:
- Figure out where to fingerpick the melody of a song on the fretboard
- Discover how to build chords across the fretboard
- Riff and improvise solos across the fretboard (like in the blues or jazz)
But to do these things, it all starts with learning the C major scale on ukulele.
Today, you learn the first position of the C major scale (there are a total of five C major scale positions).
Watch the video to learn the C major scale on ukulele and keep reading to learn the music theory behind the scale.
Why Start With Learning the C Major Scale on Ukulele
Why start with the C major scale?
Why not the blues scale? Why not the pentatonic scale?
What you might not know about the C major scale is that it contains all natural notes.
Natural notes are the white keys on the piano or the first seven letters of the alphabet:
A, B, C, D, E, F, G
Since the C major scale contains all natural notes, this means if you learn C major scale positions across the fretboard, you automatically learn where all natural notes are across the fretboard!
If you know where the natural notes are, it’s easy to modify these “home base” positions later on to create scales in other keys by adding in sharps or flats.
How to Play a C Major Scale in Position #1 on Ukulele
Let’s build a C major scale now.
In a major scale, there are just seven notes (eight if you include the octave note).
It’s easy to make a major scale in any key with a simple formula based on whole step and half step intervals.
The major scale formula is:
whole, whole, half, whole, whole, whole, half
A whole step interval is a note two frets away, like from the 2nd fret of the C-string to the 4th fret of the C-string.
A half step interval is a note one fret away, like from the 4th fret of the C-string to the 5th fret of the C-string.
See an example of this in action by making a C major scale starting on the open C-string, which is a C note, known as the root note of the scale.
C Major Scale played on the open C-string of the ukulele
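If it helps to see the formula work mechanically, here is a small sketch that builds a major scale from the whole/half-step pattern; fret numbers are counted on a single string whose open note is the root, as in the open C-string example above.

```python
# Build a major scale from the whole/half-step formula.  Fret numbers are
# counted on one string whose open note is the root, so they double as
# semitone offsets from the root.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_FORMULA = [2, 2, 1, 2, 2, 2, 1]   # whole, whole, half, whole, whole, whole, half

def major_scale(root="C"):
    """Return (note name, fret) pairs for the major scale starting on root."""
    start = NOTE_NAMES.index(root)
    fret = 0
    scale = [(root, fret)]
    for step in MAJOR_FORMULA:
        fret += step
        scale.append((NOTE_NAMES[(start + fret) % 12], fret))
    return scale

print(major_scale("C"))
# [('C', 0), ('D', 2), ('E', 4), ('F', 5), ('G', 7), ('A', 9), ('B', 11), ('C', 12)]
```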
Now that you know the formula and how to make a major scale, it's more efficient to play the C major scale on the bottom three strings of the ukulele.
C Major Scale played on the bottom three strings in Position #1 of the ukulele
Pay specific attention to where the notes are written on the music staff and where those notes are positioned on the fretboard.
From your fretting hand, assign the index finger to fret notes that fall on the first fret. Assign the middle finger to fret notes that fall on the second fret. Assign the ring finger to fret notes that fall on the third fret.
See this position indicated in the following fretboard diagram.
C Major Scale Position #1 ukulele fretboard diagram
Memorize this position and take it to heart. As you pluck each note of the position, say the note name out loud to commit it to memory. By learning the C major scale, you’ll learn the natural notes across the fretboard.
If you know the natural notes, it’s easy to figure out where the sharps and flats are (more on that later).
A Note About the Top String
You might be wondering:
“Why isn’t the top g-string of the ukulele in this scale position?”
The main reason is that in standard tuning the top g-string is tuned higher than the middle two strings. This means you are unable to play the scale position linearly, from the lowest to the highest note.
For this reason, we learn the scale positions on the bottom three strings and then incorporate the top g-string very easily later on.
You’re On Your Way to Mastering the Ukulele Fretboard
Memorizing and learning scales isn’t easy.
It takes energy to commit a scale to memory. Take your time and enjoy the process of learning.
By learning the C major scale, you learn where all the natural notes are on the fretboard, which provides a solid foundation for you to make various scales like the blues scale, pentatonic scale, bebop scale, and more in various keys across the fretboard.
Watch out because next week I will post the next C major scale position. |
Nuclear Magnetic Resonance (NMR) is a nucleus-specific (Nuclear) spectroscopy that has far-reaching applications throughout the physical sciences and industry. NMR uses a large magnet (Magnetic) to probe the intrinsic spin properties of atomic nuclei. Like all spectroscopies, NMR uses a component of electromagnetic radiation (radio frequency waves) to promote transitions between nuclear energy levels (Resonance). Most chemists use NMR for structure determination of small molecules.
In 1946, NMR was co-discovered by Purcell, Pound and Torrey of Harvard University and Bloch, Hansen and Packard of Stanford University. The discovery first came about when it was noticed that magnetic nuclei, such as 1H and 31P (read: proton and phosphorus-31), were able to absorb radio frequency energy when placed in a magnetic field of a strength that was specific to the nucleus. Upon absorption, the nuclei begin to resonate, and different atoms within a molecule resonate at different frequencies. This observation allowed a detailed analysis of the structure of a molecule. Since then, NMR has been applied to solids, liquids and gases, and to kinetic and structural studies, resulting in six Nobel Prizes being awarded in the field of NMR. More information about the history of NMR can be found in the NMR History page. Here, the fundamental concepts of NMR are presented.
Spin and Magnetic Properties
The nucleus consists of elementary particles called neutrons and protons, which carry an intrinsic property called spin. Like electrons, the spin of a nucleus can be described using the quantum numbers I for the spin and m for the spin in a magnetic field. Atomic nuclei with even numbers of both protons and neutrons have zero spin, while nuclei with an odd number of protons and/or neutrons have a non-zero spin. Furthermore, all nuclei with a non-zero spin have a magnetic moment, \(\mu\), given by

\[\mu=\gamma \hbar I\]
where \(\gamma\) is the gyromagnetic ratio, a proportionality constant between the magnetic dipole moment and the angular momentum, specific to each nucleus (Table 1).
|Nuclei|Spin|Gyromagnetic Ratio (MHz/T)|Natural Abundance (%)|
|---|---|---|---|
|1H|1/2|42.58|99.99|
|13C|1/2|10.71|1.1|
|31P|1/2|17.24|100|
The magnetic moment of the nucleus forces the nucleus to behave as a tiny bar magnet. In the absence of an external magnetic field, each magnet is randomly oriented. During the NMR experiment the sample is placed in an external magnetic field, \(B_0\), which forces the bar magnets to align with (low energy) or against (high energy) \(B_0\). During the NMR experiment, a spin flip of the magnets occurs, requiring an exact quantum of energy. To understand this rather abstract concept it is useful to consider the NMR experiment in terms of the nuclear energy levels.
Figure 1: Application of a magnetic field to a randomly oriented bar magnet. The red arrow denotes magnetic moment of the nucleus. The application of the external magnetic field aligns the nuclear magnetic moments with or against the field.
Nuclear Energy Levels
As mentioned above, an exact quantum of energy must be used to induce the spin flip or transition. For a nucleus of spin I, there are 2I+1 energy levels. For a spin-1/2 nucleus, there are only two energy levels: the low energy level occupied by the spins which are aligned with \(B_0\) and the high energy level occupied by spins aligned against \(B_0\). Each energy level is given by
\[E=-m\hbar \gamma B_0\]
where m is the magnetic quantum number, in this case +1/2 or -1/2. The energy levels for nuclei with \(I>1/2\), known as quadrupolar nuclei, are more complex and information regarding them can be found here.
The energy difference between the energy levels is then
\[\Delta E=\hbar \gamma B_0\]
where \(\hbar\) is the reduced Planck constant.
A schematic showing how the energy levels are arranged for a spin-1/2 nucleus is shown below. Note how the strength of the magnetic field plays a large role in the energy level difference. In the absence of an applied field the nuclear energy levels are degenerate. The splitting of the degenerate energy levels due to the presence of a magnetic field is known as Zeeman splitting.
Figure 2: The splitting of the degenerate nuclear energy levels under an applied magnetic field. The green spheres represent atomic nuclei which are either aligned with (low energy) or against (high energy) the magnetic field.
Energy Transitions (Spin Flip)
In order for the NMR experiment to work, a spin flip between the energy levels must occur. The energy difference between the two states corresponds to the energy of the electromagnetic radiation that causes the nuclei to change their energy levels. For most NMR spectrometers, \(B_0\) is on the order of several tesla (T) while \(\gamma\) is on the order of \(10^7\) to \(10^8\) rad s\(^{-1}\) T\(^{-1}\). Consequently, the electromagnetic radiation required is on the order of tens to hundreds of MHz (radio frequency). The energy of a photon is represented by

\[E=h\nu\]
and thus the frequency necessary for absorption to occur is represented as:

\[\nu=\frac{\gamma B_0}{2\pi}\]
For the beginner, the NMR experiment measures the resonant frequency that causes a spin flip. For the more advanced NMR users, the sections on NMR detection and Larmor frequency should be consulted.
Figure 3: Absorption of radio frequency radiation to promote a transition between nuclear energy levels, called a spin flip.
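As a quick numerical illustration of that resonance condition, the sketch below uses the commonly tabulated value \(\gamma/2\pi \approx 42.58\) MHz/T for 1H (a standard constant, not a value quoted on this page) to convert between field strength and frequency; the numbers line up with problems 1 and 2 at the end of this page.

```python
# Resonance condition for 1H:  nu = (gamma / 2*pi) * B0
# gamma/2pi for 1H is ~42.58 MHz per tesla (standard tabulated value).

GAMMA_OVER_2PI_1H = 42.58   # MHz / T

def larmor_frequency_mhz(b0_tesla):
    """Resonance frequency (MHz) of 1H in a field of b0_tesla."""
    return GAMMA_OVER_2PI_1H * b0_tesla

def field_for_frequency_tesla(freq_mhz):
    """Field (T) needed for a 1H resonance at freq_mhz."""
    return freq_mhz / GAMMA_OVER_2PI_1H

print(field_for_frequency_tesla(600))   # ~14.1 T (cf. problem 1)
print(field_for_frequency_tesla(500))   # ~11.7 T (cf. problem 2)
print(larmor_frequency_mhz(14.1))       # ~600 MHz
```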
The power of NMR is based on the concept of nuclear shielding, which allows for structural assignments. Every atom is surrounded by electrons, which orbit the nucleus. Charged particles moving in a loop will create a magnetic field which is felt by the nucleus. Therefore the local electronic environment surrounding the nucleus will slightly change the magnetic field experienced by the nucleus, which in turn will cause slight changes in the energy levels! This is known as shielding. Nuclei that experience different magnetic fields due to the local electronic interactions are known as inequivalent nuclei. The change in the energy levels requires a different frequency to excite the spin flip, which, as will be seen below, creates a new peak in the NMR spectrum. The shielding allows for structural determination of molecules.
Figure 4: The effect that shielding from electrons has on the splitting of the nuclear energy levels. Electrons impart their own magnetic field which shields the nucleus from the externally applied magnetic field. This effect is greatly exaggerated in this illustration.
The shielding of the nucleus allows for chemically inequivalent environments to be determined by Fourier Transforming the NMR signal. The result is a spectrum, shown below, that consists of a set of peaks in which each peak corresponds to a distinct chemical environment. The area underneath the peak is directly proportional to the number of nuclei in that chemical environment. Additional details about the structure manifest themselves in the form of different NMR interactions, each altering the NMR spectrum in a distinct manner. The x-axis of an NMR spectrum is given in parts per million (ppm) and the relation to shielding is explained here.
Figure 5: 31P spectrum of phosphinic acid. Each peak corresponds to a distinct chemical environment while the area under the peak is proportional to the number of nuclei in a given environment.
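For readers who want the arithmetic behind the ppm axis: a chemical shift is usually expressed as the frequency offset of a peak from a reference compound, divided by the reference (spectrometer) frequency, multiplied by \(10^6\). The sketch below uses made-up illustrative numbers.

```python
# Chemical shift in ppm: offset from the reference, scaled by the reference
# frequency.  Numbers below are illustrative only.

def chemical_shift_ppm(nu_peak_hz, nu_reference_hz):
    return (nu_peak_hz - nu_reference_hz) / nu_reference_hz * 1e6

nu_ref = 500.0e6            # Hz, reference signal on a "500 MHz" spectrometer
nu_peak = nu_ref + 1000.0   # Hz, a peak 1000 Hz away from the reference

print(chemical_shift_ppm(nu_peak, nu_ref))   # 2.0 ppm
```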
Relaxation refers to the phenomenon of nuclei returning to their thermodynamically stable states after being excited to higher energy levels. The energy absorbed when a transition from a lower energy level to a high energy level occurs is released when the opposite happens. This can be a fairly complex process based on different timescales of the relaxation. The two most common types of relaxation are spin lattice relaxation (T1) and spin spin relaxation (T2). A more complex treatment of relaxation is given elsewhere.
Figure 6: The process of relaxation
To understand relaxation, the entire sample must be considered. By placing the nuclei in an external magnetic field, the nuclei create a bulk magnetization along the z-axis. The excitation pulse also brings the spins into coherence with one another, and the NMR signal may be detected as long as the spins remain coherent. The NMR experiment moves the bulk magnetization from the z-axis to the x-y plane, where it is detected.
- Spin-Lattice Relaxation (\(T_1\)): T1 is the time constant for the recovery of the bulk magnetization along the z-axis after it has been tipped into the x-y plane; after one T1, about 63% (1 - 1/e) of the equilibrium magnetization has recovered. The more efficient the relaxation process, the smaller the T1 value. In solids, since motions between molecules are limited, T1 values are large. Spin-lattice relaxation measurements are usually carried out by pulse methods.
- Spin-Spin Relaxation (\(T_2\)): T2 is the time constant for the spins to lose coherence with one another. T2 can be shorter than or equal to T1.
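The exponential forms commonly used to describe these two processes are sketched below; the \(T_1\) and \(T_2\) values are arbitrary demonstration numbers, not data from this page.

```python
# Commonly used exponential descriptions of relaxation:
#   Mz(t)  = M0 * (1 - exp(-t / T1))   recovery along z (spin-lattice)
#   Mxy(t) = M0 * exp(-t / T2)         loss of coherence in the x-y plane (spin-spin)
import math

M0, T1, T2 = 1.0, 1.5, 0.3   # arbitrary units and seconds, assumed for the demo

def mz(t):
    return M0 * (1.0 - math.exp(-t / T1))

def mxy(t):
    return M0 * math.exp(-t / T2)

print(mz(T1))    # ~0.63*M0: about 63% of the magnetization recovered after one T1
print(mxy(T2))   # ~0.37*M0: about 37% of the coherence left after one T2
```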
The two major areas where NMR has proven to be of critical importance are the fields of medicine and chemistry, with new applications being developed daily.
Nuclear magnetic resonance imaging, better known as magnetic resonance imaging (MRI), is an important medical diagnostic tool used to study the function and structure of the human body. It provides detailed images of any part of the body, especially soft tissue, in all possible planes and has been used in the areas of cardiovascular, neurological, musculoskeletal and oncological imaging. Unlike alternatives such as computed tomography (CT), it does not use ionizing radiation and hence is very safe to administer.
Figure 7: 1H MRI of a human head showing the soft tissue such as the brain and sinuses. The MRI also clearly shows the spinal column and skull.
In many laboratories today, chemists use nuclear magnetic resonance to determine structures of important chemical and biological compounds. In NMR spectra, different peaks give information about different atoms in a molecule according to their specific chemical environments and the bonding between atoms. The most common isotopes used to detect NMR signals are 1H and 13C, but there are many others, such as 2H, 3He, 15N, 19F, etc., that are also in use.
NMR has also proven to be very useful in other areas such as environmental testing, the petroleum industry, process control, Earth's-field NMR and magnetometers. Because the technique is non-destructive, expensive biological samples can be tested and used again if more trials need to be run. The petroleum industry uses NMR equipment to measure the porosity of different rocks and the permeability of different underground fluids. Magnetometers are used to measure the various magnetic fields that are relevant to one's study.
- Calculate the magnetic field, B0 that corresponds to a precession frequency of 600 MHz for 1H.
- What is the field strength (in tesla) needed to generate a 1H frequency of 500 MHz?
- How do spin-spin relaxation and spin-lattice relaxation differ from each other?
- The 1H NMR spectrum of toluene shows that it has two peaks because of methyl and aromatic protons recorded at 60 MHz and 1.41 T. Given this information, what would be the magnetic field at 400 MHz?
- What is the difference between 13C and 1H NMR?
- B0= 14.1 T.
- Using the equation from problem 1 and solving it for B0, we get a field strength of 11.74 T.
- Look under relaxation.
- Since we know that the NMR frequency is directly proportional to the magnetic strength, we calculate the magnetic field at 400 MHz: B0 = (400 MHz/60MHz) x 1.41 T = 9.40 T
- Look under applications.
Keeping Fit Past 500
Scientists have long believed that young forests serve as carbon sinks, consuming more atmospheric carbon dioxide than they produce. Reflecting this assumption, the Kyoto Protocol encourages nations to develop new carbon sinks by restoring lost forests. But the protocol offers no incentives for leaving existing primary and old-growth forests intact. One reason for the omission is that these forests are assumed to be carbon-neutral, with roughly equal rates of carbon storage (through plant growth) and release (through decomposition of woody debris and soil organic matter). A study in Nature challenges this assumption, finding that most old-growth forests continue adding to their carbon stores over a period of centuries.
An international team led by Sebastiaan Luyssaert of the University of Antwerp compiled data on ecosystem productivity—the annual difference between CO2 uptake and release—from 519 different studies in boreal and temperate forests. If old-growth forests are carbon-neutral on a global scale, the large dataset should have shown a more-or-less equal number of sources and sinks, with an average rate of carbon storage near zero.
Instead, Luyssaert’s team found that forests of all ages are much more likely to store carbon than to release it. While carbon storage slows somewhat in forests beyond 80 years of age, it continues to occur in forests that are 300 to 800 years old. The report shows that primary boreal and temperate forests in the northern hemisphere alone sequester at least 1.3 gigatons of carbon per year. These forests constitute about 15 percent of the world’s total forest cover and account for about ten percent of global ecosystem uptake of CO2.
Nonetheless, old growth’s role as an active player in the carbon cycle has been largely overlooked, and Luyssaert points out that the neutrality assumption has been built into numerous ecological models of carbon flux and into carbon-accounting schemes for greenhouse-gas mitigation. He and his coauthors note that their findings make the preservation of old-growth forests even more important—and suggest that carbon-accounting rules for forests should be revised to give credit for leaving old-growth forests intact. ❧
Luyssaert, S. et al. 2008. Old growth forests as global carbon sinks. Nature 455(7210):213-215.
These games teach valuable skills and have a high fun and educational rating.
Your child develops writing skills by learning about the different parts of a letter and then composing one.
Your child develops creative skills by exploring a number of different creative outlets including designing pictures, cards and movies.
Your child learns to write a thank you message to someone and show appreciation by watching this video.
Your child learns how to write stories by doing this activity.
Your child develops writing skills by learning the different parts of a postcard and then creating their own unique postcard.
Your child develops literacy skills by following along with the activity to write a letter for Grandparents Day.
Your child improves writing by answering questions about the proper locations of various aspects of writing a letter.
Your child develops poetry writing skills using a virtual version of Magnetic Poetry.
Your child improves writing by choosing the best answers about making a leaflet.
Your child learns the components needed to write and mail a friendly letter by navigating through Moby's maze. |
June 2017 ENSO update: pancake breakfast
One of the key ways we measure ENSO uses the temperature anomaly in the central tropical Pacific, meaning the difference from the long-term average. So, for example, the 2015-16 El Niño peaked during November–January, when the seasonal Oceanic Niño Index was 2.3°C (around 4°F) above average. But what exactly is average? Like most things having to do with weather & climate, it’s not so simple.
“Average” for the Oceanic Niño Index is based on the most recent 30-year period, updated every 5 years. So right now in 2017, “average” is defined by the 1986-2015 period. Meaning, to get the March–May 2017 Oceanic Niño Index, we would first calculate the average sea surface temperature during March–May over the 30 years from 1986-2015: 27.7°C (81.9°F). Then we’d subtract that from the March–May 2017 temperature, which was 28.1°C (82.6°F), yielding an anomaly of 0.4°C (0.7°F).
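In code, that bookkeeping looks something like the sketch below; the 27.7°C climatology and 28.1°C observation are the values quoted above, while the base-period list is only a schematic stand-in for the real 30 years of March–May data.

```python
# Anomaly = observed seasonal mean minus the 30-year base-period mean.
# The list below is a schematic stand-in for the 30 March-May means of
# 1986-2015 (whose mean, per the text, is 27.7 C).

def oni_anomaly(observed_c, base_period_means_c):
    climatology = sum(base_period_means_c) / len(base_period_means_c)
    return observed_c - climatology

base_1986_2015 = [27.7] * 30        # placeholder values, not real data
observed_mam_2017 = 28.1            # C, quoted in the text

print(round(oni_anomaly(observed_mam_2017, base_1986_2015), 2))   # 0.4
```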
In 2021, the average period will be updated to 1991-2020. We update this 30-year period every five years because the temperature in the central Pacific, just like most of the rest of the world, has been increasing due to human-caused climate change. (Some resources on climate change: climate.gov, NOAA, NASA, IPCC.)
The temperature in the Niño3.4 region hasn’t warmed quite as fast as much of the rest of the planet, but still has increased around 0.5-0.75°C (0.9 – 1.35°F) since the beginning of the previous century. The warming trend over the past 50 or so years is even steeper than that starting in 1900.
By updating our reference for comparison, we can remove most of the warming trend from the temperatures. If we used only one base period (for example, 1971-2000) from when the ocean was cooler overall, the trend would more clearly affect our ability to reach El Niño and La Niña thresholds (in effect, leaving the trend in would make it easier to achieve El Niño and harder to achieve La Niña). There’s more information about this method of trend removal here.
This trend in SSTs poses a problem because there’s a lot more to ENSO than just the temperature of the Niño3.4 region. For example, the temperature gradient (change with distance) across the tropical Pacific is critical to driving the system. ENSO also requires changes in the atmosphere, and there are no associated trends in the Southern Oscillation Index, for example. Identifying a way to distinguish the strength and variability of ENSO events in a warming world is an active area of research by some of the top climate scientists.
The National Aeronautics and Space Administration (NASA) has revealed more details about its Asteroid Redirect Mission (ARM), which has an operation planned for the mid-2020s.
The mission calls for a robotic spacecraft to capture a boulder from the surface of a near-Earth asteroid and move it into a stable orbit around the moon for later exploration by astronauts. Not only will the operation show the feasibility of capturing and moving an asteroid, it will also demonstrate the capability of sending astronauts into deep space and to Mars.
The plan calls for NASA to select a specific asteroid for the mission by 2019, about a year before the launch of the robotic spacecraft. Before selection, scientists will determine the asteroid’s characteristics: size, rotation, shape, and orbit. So far NASA has identified three possible candidates for the mission: Itokawa, Bennu, and 2008 EV5. The agency has said that it will identify one or two more candidates each year leading up to the mission.
After the rendezvous with the target asteroid, the unmanned ARM spacecraft will deploy robotic arms to capture a boulder from the asteroid’s surface. The boulder will then be redirected into an orbit around the moon. The process is expected to take six years.
Besides towing the boulder to a moon orbit, the ARM robotic spacecraft will also test a number of capabilities needed for future human missions into deep space. This will include advanced Solar Electric Propulsion (SEP), which is technology that converts sunlight to electrical power through solar arrays and uses that power to propel charged atoms to move a spacecraft. Although slower than conventional chemical rocket propulsion, this method can move massive cargo very efficiently and requires significantly less propellant and fewer launches in a manned mission. It will also help to reduce the cost of the operation.
According to NASA, future spacecraft powered in this manner could pre-position cargo or vehicles for future manned missions into deep space.
In addition, the ARM’s SEP-powered robotic spacecraft will test new trajectory and navigation techniques in deep space and work with the moon’s gravity to place the asteroid boulder into a stable lunar orbit. This could serve as a staging point for astronauts to rendezvous with as they journey to Mars.
The mission will also provide NASA with the opportunity to test planetary defense techniques to help prevent potential asteroid impact of Earth in the future.
In 2005, NASA performed the Deep Impact comet science mission, which tested technology to assist in changing the course of a near-Earth object using a direct hit with a spacecraft. The ARM robotic spacecraft provides another option for planetary defense.
In the mid-2020s, NASA will launch Orion on a Space Launch System rocket, and carry astronauts to a rendezvous with the asteroid boulder where it will be explored. It is expected that such a mission will take from 24 to 25 days.
This manned mission will also test capabilities needed for a deep space mission to Mars and elsewhere. These capabilities will include new sensor technologies and a docking system that will connect Orion to the robotic spacecraft carrying the asteroid mass. Astronauts will conduct spacewalks, wearing new spacesuits designed for deep space missions, outside Orion and study and collect samples from the asteroid boulder.
Collecting samples will help astronauts and mission managers determine the best methods to secure and safely return samples from Mars. The asteroid samples will also provide data for scientists and commercial entities. This is important because the examination of these remnants will teach us more about the formation of the solar system.
NASA launched the asteroid initiative in 2012 and it has increased the detection of near-Earth Asteroids by 65 percent. More than 12,000 asteroids have been identified. |
There is a ticking time bomb in the Arctic and off most mid-latitude continental shelves. Geologic history indicates that it has happened before (see the National Geographic special "The Day The Oceans Boiled").
If allowed to explode again, there’s a very, very strong chance that the human race and most other earthly life forms won’t survive the experience. This time bomb is rarely reported in mainstream media.
The early warning signs for us today include the changes that suddenly appear as methane-containing permafrost melts, the methane bubbling up in the sea around Siberia, and continuously increasing methane levels across the world.
Five times in the Earth’s history, most of the life on our planet has died off. Increasingly, it looks like global warming drove each of these extinctions, and an explosion in the amount of methane released into the atmosphere was the “final straw” event in most, if not all, of these events.
In two extinction events, the Permian Mass Extinction and the Paleocene-Eocene Thermal Maximum (PETM) methane was almost certainly the final driver of climate extremes so severe they killed off most life.
While most people know that carbon dioxide (CO2) is a greenhouse gas, most don’t realize that methane (CH4 – the “active ingredient” in natural gas) is massively more powerful at trapping atmospheric heat, and when oxidized or metabolized its carbon atom most frequently ends up in a molecule of CO2.
The Clathrate Gun Hypothesis
Vast amounts of methane are stored in permafrost and as a snow cone-like slurry on the floor of our oceans, particularly the shallow Arctic Ocean. That slurry is called Methane Hydrate or Methane Clathrate, referring to methane gas that’s trapped – by temperature and pressure – in a molecular “cage” of frozen water molecules. When the water warms above 32 degrees Fahrenheit, the frozen-water clathrate “cages” dissolve and methane gas is released.
Since around 1850 – the beginning of the modern industrial revolution made possible by fossil fuels – we’ve burned around 350 billion tons of carbon into our atmosphere. It has warmed our planet by around 1 degree Celsius, acidified our oceans, and increased the ferocity of storms and drought around the world.
Burning another 350 billion tons would be a disaster. Yet there are tens of thousands of billions of tons of carbon stored as methane in the Arctic, which could be released by the simple process of warming the Arctic sea and the permafrost surrounding it. Some scientists propose this as the process that drives a truly destructive mass extinction event.
In eras past, the source of the warming that provoked the melting of the clathrates was most likely tectonically induced volcanic activity. Thousands to tens of thousands of years of massive volcanic eruptions warmed the atmosphere and threw enough greenhouse gases into it to eventually warm the oceans to the point where the Clathrate Gun was fired, producing a rapid and destructive acidification of the seas and warming of the atmosphere.
This massive methane release is the absolute worst-case scenario for life on Earth, and nobody knows for sure exactly how much heating of the atmosphere it will take to fire it. But every day we continue to burn fossil fuels moves us closer in that direction. |
How did the Cold War develop? 1943–56
The Cold War, 1956 - 1969.
The Berlin Crisis
The Cuban Missile Crisis
The establishment and control of the Soviet satellite states
How had the USSR gained control of Eastern Europe by 1948?
Between 1945 and 1949 Stalin created a Russian empire in Eastern Europe. This empire included Poland, Hungary, Rumania, Bulgaria, Czechoslovakia and East Germany. Each had a Communist government. In the West they were called satellites because they clung closely to the Soviet Union like satellites round a planet.
Stalin was able to create this empire for a number of reasons. The first was the military might of the Soviet Union in Europe after 1945. Unaffected by the pressures of domestic opinion, Stalin was able to keep huge numbers of troops in a state of readiness, whereas the western powers were under intense pressure to ‘bring the boys back home’ as soon as possible. Neither Britain nor the United States were prepared to fight over Eastern Europe and Stalin knew this.
Another reason for the spread of Communism after the war was the gratitude of many Eastern Europeans for their liberation from Nazism. This, and the often appalling conditions at the end of the war, played into the hands of east European communist parties, which were, of course, backed by Stalin and the Soviet Union. Leaders of these parties were often trained in Moscow and certainly received much friendly assistance from the Russians. At first Stalin moved slowly. There was no sudden imposition of Soviet Communism. Opposition parties were allowed and in the first elections the voters were given a relatively free choice, provided the governments they chose were at least sympathetic to Communist aims and ideals. But gradually the East European Communists took over the running of their countries.
By 1946 the West was becoming increasingly aware of what was happening in eastern Europe. One of the most prominent critics of the changes was Winston Churchill, now leader of the opposition in Britain. In March 1946 he made his now famous ‘Iron Curtain’ Speech at Fulton, Missouri (USA) in which he declared that: “From Stettin in the Baltic to Trieste in the Adriatic an iron curtain has descended across the continent”. Further, he claimed that the Russians were intent on “indefinite expansion of their power and doctrines”.
In the meantime, state after state in eastern Europe fell under communist control and Soviet influence. The map below details a number of these conquests for the Soviet Union.
The story of two countries will serve to illustrate the nature of the Soviet takeover.
At the Tehran Conference in 1943 Stalin had agreed to attack Germany through Poland and the Danube countries (Austria, Hungary, Rumania and Bulgaria). The USA and Britain also agreed that the USSR was to get its pre-1921 land back from Poland (which the Poles had seized during the civil war). The Poles were to get parts of eastern Germany as compensation, including the rich industrial area of Silesia. But the Polish government-in-exile in London refused to accept this proposal which, despite their objection, later became part of the Yalta Agreement.
As the Red Army approached the Polish capital, Warsaw, in August 1944, the London Polish government organised a desperate rising by the Polish Home Army against both the Germans and the thought of a Red Army ‘liberation’. It was a disaster. After two months 200,000 Polish civilians were dead. Warsaw was flattened by the Germans. For his part, Stalin had refused to support the rising and even ordered the Red Army to halt its advance, giving the Germans the opportunity to crush the rising mercilessly.
The failure of the rising destroyed the support the London government-in-exile had enjoyed in Poland itself. Stalin’s Communist-dominated Provisional Government of National Union in Lublin won the initiative and gained in support. This government signed a Treaty of Friendship and Postwar Cooperation with Stalin, who promised his support in return. The London Poles were forced to join this government as a minority partner in June 1945 and to accept the Yalta Agreement.
In Poland, although individuals were persecuted, there was none of the heavy repression which had taken place in the USSR in the 1930s. At first the communists were relatively popular. They had fought the Nazis as nationalists and so were considered by many to be heroes. They also brought in land reform which gave land to the peasants who made up two-thirds of the population.
In the elections of January 1947 the Communists and their allies won 384 out of 444 seats in what was seen in the West as a rigged election. The Peasant Party leader, who had not been able to campaign freely, resigned and fled into exile in London. After that the Communist government banned other political parties and established a one-party state.
Events in Czechoslovakia were just as tragic. The prewar President, Benes, was not a communist, but he no longer trusted the West. He had seen at Munich in 1938 how Britain and France had abandoned his country to Hitler. He was therefore determined to establish good relations with the USSR in order to have protection for his country in the future. He visited Stalin and told him how he intended to favour the communists in his own country after the war. In return he wanted Stalin’s help to deport the 2 million Germans still living in Czechoslovakia. Stalin got this request written into the Potsdam Declaration.
In May 1946 the Communist Party received 38% of the vote in free elections. Again, to many Czechs the communists were national heroes at this time. The Social Democrats also did well in the elections and, as their leadership was largely in favour of an alliance with the USSR, the two parties formed a coalition government with Benes as President and Klement Gottwald as Prime Minister.
However, in 1947, a dispute arose over whether the Czechs should seek aid from the American Marshall Plan (see below). Benes, and other non-communists in the government, hoped that Czechoslovakia could become a bridge between east and west. Stalin, however, was determined to prevent this and therefore approved a coup d’état by Gottwald to remove the opposition and force Benes to resign. A month later, the leading non-communist in the government, the Foreign Minister, Jan Masaryk, was found dead beneath his office windows. His death was officially described as suicide, but subsequent opening of the archives proved that it was murder. When new elections were held in 1948 there was only one list of candidates, all communists.
Commonly found in moist, acidic soil where there is plenty of sunlight.
This plant is specifically found in North and South Carolina, but can be found in all of the southeastern states of the U.S.
Due to the Venus flytrap’s sandy coastal habitat, it does not get the nutrition needed to thrive, specifically Nitrogen; thus, the consumption of insects allows this carnivorous plant to obtain the nourishment it needs.
When an insect touches one of the spines twice, the trap quickly closes the majority of the way. The trap does not close if it is touched once; this saves it from using energy to close if a leaf falls upon it.
When an insect is caught the trap is tightly sealed and will remain so for up to a week before re-opening.
If an insect is too large it will hang out of the trap. This causes bacteria to grow on the insect, which then spreads to the trap itself causing it to decay and eventually fall off.
Carnivorous plants have existed on Earth for thousands of years. There are more than 575 kinds of plants that supplement their food supply with insects. Venus flytraps gather nutrients from gases in the air and from the soil. They live in nitrogen-poor environments, so they have adapted to gathering additional nutrients from insects. The leaves of the Venus flytrap are wide with short, stiff trigger hairs. Once an object bends these hairs, the trap will close. It does not close all the way; this is thought to let small insects escape as they would not provide enough food for the plant. If the right size insect is captured, the trap will seal completely shut and digestive juices will be secreted to start breaking down the insect. This process takes 5-12 days. The exoskeleton is not broken down and will blow away or be washed out once the trap reopens. The time it takes for the trap to reopen depends on the size of the insect, the temperature, the age of the plant and how often it has completed this process before.
There is a lot of habitat loss due to drainage, development and fire suppression. The biggest factor in the loss of habitat is major timber growth where land has been drained, cleared and planted in a pine culture. The Venus flytrap is a state ‘Species of Special Concern’ in North Carolina. Illegal trafficking of this species is regulated under CITES.
Climate change signifies variations in the global or regional climates of the earth. It describes the changes in atmospheric states over time, ranging from hundreds to millions of years. Climate change can be due to earth’s internal processes and external forces such as varying intensity of sunlight and human activities. In recent years, the term “climate change” has been linked primarily to global warming, a global condition in which there is a rise in the average surface temperature of the earth.
NASA continually develops new ways to observe essential signs from the earth. This includes the air, space and land, from satellites to ground-based and airborne campaigns. Research teams perform studies and monitor the interconnected systems of earth with computer analysis tools and data records. This is done to observe how the planet is changing over the long term.
NASA Research Operations
The average global temperature has steadily increased over the recent few decades. In the Arctic region, temperatures are rising up to three times faster than elsewhere. Three airborne research campaigns were carried out by NASA from Alaska. These campaigns are directed at gathering data on Arctic clouds and sea ice, observing the Alaskan glaciers and assessing concentrations of greenhouse gas close to the surface of the earth. These observations will help researchers better understand the reaction of the Arctic region to rising temperatures.
CARVE (Carbon in Arctic Reservoirs Vulnerability Experiment) is a 5-year airborne research operation handled by the Jet Propulsion Laboratory in California. This research operation utilises C-23 Sherpa aircraft to gather a detailed picture of the way the atmosphere and the land interact with each other in the Arctic. This procedure is done 2 weeks a month between May and November.
ARISE (Arctic Radiation Ice Bridge Sea and Ice Experiment) is another NASA airborne campaign that measures atmospheric and cloud properties. It gathers data on thinning ice in the Arctic Sea to answer questions regarding the relationship between the Arctic climate and the diminishing sea ice. The goal is to better understand the causes of ice loss in the Arctic and how it links to the overall system of the earth. This campaign collects unique and timely data to identify the influence of clouds on the climate when changes occur in sea ice conditions.
Effects of Climate Change
According to recent studies that focus on the future effects of climate change, the US in particular will face great potential risk in their economy. The effects of climate change will significantly affect crop yields, jobs and energy production. Though business research does not present any new climate science, it highlights the dire effects on the economy if appropriate action is not taken. For example, concerns include crop yields that could fall by more than 70% in the Midwest and property worth billions of dollars on the East Coast literally underwater. A study by a bipartisan group of former prominent public officials, businessmen and entrepreneurs acknowledged the need for more research on climate change and a need for action from the investment community. The study influenced the business community to apply the science of risk management as a major tool for combating climate change.
The study revealed that by the middle of the century, there is a chance for more than $23 billion of prime property in Florida to go underwater. It also revealed that by the end of the century, there’s a one in a hundred chance that property worth as much as $681 billion will submerge. Threats are widespread across the US and a new report has found that climate change has already had a broad impact on the economy.
Cause of Climate Change
A realistic projection of what could happen is that agricultural yields in some parts of the US could fall by up to 50%. This is due to the temperatures that prevent farm workers from going outdoors for an appreciable part of the year.
According to scientists, carbon emissions are a major cause of climate change. It has been proposed to levy a tax on carbon emissions as an incentive to wean the US economy off carbon-based fuel. While a carbon tax is one way to let the market operate by placing a price on the pollution caused by carbon emissions, it is unlikely to pass in Congress soon enough to have the required impact. A requirement by the Financial Accounting Standards Board and the Securities and Exchange Commission for companies to disclose the potential risks of climate change on company assets and profits has been proposed, which does not need the approval of Congress.
There are hundreds of research projects ongoing worldwide. The main goal is to try to counteract the effects of climate change as people realise that we have to address the problem on all possible fronts. If this problem is not given great attention, there is a real possibility that life on earth, as we know it, is at stake. Emissions of carbon dioxide are the main contributors to global warming, and they are set to rise once more, probably reaching a record 40 billion tonnes in 2014. Research shows that the remaining CO2 emission quota is in danger of being used up within the next generation and that more than 50% of fossil fuel reserves will have to be left untapped. UK researchers at the Tyndall Centre for Climate Change project a rise of 2.5% in the burning of fossil fuels, which will have a tremendous impact on global warming.
It’s easy to find activities in science, especially with the Internet. But integrating content and activities/investigations in a planned and purposeful (and engaging) way can be a challenge for teachers. The articles in NSTA publications have many examples of how this can be done, including planning tools, rubrics, connections to standards, and assessments. Tools such as SciLinks can provide just-in-time content and background information for both students and teachers [See Scientific Investigations and Developing Classroom Activities for examples.]
The featured articles in this issue focus on these planned and purposeful activities and investigations:
- Noodling for Mollusks (even the title of the article is intriguing) describes how to model and practice field sampling with students. I must admit I was unfamiliar with the term “noodling”– searching for an organism using your sense of touch but not your sense of sight (sounds like a real-life application of the mystery box). The article describes a classroom simulation based on the experiences of one of the authors. So even if you don’t have access to an aquatic environment, you can use their directions to create a noodling site, collect data, and analyze the results. [SciLinks: Mollusks]
- Make Your Own Phylogenetic Tree has a detailed description of a simulation to help students understand phylogeny and molecular similarity. [SciLinks: Phylogenetic Trees, Mutations]
- Chemistry Cook-Off shows how cooking can be used to help students learn chemistry concepts, such as chemical and physical changes. (But remember that cooking and eating in the science lab is not a safe practice.) The article includes guidelines and a rubric. [SciLinks: Physical/Chemical Changes. See also Kitchen Chemistry from the Royal Society of Chemistry (UK), the Science of Cooking from the Exploratorium, and Cheeseburger Chemistry from NBC Learn and NSTA.]
- It’s All in the Particle Size describes an investigation about sedimentation and topics related to weathering, erosion, and deposition as a prelude to a study of sedimentary rock. The author includes the essential questions for the investigation and graphics related to the investigation [SciLinks: Soil, Weathering, Sedimentary Rock]
- The author of A Hidden Gem describes the important role teachers play in guiding students as they access and use online resources. She describes a three-phase approach to a student investigation of global warming (the GEM of the title – Generate ideas, Evaluate ideas, Modify ideas) and the online resources used. [SciLinks: Global Warming, Climate Change]
Don’t forget to look at the Connections for this issue (November 2012), which includes links to the studies cited in the research article. These Connections also have ideas for handouts, background information sheets, data sheets, rubrics, etc. |
THE GREAT DEPRESSION
The Great Depression (1929-39) was the deepest and longest-lasting economic downturn in the history of the Western industrialized world. In the United States, the Great Depression began soon after the stock market crash of October 1929, which sent Wall Street into a panic and wiped out millions of investors. Over the next several years, consumer spending and investment dropped, causing steep declines in industrial output and rising levels of unemployment as failing companies laid off workers. By 1933, when the Great Depression reached its nadir, some 13 to 15 million Americans were unemployed and nearly half of the country’s banks had failed. Though the relief and reform measures put into place by President Franklin D. Roosevelt helped lessen the worst effects of the Great Depression in the 1930s, the economy would not fully turn around until after 1939, when World War II kicked American industry into high gear. This affected the people in Roll of Thunder, Hear My Cry because they were not treated equally and people were trying to buy their land.
Discrimination Worksheet Essay
• What is discrimination? How is discrimination different from prejudice and stereotyping?
Discrimination is unfair treatment of different categories of people based on many things including race, religion, culture, orientation, and so on. Prejudice is, in my terms, judging someone without actually knowing anything about them. Stereotyping is very similar to prejudice, but it involves widely known groups that people are placed in, like jock or nerd. Discrimination is different because you are acting on the hatred you have for people instead of just thinking about it.
For example, it is the difference between thinking about killing someone and actually doing it. Discrimination is probably the most hurtful because you are being open about it to someone instead of thinking it to yourself.
• What are the causes of discrimination?
Many things can cause discrimination. The main thing, I would say, is it is a learned behavior. This means these people who discriminate were probably raised to feel this way towards a certain group of people.
What you learn growing up can stay with you for the rest of your life. Here you are a defenseless child who knows nothing but are told to hate a certain group of people, you are going to listen because you were raised to. Then, when you’re older, you will automatically discriminate against this group because you were told to. There is discrimination against people of other races because they have a different skin color which makes them “different.” There is discrimination against gay people because we don’t understand why they “choose” to be that way. These are just a couple examples of what causes discrimination.
• How is discrimination faced by one identity group (race, ethnicity, religious beliefs, gender, sexual orientation, age, or disability) the same as discrimination faced by another? How are they different?
I think discrimination is faced by all groups the same in one way: they are all getting treated unfairly because of their race, ethnicity, religion, etc. I don’t think there is a single person in life, even a white male, who has not experienced some form of discrimination in their lifetime. However, other than that one fact, I think everyone faces discrimination differently. People of different races deal with being called a lot of names. Also, people with different religions get made fun of for what they believe. Women receive a lot of negativity when they try to move up in a company because “the man” is supposed to. Gay men and women are frequently told they are going to Hell and God doesn’t approve. The funny thing is, most gay people I know believe in God and go to church regularly! Discrimination is faced by many different groups in very different ways.
While initially believed to have a circular orbit, the orbit of the Earth is actually elliptical. In 365 days, the Earth makes a complete orbit around the sun. During the Earth's one year period, the closest distance the Earth gets to the sun is 147 million km, while the greatest distance is 152 million km.
Amazing But True
- According to Newton's law, every object in the universe attracts every other object with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance that separates them. Newton determined that the force between two bodies is given as:

$$F = \frac{G m_1 m_2}{r^2}$$

where $G$ is the universal gravitational constant, and $m_1$ and $m_2$ are the masses of the objects. Since the Earth is in orbit around the sun, we can define the force on the earth to be a centripetal force:

$$F_c = \frac{m v^2}{r}$$

If we rewrite the speed of the orbit in terms of its period, $v = 2\pi r / T$, we get

$$F_c = \frac{4 \pi^2 m r}{T^2}$$

where $r$ is the radius for a circular orbit or the length of the semi-major axis for an elliptical orbit. A short numerical check of these relations follows this list.
- Learn more about the Earth's elliptical orbit: http://hyperphysics.phy-astr.gsu.edu/hbase/orbv.html
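As a quick numerical check of the relations above, the sketch below computes the Sun–Earth gravitational force and the period of a circular orbit. The constants used (G, the solar and terrestrial masses, the mean Sun–Earth distance) are standard reference values supplied here as assumptions; they are not quoted in the text above.

```python
import math

# Assumed reference values (not given in the text above)
G = 6.674e-11          # universal gravitational constant, N m^2 / kg^2
M_SUN = 1.989e30       # mass of the Sun, kg
M_EARTH = 5.972e24     # mass of the Earth, kg
R_ORBIT = 1.496e11     # mean Sun-Earth distance, m (~150 million km)

# Newton's law of gravitation: F = G * m1 * m2 / r^2
force = G * M_SUN * M_EARTH / R_ORBIT**2

# Equating gravity to the centripetal force, m*(2*pi*r/T)^2 / r = G*M*m / r^2,
# gives the familiar result T = 2*pi*sqrt(r^3 / (G*M)).
period_s = 2 * math.pi * math.sqrt(R_ORBIT**3 / (G * M_SUN))
period_days = period_s / 86400

print(f"Gravitational force on Earth: {force:.3e} N")
print(f"Circular-orbit period: {period_days:.1f} days")  # comes out close to 365 days
```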
Show What You Know
Using the information provided above, answer the following questions.
- What force is keeping the Earth in orbit around the Sun?
- From the link provided, determine the reduced mass for two orbiting bodies of mass $m_1$ and $m_2$.
- If the Earth had a uniform density, how would you expect the force of gravity to change as you get closer and closer to the center of the earth?
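A minimal sketch of the reduced-mass calculation from the second question, assuming the two bodies are the Sun and the Earth; both mass values are assumptions, so substitute whatever masses your source specifies.

```python
def reduced_mass(m1, m2):
    """Reduced mass of a two-body system: mu = m1*m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

# Assumed placeholder values: Sun and Earth masses in kg
m_sun, m_earth = 1.989e30, 5.972e24
print(f"Reduced mass: {reduced_mass(m_sun, m_earth):.3e} kg")  # very nearly Earth's mass
```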
It is 1929 and the misery that had aided the efforts of Weimar’s enemies in the early 20s has been relieved by five years of economic growth and rising incomes. Germany has been admitted to the League of Nations and is once more an accepted member of the international community. Certainly the bitterness at Germany's defeat in the World War I and the humiliation of the Treaty of Versailles had not been forgotten but most Germans appeared to have come to terms with the new Republic and its leaders.
Gustav Stresemann has just died. Germany has, in part as a result of his efforts, become a respected member of the international community again. Stresemann often spoke before the League of Nations. With his French and American counterparts Aristide Briand and Frank Kellogg, he had helped negotiate the Paris Peace Pact, which bore the name of his fellow diplomats: Kellogg-Briand. Once again Gustav Stresemann had decided to take on the arduous job of leading a battle for a policy he felt was in his nation’s vital interest, even though he was tired and ill and knew that the opposition would be stubborn and vitriolic. Stresemann was the major force in negotiating and guiding the Young Plan through a plebiscite. This plan, although opposed by those on the right wing, won majority approval and further reduced Germany’s reparations payments.
How had Weimar Germany become by 1929 a peaceful, relatively prosperous and creative society, given its chaotic and crisis-ridden beginnings? What significant factors contributed to the survival and success of the Republic? What were the Republic’s vulnerabilities, which would allow its enemies to undermine it in the period between 1929 and 1933?
The Weimar Republic was a bold experiment. It was Germany's first democracy, a state in which elected representatives had real power. The new Weimar constitution attempted to blend the European parliamentary system with the American presidential system. In the pre-World War I period, only men twenty-five years of age and older had the right to vote, and their elected representatives had very little power. The Weimar constitution gave all men and women twenty years of age and older the right to vote. Women made up more than 52% of the potential electorate, and their support was vital to the new Republic. From a ballot, which often had thirty or more parties on it, Germans chose legislators who would make the policies that shaped their lives. Parties spanning a broad political spectrum from Communists on the far left to National Socialists (Nazis) on the far right competed in the Weimar elections. The Chancellor and the Cabinet needed to be approved by the Reichstag (legislature) and needed the Reichstag's continued support to stay in power.
Although the constitution makers expected the Chancellor to be the head of government, they included emergency provisions that would ultimately undermine the Republic. Gustav Stresemann was briefly Chancellor in 1923 and for six years foreign minister and close advisor to Chancellors. The constitution gave emergency powers to the directly elected President and made him the Commander-in-Chief of the armed forces. In times of crisis, these presidential powers would prove decisive. During the stable periods, Weimar Chancellors formed legislative majorities based on coalitions primarily of the Social Democrats, the Democratic Party, and the Catholic Center Party, all moderate parties that supported the Republic. However, as the economic situation deteriorated in 1930, and many disillusioned voters turned to extremist parties, the Republic's supporters could no longer command a majority. German democracy could no longer function as its creators had hoped. Ironically by 1932, Adolf Hitler, a dedicated foe of the Weimar Republic, was the only political leader capable of commanding a legislative majority. On January 30, 1933, an aged President von Hindenburg reluctantly named Hitler Chancellor of the Republic. Using his legislative majority and the support of Hindenburg's emergency presidential powers, Hitler proceeded to destroy the Weimar Republic.
Germany emerged from World War I with huge debts incurred to finance a costly war for almost five years. The treasury was empty, the currency was losing value, and Germany needed to pay its war debts and the huge reparations bill imposed on it by the Treaty of Versailles, which officially ended the war. The treaty also deprived Germany of territory, natural resources, and even ships, trains, and factory equipment. Her population was undernourished and contained many impoverished widows, orphans, and disabled veterans. The new German government struggled to deal with these crises, which had produced a serious hyperinflation. By 1924, after years of crisis management and attempts at tax and finance reform, the economy was stabilized with the help of foreign, particularly American, loans. A period of relative prosperity prevailed from 1924 to 1929. This relative "golden age" was reflected in the strong support for moderate pro-Weimar political parties in the 1928 elections. However, economic disaster struck with the onset of the world depression in 1929. The American stock market crash and bank failures led to a recall of American loans to Germany. This development added to Germany's economic hardship. Mass unemployment and suffering followed. Many Germans became increasingly disillusioned with the Weimar Republic and began to turn toward radical anti-democratic parties whose representatives promised to relieve their economic hardships.
Class and Gender
Rigid class separation and considerable friction among the classes characterized pre-World War I German society. Aristocratic landowners looked down on middle and working class Germans and only grudgingly associated with wealthy businessmen and industrialists. Members of the middle class guarded their status and considered themselves to be superior to factory workers. The cooperation between middle and working class citizens, which had broken the aristocracy's monopoly of power in England, had not developed in Germany. In Weimar Germany, class distinctions, while somewhat modified, were still important. In particular, the middle class battled to preserve their higher social status and monetary advantages over the working class. Ruth Fischer wanted her German Communist party to champion the cause of the unemployed and unrepresented.
Gender issues were also controversial as some women's groups and the left-wing political parties attempted to create more equality between the sexes. Ruth Fischer struggled to keep the Communist party focused on these issues. As the Stalinists forced her out of the party the Communists lost this focus. Other women's groups, conservative and radical right-wing political parties, and many members of the clergy resisted the changes that Fischer and her supporters advocated. The constitution mandated considerable gender equality, but tradition and the civil and criminal codes were still strongly patriarchal and contributed to perpetuating inequality. Marriage and divorce laws and questions of morality and sexuality were all areas of ferment and debate.
Weimar Germany was a center of artistic innovation, great creativity, and considerable experimentation. In film, the visual arts, architecture, craft, theater, and music, Germans were in the forefront of the most exciting developments. The unprecedented freedom and widespread latitude for varieties of cultural expression led to an explosion of artistic production. In the Bauhaus arts and crafts school, in the studios of the film company UFA, in the theater of Max Reinhardt and the studios of the New Objectivity (Neue Sachlichkeit) artists, cutting-edge work was being produced. While many applauded these efforts, conservative and radical right-wing critics decried the new cultural products as decadent and immoral. They condemned Weimar Germany as a new Sodom and Gomorrah and attacked American influences, such as jazz music, as contributors to the decay.
Weimar Germany had a population that was about 65% Protestant, 34% Catholic and 1% Jewish. After German unification in 1871, the government had strongly favored the two major Protestant Churches, Lutheran and Reformed, which thought of themselves as state-sponsored churches. At the same time, the government had harassed and restricted the Catholic Church. Although German Catholics had only seen restrictions slowly lifted in the pre-World War I period, they nevertheless demonstrated their patriotism in World War I. German Jews, who had faced centuries of persecution and restriction, finally achieved legal equality in 1871. Jews also fought in record numbers during World War I and many distinguished themselves in combat. Antisemites refused to believe the army’s own figures and records and accused the Jews of undermining the war effort. The new legal equality of the Weimar period did not translate into social equality, and the Jews remained the "other" in Germany.
Catholics and Jews both benefited from the founding of the Weimar Republic. Catholics entered the government in leadership positions, and Jews participated actively in Weimar cultural life. Many Protestant clergymen resented the loss of their privileged status. While many slowly accepted the new Republic, others were never reconciled to it. Both Protestant and Catholic clergy were suspicious of the Socialists who were a part of the ruling group in Weimar and who often voiced Marxist hostility toward religion. Conflicts over religion and education and religion and gender policies were often intense during the Weimar years. The growth of the Communist Party in Germany alarmed Protestant and Catholic clergy, and the strong support the Catholic Center Political Party had given to the Republic weakened in the last years of the Republic. While Jews had unprecedented opportunities during the Weimar period, their accomplishments and increased visibility added resentment to long-standing prejudices and hatreds and fueled a growing antisemitism. |
What two countries contributed to the American victory in the American Revolution?
I assume that this question refers to the American victory in the Revolutionary War. This is the most prominent war in which exactly two countries helped America to win. I have changed the question to show that.
In the Revolutionary War, the colonies achieved victory with help from Spain and from France. Both of these countries were motivated mainly by their desire to see Great Britain weakened. Both Spain and France had much more absolute monarchies than Britain did, so they were clearly not helping the American colonies on the grounds that democracy was the best form of government. Instead, they were enemies of Britain and wanted that country to be weakened.
Of these two countries, by far the more important was France. It was French aid that truly allowed the Americans to win the war. Without French money, arms, and soldiers, the colonists would have been much less likely to win. Without French naval help, the final battle at Yorktown could not have been won.
The French, then, were the main source of foreign aid for the colonies, but Spain helped as well.
Fetal hiccups can show up as early as the first trimester, but they usually show up around the second or third trimester. When a fetus hiccups, the mother feels little spasms in her belly that are different from other pregnancy movements. Almost all women will feel their fetus hiccuping at least once during the pregnancy, if not more. Some babies will hiccup on a daily basis and others even more frequently.
A contracting diaphragm can trigger hiccups in mature fetuses. In order for a fetus to hiccup in the womb, its central nervous system must be complete. The central nervous system gives the fetus the ability to breathe in amniotic fluid. A hiccup results when the fluid enters and exits the fetus’s lungs, causing the diaphragm to contract rapidly. Fetal hiccups are quite common and can often be seen on an ultrasound as jumping or rhythmic movements. Fetal hiccups are reflexive and do not appear to cause discomfort. In addition, hiccups prepare the fetus’s lungs for healthy respiratory function after birth and they help regulate the baby's heart rate during the third trimester.
Occasionally fetal hiccups can occur when the fetus is not getting enough air. If a woman notices a sudden decrease in the frequency, intensity or length of the fetal hiccups, then she should seek medical attention right away to check for an umbilical cord compression. A cord compression occurs when the umbilical cord wraps around the fetus’s neck, cutting off the air supply. When the cord wraps around the fetus’s neck, the fetal heart rate increases and blood flow from the umbilical cord to the fetus declines. Cord compression is rare and usually happens over time as the fetus moves within the womb.
A mature fetus may hiccup as he or she develops the reflex that will allow him or her to suckle or drink from the mother’s breast after birth. Once the fetus is born, the suckling reflex will prevent milk from entering the baby’s lungs. |
This book attempts the discussion of two very important problems in primary education. First, the oral work in the handling of stories, and second, the introduction to the art of reading in the earliest school work. The very close relation between the oral work in stories and the exercises in reading in the first three years in school is quite fully explained. The oral work in story-telling has gained a great importance in recent years, but has not received much discussion from writers of books on method.
for things worth reading, and then incorporate these and similar stories into the regular reading exercises as far as possible.
In accordance with this plan, children, by the time they are nine or ten years old, will become heartily acquainted with three or four of the great classes of literature, the fables, fairy tales, myths, and such world stories as Crusoe, Aladdin, Hiawatha, and Ulysses. Moreover, the oral treatment will bring these persons and actions closer to their thought and experience than the later reading alone could do. In fact, if children have reached their tenth year without enjoying those great forms of literature that are appropriate to childhood, there is small prospect that they will ever acquire a taste for them. They have passed beyond the age where a liking for such literature is most easily and naturally cultivated. They move on to other things. They have passed through one great stage of education and have emerged with a meagre and barren outfit.
The importance of ora |
Reducing disparities (1)
Reducing disparities with free trade:
Free trade: The exchange of goods and/or services among countries that occurs without barriers such as tariffs or quotas
Gives local companies a chance to become global companies (TNC) i.e. Pollo Campero
Countries who participate in free trade grow faster (Mexico has increased its exports since joining NAFTA)
It makes products less expensive and can encourage citizens to buy them
Improvement to local infrastructure
Attracts foreign direct investment (jobs are created for local workers)
TNC's may take over local producers (i.e. Walmart moving into El Salvador and taking over local supermarkets)
Workers are often exploited by TNC's and paid low wages for long hours
Increases transport costs and contributes to global warming (to ship the products abroad)
Reducing disparities (1 bis)
Trade that attempts to be economically, socially and environmentally responsible.
For example, People Tree is a company that follows the principles of fair trade as set out by the WFTO. Working closely with 50 fair trade groups in 15 countries, People Tree is a textiles company aiming to bring benefits to people at every step of the production process, therefore helping to alleviate poverty in some of the world's most marginalized communities.
It reduces disparities by providing a better standard of living, and money to reinvest in farms.
There are conflicts of interest: consumers want to pay the lowest price, but goods have to be expensive in order for the product to be fair (farmers have to send their kids to school, etc.)
Reducing disparities (2)
Reducing disparities with market access:
Many MEDC's protect their economies with import tariffs, subsidies and export quotas. Trade isn't free because of existing trade blocs. The WTO wants to eliminate these measures.
If countries can trade freely, MEDC's will probably specialize in what they are good at: knowledge-intensive products, and LEDC's will probably increase their share of trade.
It links the labour markets of MEDC's to the labour markets of LEDC's. This integration had many benefits:
-raised average living standards in MEDC's and accelerated development in LEDC's.
However, it has hit unskilled workers in MEDC's, reducing their wages and pushing them out of jobs. Governments must take action, or MEDC's will continue to suffer from rising inequality and mass unemployment.
Over the past few decades, LEDC's have ceased to merely export primary products, and their exports of manufactured goods have increased massively. They are now substantial exporters of services such as shipping or tourism.
Reducing disparities (3)
Reducing disparities with debt relief and SAP's
It is a program for Heavily Indebted Poor Countries (created by the IMF and the World Bank) which relieved 36 countries of part of their debt
It is the poorest countries who have to spend a greater percentage of their GDP on debt repayment.
After decolonisation, the countries received loans. The borrowing of money didn't lead to the expected growth and soon many countries had mountains of debt. As interest payments rise, many countries are unable to pay back their debt.
SAP's (structural adjustment programs) were designed to cut government expenditure, reduce the amount of state intervention in the economy, and promote liberalization and international trade
Reducing disparities (types of aid)
Reducing disparities with aid: Emergency, short-term aid: aid that is needed immediately after a disaster such as an earthquake or hurricane. Could be in the form of emergency accommodation, food or clothing (i.e. Somalia famine 2010, Haiti earthquake 2010)
Bilateral aid: When one country donates money or resources to another (may be in the form of tied aid, where conditions are attached). It is very political and often doesn't reach the people that need it most
Charitable aid: funded by donations from the public through charitable organisations
Long-term aid: aims to help the country develop sustainably in the future, through introducing schemes to improve education and health care systems in developing countries
Multilateral aid: Involves governments giving money to a central international organization such as the World Bank, who then decides how the money will be spent.
Bottom-up aid: small-scale, targeting specific groups of people - tries to mobilize and empower people - involves direct assistance without government involvement - people are seen as actors in development, local people in charge - women are key producers & make decisions
Reducing disparities (top-down aid)
They are large-scale development projects that tend to be imposed from above.
It focuses on providing services through government or charities.
The donors are in charge.
Women are seen as a vulnerable group, passive receivers of aid.
They may be large infrastructure projects or national health/education programs (i.e. Free education program in Kenya)
Reducing disparities (pros & cons of aid)
Aid in general:
-After a natural disaster, aid can be vital in saving lives (it can’t always be provided by the government)
-It can help to build expensive infrastructure that wouldn’t normally be built (new roads, ports or irrigation stations)
-can help build schools and hospitals that improve the health and education of locals.
-local workers are employed. It teaches new skills and builds technical expertise (especially in bottom-up aid)
-many charities provide education about hygiene, diet and health. They improve the well-being of societies
-countries can become dependent on money given by foreign donors, instead of developing their own economy to become independent
-food aid can depress local agricultural prices, which can take local farmers out of business and resulting in greater poverty in rural areas, which increases the risk of famine
-aid may stop because of political changes in donor/receiving country (also, risk of corruption)
-aid might fund inappropriate and/or harmful technologies that cannot be sustained after aid has been removed (i.e. nuclear power)
-projects like roads/dams can cause large-scale environmental problems
Reducing disparities (remittances)
Remittances: the transfer of money and/or goods by foreign workers to their home countries
The money enters at a local level/community : it provides basic needs
It spurs investment
It can make a significant contribution to many countries' overall income (i.e. El Salvador received the equivalent of 20% of its GDP from Salvadorians living abroad, mainly in the US)
Reduces pressure on schools, hospitals and infrastructure (house, water, transport, electricity)
Migrants return with new skills (language, ICT)
There is dependency on the sender
It isn't stable, and is sensitive to recession (families in Mexico suffered when the crisis hit)
Brain drain -> usually, the youngest, most educated and skilled choose to leave
Creates family division & family pressure (the need to provide)
Increased dependency ratio in the losing country, placing pressure on the government
Reducing disparities (8)
development takes place via a fixed linear path which exists
A bimetallic strip is used to convert a temperature change into mechanical displacement. The strip consists of two strips of different metals which expand at different rates as they are heated, usually steel and copper, or in some cases steel and brass. The strips are joined together throughout their length by riveting, brazing or welding. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled.
The sideways displacement of the strip is much larger than the small lengthways expansion in either of the two metals. This effect is used in a range of mechanical and electrical devices. In some applications the bimetal strip is used in the flat form. In others, it is wrapped into a coil for compactness. The greater length of the coiled version gives improved sensitivity.
The earliest surviving bimetallic strip was made by the eighteenth-century clockmaker John Harrison who is generally credited with its invention. He made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. It should not be confused with his bimetallic mechanism for correcting for thermal expansion in the gridiron pendulum. His earliest examples had two individual metal strips joined by rivets but he also invented the later technique of directly fusing molten brass onto a steel substrate. A strip of this type was fitted to his last timekeeper, H5. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England.
Mechanical clock mechanisms are sensitive to temperature changes which lead to errors in time keeping. A bimetallic strip is used to compensate for this in some mechanisms. The most common method is to use a bimetallic construction for the circular rim of the balance wheel. As the spring controlling the balance becomes weaker with increasing temperature, so the balance becomes smaller in diameter to keep the period of oscillation (and hence timekeeping) constant.
In the regulation of heating and cooling, thermostats that operate over a wide range of temperatures are used. In these, one end of the bimetal strip is mechanically fixed and attached to an electrical power source, while the other (moving) end carries an electrical contact. In adjustable thermostats another contact is positioned with a regulating knob or lever. The position so set controls the regulated temperature, called the set point.
Some thermostats use a mercury switch connected to both electrical leads. The angle of the entire mechanism is adjustable to control the set point of the thermostat.
The electrical contacts may control the power directly (as in a household iron) or indirectly, switching electrical power through a relay or the supply of natural gas or fuel oil through an electrically operated valve. In some natural gas heaters the power may be provided with a thermocouple that is heated by a pilot light (a small, continuously burning, flame). In devices without pilot lights for ignition (as in most modern gas clothes dryers and some natural gas heaters and decorative fireplaces) the power for the contacts is provided by reduced household electrical power that operates a relay controlling an electronic ignitor, either a resistance heater or an electrically powered spark generating device.
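The strip does all of this mechanically, but the set-point behaviour described above can be sketched in a few lines of code to make the logic explicit. This is only an illustrative abstraction: the function name, the temperature values and the small hysteresis band (added to avoid rapid on/off chatter) are assumptions, not details taken from the article.

```python
# Minimal abstraction of a heating thermostat's set-point behaviour: the
# "contact" closes (heater on) when the temperature falls below the set point
# and opens again once it rises past it. The hysteresis margin is an assumed
# detail, not something described in the text above.

def thermostat_contact_closed(temp_c, set_point_c, hysteresis_c=0.5, currently_closed=False):
    """Return True if the heating contact should be closed (heater on)."""
    if currently_closed:
        # Keep heating until the temperature overshoots the set point by the margin.
        return temp_c < set_point_c + hysteresis_c
    # Start heating only once the temperature drops below the set point by the margin.
    return temp_c < set_point_c - hysteresis_c

# Track the contact state over a series of example readings
state = False
for reading in [19.0, 19.4, 19.6, 20.4, 20.6, 20.2, 19.3]:
    state = thermostat_contact_closed(reading, set_point_c=20.0, currently_closed=state)
    print(f"{reading:4.1f} C -> heater {'ON' if state else 'OFF'}")
```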
A direct indicating dial thermometer (such as a patio thermometer or a meat thermometer) uses a bimetallic strip wrapped into a coil. One end of the coil is fixed to the housing of the device and the other drives an indicating needle. A bimetallic strip is also used in a recording thermometer. Breguet's thermometer consists of a tri-metallic helix.
Bimetal strips are used in miniature circuit breakers to protect circuits from excess current. A coil of wire is used to heat a bimetal strip, which bends and operates a linkage that unlatches a spring-operated contact. This interrupts the circuit and can be reset when the bimetal strip has cooled down.
Bimetal strips are also used in time-delay relays, lamp flashers, and fluorescent lamp starters. In some devices the current running directly through the bimetal strip is sufficient to heat it and operate contacts directly.
Curvature of a Bimetallic Beam:

$$\kappa = \frac{6 E_1 E_2 (h_1 + h_2)\, h_1 h_2\, \varepsilon}{E_1^2 h_1^4 + 4 E_1 E_2 h_1^3 h_2 + 6 E_1 E_2 h_1^2 h_2^2 + 4 E_1 E_2 h_1 h_2^3 + E_2^2 h_2^4}$$

Where $E_1$ and $h_1$ are the Young's Modulus and height of Material One and $E_2$ and $h_2$ are the Young's Modulus and height of Material Two. $\varepsilon$ is the misfit strain, calculated by:

$$\varepsilon = (\alpha_1 - \alpha_2)\,\Delta T$$

Where α1 is the Coefficient of Thermal Expansion of Material One and α2 is the Coefficient of Thermal Expansion of Material Two. ΔT is the current temperature minus the reference temperature (the temperature where the beam has no flexure).
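As a rough numerical illustration of the curvature relation above, the sketch below evaluates it for an assumed copper/steel strip. The moduli, expansion coefficients, layer thicknesses and temperature change are typical handbook-style values supplied here as assumptions, not figures from the article, and the sign of the result only indicates the bending direction.

```python
def bimetal_curvature(E1, h1, alpha1, E2, h2, alpha2, delta_T):
    """Curvature (1/m) of a two-layer strip for a temperature change delta_T (K),
    using the beam relation quoted above."""
    eps = (alpha1 - alpha2) * delta_T                      # misfit strain
    num = 6 * E1 * E2 * (h1 + h2) * h1 * h2 * eps
    den = (E1**2 * h1**4 + 4 * E1 * E2 * h1**3 * h2
           + 6 * E1 * E2 * h1**2 * h2**2
           + 4 * E1 * E2 * h1 * h2**3 + E2**2 * h2**4)
    return num / den

# Assumed, typical values for a copper/steel strip (not from the article):
E_cu, a_cu = 117e9, 17e-6     # Pa, 1/K
E_st, a_st = 200e9, 12e-6     # Pa, 1/K
h = 0.5e-3                    # each layer 0.5 mm thick
kappa = bimetal_curvature(E_cu, h, a_cu, E_st, h, a_st, delta_T=50.0)
print(f"curvature = {kappa:.3f} 1/m, bend radius = {1/kappa:.2f} m")
```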
- Nitinol - a shape-memory alloy
- Video of a circular bimetallic wire powering a small motor with iced water. Accessed February 2011.
- Sobel, Dava (1995). Longitude. London: Fourth Estate. p. 103. ISBN 0-00-721446-4. "One of the inventions Harrison introduced in H-3... is called... a bi-metallic strip."
- Clyne, TW. “Residual stresses in surface coatings and their effects on interfacial debonding.” Key Engineering Materials (Switzerland). Vol. 116-117, pp. 307-330. 1996
- Timoshenko, J. Opt. Soc. Am. 11, 233 (1925)
65 to 60 million years ago: As dinosaurs die out, shallow inland sea drains away. Rockies and Black Hills emerge. Jungle growth transforms dark mud and shale of former sea bed to yellow soil.
37 to 23 million years ago: Periodic floods deposit layers of mud and volcanic ash. Many species of mammals roam the area, their fossils leaving a clear record of the Oligocene Epoch ("golden age of mammals").
500,000 years ago: Erosion from wind, rain, frost carves the soft rocks into unusual formations, a process that continues.
12,000 years ago: First humans ("the mammoth hunters") arrive.
1700s A.D.: Lakota establish first long-term settlement.
1890: Army kills 200 Lakota followers of Chief Big Foot at Wounded Knee.
Early 1900s: Homesteaders and ranchers settle the area.
1939: Federal government designates Badlands a national monument as last of the settlers move on.
1976: Government acquires 133,300 more acres in the area, including part of the Pine Ridge Indian Reservation.
1978: Badlands declared a national park.
Biogeochemist: "... perhaps most likely explanation is that increasing temperatures have increased rates of decomposition of soil organic matter, which has increased the flow of CO2. If true, this is an important finding: that a positive feedback to climate change is already occurring at a detectable level in soils."
One of the single greatest concerns of climate scientists is that human-caused warming will cause amplifying feedbacks in the carbon-cycle. Such positive feedbacks, whereby an initial warming releases carbon into the air that causes more warming, would increase both the speed and scale of climate change, greatly complicating both mitigation and adaptation.
The most worrisome amplifying feedback is the defrosting of the tundra (see “Science stunner: Vast East Siberian Arctic Shelf methane stores destabilizing and venting”). Another major, related feedback now appears to be soil respiration, whereby plants and microbes in the soil give off more carbon dioxide as the planet warms.
As Nature reports (article here, study here, subs. req’d), a review of 439 studies around the world — including 306 performed from 1989 to 2008 — found “soil respiration had increased by about 0.1% per year between 1989 and 2008, the span when soil measurement techniques had become standardized.” Physorg.com interviewed the lead author, who said bluntly:
“There’s a big pulse of carbon dioxide coming off of the surface of the soil everywhere in the world,” said ecologist Ben Bond-Lamberty of the Department of Energy’s Pacific Northwest National Laboratory. “We weren’t sure if we’d be able to measure it going into this analysis, but we did find a response to temperature.”
The increase in carbon dioxide given off by soils — about 0.1 petagram (100 million metric tons) per year since 1989 — won’t contribute to the greenhouse effect unless it comes from carbon that had been locked away out of the system for a long time, such as in Arctic tundra. This analysis could not distinguish whether the carbon was coming from old stores or from vegetation growing faster due to a warmer climate. But other lines of evidence suggest warming is unlocking old carbon, said Bond-Lamberty, so it will be important to determine the sources of extra carbon.
Indeed the study itself concludes:
The available data are, however, consistent with an acceleration of the terrestrial carbon cycle in response to global climate change.
Moreover, a major study in the February issue of the journal Ecology by Finnish researchers, “Temperature sensitivity of soil carbon fractions in boreal forest soil,” has a similar conclusion. The Finnish Environment Institute, which led the study, explained the results in a release, “Soil contributes to climate warming more than expected – Finnish research shows a flaw in climate models“:
According to the results, the climatic warming will inevitably lead to smaller carbon storage in soil and to higher carbon dioxide emissions from forests. These emissions will further warm up the climate, and as a consequence the emissions will again increase. This interaction between the carbon dioxide emissions from soil and the warming of climate will accelerate the climate change.
The present climate models underestimate the increase of carbon dioxide emissions from soil in a warmer climate. Thereby they also underestimate the accelerating impact of the largest carbon storage in forests on the climate change. This result is also essential with respect to the climate policy measures concerning forests. The carbon storage of forests is, more than previously assumed, sensitive to climatic warming, and the carbon sink capacity of forests is endangered. To maintain the carbon storage, the accumulation of organic material in forests should increase. However, this is not compatible with the present bioenergy goals for forests and with the more and more intensive harvesting of biomass in forests.
Returning to the Nature study, the review was quite comprehensive:
They compiled data about how much carbon dioxide has leaked from plants and microbes in soil in an openly available database. To maintain consistency, they selected only data that scientists collected via the now-standard methods of gas chromatography and infrared gas analysis. The duo compared 1,434 soil carbon data points from the studies with temperature and precipitation data in the geographic regions from other climate research databases.
After subjecting their comparisons to statistical analysis, the researchers found that the total amount of carbon dioxide being emitted from soil in 2008 was more than in 1989. In addition, the rise in global temperatures correlated with the rise in global carbon flux.
And the study also confirmed worries about the unlocking of carbon in the permafrost:
Previous climate change research shows that Arctic zones have a lot more carbon locked away than other regions. Using the complete set of data collected from the studies, the team estimated that the carbon released in northern — also called boreal — and Arctic regions rose by about 7 percent; in temperate regions by about 2 percent; and in tropical regions by about 3 percent, showing a trend consistent with other work.
The researchers made clear that more research needs to be done to make definitive conclusions about exactly what is happening to soils around the world. Yet as the Nature story notes:
“There are a few plausible explanations for this trend, but the most tempting, and perhaps most likely explanation is that increasing temperatures have increased rates of decomposition of soil organic matter, which has increased the flow of CO2,” says Eric Davidson, a biogeochemist at the Woods Hole Research Center in Falmouth, Massachusetts. “If true, this is an important finding: that a positive feedback to climate change is already occurring at a detectable level in soils.”
As I noted in the methane post, the National Science Foundation press release (click here), warned “Release of even a fraction of the methane stored in the shelf could trigger abrupt climate warming.” The NSF is normally a very staid organization. If they are worried, everybody should be.
We are simply playing with nitroglycerin to risk crossing tipping points that could accelerate multiple amplifying feedbacks:
UPDATE: I would note that we’ve only warmed about 1°F over the past half-century (and indeed, far less than that over the time span of the 306 recent studies the form the basis of the primary conclusion). We are headed to 9°F warming on our current emissions path. The few studies that look at such emissions paths and attempt to model carbon cycle feedbacks including soil find they can add as much as 250 ppm and 2.7°F warming this century (see “Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model,” subs. req’d). Indeed, one very recent analysis of a high emissions, high feedback scenario finds impacts that are almost unimaginable by mid-century (see UK Met Office: Catastrophic climate change, 13-18°F over most of U.S. and 27°F in the Arctic, could happen in 50 years, but “we do have time to stop it if we cut greenhouse gas emissions soon”).
It is increasingly clear that if the world strays significantly above 450 ppm atmospheric concentrations of carbon dioxide for any length of time, we will find it unimaginably difficult to stop short of 800 to 1000 ppm, which would inflict on countless future generations Hell and High Water. |
4 Formation of the Frolich electron pairs
- Important numerical experiments carried out by Greenspan provide strong confirmation of this magnetic interaction and the attraction it produces between anti-parallel electron pairs.
- In fact, the new order is also present when the material is superconducting; it had been overlooked before, masked by the behavior of superconducting electron pairs.
- The new calculations show in detail what other theorists previously sketched: that magnesium diboride contains two distinct families of electron pairs, one in which the electrons are weakly coupled and one in which they're strongly joined.
- In superconducting materials, electrons form pairs, called Cooper pairs, below a critical temperature and these electron pairs
- Therefore, the choice for the energy fluctuation time during a spontaneous symmetry breaking phase transition of electron pairs is taken to be the Planck time [T.
- Compared with transitions of electron pairs in super-conducting metals, which suddenly fall apart at certain temperatures, the rare earth transitions are "smeared," he says.
- Some theorists have proposed that magnetic interactions between the electrons and copper atoms play a key role in forging electron pairs.
- In this case, the wave function is spherical, indicating that the electron pairs have an equal chance of moving in any direction.
- The researchers determined the binding force between superconducting electron pairs by measuring differences in the energy and direction of electrons emitted from a material in its normal and superconducting states.
- Although theorists are certain that pairing occurs, they have so far been unable to agree on what mechanism leads to the formation of electron pairs in these materials.
- Showing how the electronic structure affects kink mobility enabled Gilman to calculate the amount of stress needed to form the kinks, break up electron pairs, and move the kinks.
USDA Hardiness Zone 5 has cold winters with temperatures reaching between -10 and -20 degrees Fahrenheit on a regular basis. The growing season, measured from the average date of the last frost in spring to the average date of the first frost in fall, is shorter than in zone 6 and higher zones but longer than in zones 1 through 4. Vegetables grow and, for the most part, have to ripen to maturity within the frost-free time period. Some vegetables such as cabbages, carrots, other root vegetables and Brussels sprouts will tolerate light frosts.
Start Seeds Inside
Begin cool season veggies such as kale, spinach, chard and lettuces inside to get a jump start on spring. The plants will be ready to pop in the garden as soon as the chance of frost is over. Start the cool season vegetables six weeks before the average last date of frost. Harden them off by leaving them outside for increasingly longer periods of time before planting in the ground.
Start warm season vegetables inside as soon as the cold season ones are ready to plant in the garden. Warm season veggies need nights above 60 degrees and days above 70 degrees to do well.
Use fresh potting soil when starting seeds to avoid damping off, a fungus in the soil that kills seedlings. The seedlings will flop over and die after they're an inch or so high. Don't over-water the seedlings, to avoid damping off.
Cold frames are boxes with sides about 12 to 18 inches high with tops of glass. The glass increases the temperature inside the box much in the same way the windows of a car make the car hotter during summer. Cold frames are used to begin the growing season earlier before the last date of frost in the spring and to extend it beyond the first frost in fall.
Seedlings in pots are placed inside the cold frame. Since the cold frame is warmer than the outside, the seedlings grow faster. When the weather is warm enough, transplant the seedlings to the garden. Keep the glass open for a few days before transplanting to get the seedlings used to the cooler temperatures.
Extend the growing season by planting leafy greens in pots inside the cold frame in late August. They'll have a chance to grow to harvest size before it gets too cold.
Keep the frames warmer by placing gallon jugs of water wrapped in black plastic inside the cold frame. The jugs will absorb heat during the day and release it at night.
Short Maturity Warm Season Vegetables
Zone 5 has a shorter growing season, so choose warm season vegetables that have a shorter time to maturity. If the growing season is only 100 days long, it doesn't make sense to plant a variety of corn that takes 90 days to mature: corn can't be planted while the ground is still cold in early spring because it won't germinate, so it has to go in later in the spring, which shaves a few weeks off the usable season. Other warm season vegetables include tomatoes, eggplant, peppers, beans and squash.
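As a rough illustration of this timing constraint (not from the original article; the frost dates, the two-week warm-soil delay, and the variety names below are assumed example values), a short script can check whether a variety's days to maturity fits inside a zone 5 frost-free window:

```python
from datetime import date, timedelta

# Hypothetical zone 5 frost dates -- check local averages for real planning.
last_spring_frost = date(2024, 5, 25)
first_fall_frost = date(2024, 9, 5)

# Warm-season seeds often wait until the soil warms; assume ~2 weeks after last frost.
soil_warm_delay = timedelta(days=14)

def fits_season(days_to_maturity: int) -> bool:
    """Return True if a variety can ripen before the first fall frost."""
    planting_date = last_spring_frost + soil_warm_delay
    harvest_date = planting_date + timedelta(days=days_to_maturity)
    return harvest_date <= first_fall_frost

for name, days in [("90-day corn", 90), ("65-day corn", 65), ("75-day tomato", 75)]:
    print(f"{name}: {'fits' if fits_season(days) else 'too long for this season'}")
```

With these assumed dates, the 90-day corn misses the window while the shorter-season varieties fit, which is the trade-off the paragraph above describes.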
Black or Clear Plastic
Accelerate the germination of vegetables that don't take well to transplanting such as corn, peas and beans by laying clear or black plastic over the row after it has been planted. Even clear plastic will keep the ground warmer, which helps seeds germinate faster. |
Definition of Trope
A trope is any word used in a figurative sense (i.e., a figure of speech) or a recurring theme or device in a work of literature. The first definition of trope can refer to numerous types of figures of speech, which we explore below. The second definition of trope can be slightly derogatory in that a recurring theme in a certain genre can become cliché, and thus stale and overused. In this sense, a trope is similar to a convention of a genre, such as the common theme of a “dark lord” in the genre of fantasy or the appearance of a literal ticking bomb in an action or adventure story. The majority of this article will delve into the first definition of trope and the way that different tropes function in literature.
The word trope comes from the Greek word τρόπος (tropos), which means “a turn, direction, or way.” It entered English via Latin in the 1530s with the meaning “a figure of speech,” as it developed the denotation of turning a word from its literal meaning to a figurative one.
Types of Tropes
There are many different figures of speech. The following is an incomplete list of trope examples:
- Allegory: An allegory is a work of art, such as a story or painting, in which the characters, images, and/or events act as symbols.
- Antanaclasis: Antanaclasis is to repeat a word or phrase but with a different meaning than in the first case.
- Euphemism: A euphemism is a polite or mild word or expression used to refer to something embarrassing, taboo, or unpleasant.
- Irony: Irony is a contrast or incongruity between expectations for a situation and what is reality.
- Meiosis: Meiosis is a figure of speech that minimizes the importance of something through euphemism.
- Metaphor: A metaphor is a rhetorical figure of speech that compares two subjects without the use of “like” or “as.”
- Metonymy: Metonymy is a figure of speech in which something is called by a new name that is related in meaning to the original thing or concept.
- Synecdoche: Synecdoche is a figure of speech in which a word or phrase that refers to a part of something is substituted to stand in for the whole, or vice versa.
The American literary theorist Kenneth Burke described “the four master tropes” to be metaphor, metonymy, synecdoche, and irony.
Common Examples of Trope
There are many different examples of tropes that we use in common speech. For instance, there are many pun examples which contain antanaclasis, such as the famous one-liner “Time flies like an arrow; fruit flies like a banana.”
Here are some other humorous quotes to demonstrate different types of tropes:
CUSTOMER: He’s not pinin’! He’s passed on! This parrot is no more! He has ceased to be! He’s expired and gone to meet his maker! He’s a stiff! Bereft of life, he rests in peace! If you hadn’t nailed him to the perch he’d be pushing up the daisies!
His metabolic processes are now history! He’s off the twig!
He’s kicked the bucket, he’s shuffled off his mortal coil, run down the curtain and joined the bleeding choir invisible!!
- Metaphor in the “Dead Parrot Sketch” from Monty Python
[after slicing one of the Black Knight’s arms off]
King Arthur: Now, stand aside, worthy adversary!
Black Knight: ‘Tis but a scratch!
King Arthur: A scratch? Your arm’s off!
King Arthur: [after Arthur’s cut off both of the Black Knight’s arms] Look, you stupid bastard, you’ve got no arms left!
Black Knight: Yes I have.
King Arthur: Look!
Black Knight: It’s just a flesh wound.
- Verbal irony in Monty Python and the Holy Grail
Significance of Trope in Literature
Trope examples are both very prevalent and very important in literature. Figurative language is a huge part of all forms of literature, whether poetry, prose, or drama. The goal of a writer using figurative language is to push the reader or listener’s understanding of a certain word or words. This makes the language used more memorable and more unique. Writers use different figures of speech for many different reasons and in many different ways, as we will see below.
When considering the second definition of trope, i.e., a recurring theme or device in a work of literature, authors will often choose to use a trope to establish which genre they are working in. Even though a certain theme might be overused in fantasy, it can be helpful to use these same themes to make the reader aware of what kind of book he or she is reading. For example, dragons, royalty, and magic are common in fantasy stories, and yet they continue to be used so as to place a narrative in that fantasy realm.
Examples of Trope in Literature
Example #1: Irony
ANTONY: The noble Brutus
Hath told you Caesar was ambitious.
If it were so, it was a grievous fault,
And grievously hath Caesar answered it.
Here, under leave of Brutus and the rest—
For Brutus is an honorable man;
So are they all, all honorable men—
Come I to speak in Caesar’s funeral.
He was my friend, faithful and just to me.
But Brutus says he was ambitious,
And Brutus is an honorable man.
(Julius Caesar by William Shakespeare)
In his eulogy for Caesar, the character Antony repeatedly says that “Brutus is an honorable man.” This is a clear case of verbal irony from William Shakespeare’s tragedy Julius Caesar because Brutus was one of the friends who stabbed Caesar. Antony does not consider Brutus to be honorable; in fact, he thinks him anything but. Therefore, this is an example of trope because Antony is twisting language and meaning.
Example #2: Antanaclasis
OTHELLO: It is the cause, it is the cause, my soul,–
Let me not name it to you, you chaste stars!–
It is the cause. Yet I’ll not shed her blood;
Nor scar that whiter skin of hers than snow,
And smooth as monumental alabaster.
Yet she must die, else she’ll betray more men.
Put out the light, and then put out the light:
(Othello by William Shakespeare)
When Othello considers killing his wife Desdemona, he uses an example of antanaclasis with the word “light.” In this case, he will literally put out the lights in her room, then figuratively “put out the light” by killing her.
Example #3: Synecdoche
The party preserved a dignified homogeneity, and assumed to itself the function of representing the staid nobility of the countryside—East Egg condescending to West Egg, and carefully on guard against its spectroscopic gayety.
(The Great Gatsby by F. Scott Fitzgerald)
The theme of class and wealth is integral to the chief conflict in F. Scott Fitzgerald’s The Great Gatsby. In the above excerpt Fitzgerald uses a synecdoche example by referring to different groups of people just by the place they live: East Egg and West Egg. These place names stand in for the whole.
Example #4: Euphemism
The Ministry of Truth, which concerned itself with news, entertainment, education, and the fine arts. The Ministry of Peace, which concerned itself with war. The Ministry of Love, which maintained law and order. And the Ministry of Plenty, which was responsible for economic affairs. Their names, in Newspeak: Minitrue, Minipax, Miniluv, and Miniplenty.
(1984 by George Orwell)
In George Orwell’s famous dystopia 1984, there is purposeful euphemism on the part of the government. The four main branches of government are given names directly opposite to their true purpose. The euphemisms conceal their actual doings and paper over the truth. This is a more sinister twisting of language.
Example #5: Metaphor
He says, you have to study and learn so that you can make up your own mind about history and everything else but you can’t make up an empty mind. Stock your mind, stock your mind. You might be poor, your shoes might be broken, but your mind is a palace.
(Angela’s Ashes by Frank McCourt)
From Frank McCourt’s memoir Angela’s Ashes we find a beautiful metaphor example. The family is quite poor, but the father reassures his children that “your mind is a palace.” This important metaphor is meant to reassure them that earthly goods do not determine their true worth, and that the mind is a far more precious treasure.
Test Your Knowledge of Trope
1. Which of the following is not a trope definition?
A. A figure of speech.
B. A recurring theme or convention.
C. An unexpected turn in a conversation.
2. Which of the following tropes appears in the following quote from William Shakespeare’s Hamlet?
MARCELLUS: Something is rotten in the state of Denmark.
3. Which of the following is not a type of trope?
Information about the Chippewa Indians such as history, language, and culture.
The Chippewa Indians are one of the largest Native American groups in North America. Over the years, this first nation has seen a rapid decrease in the number of full-blooded tribal members, and assimilation into American life and culture has contributed to that decline. The Chippewa Indians primarily inhabited the northern regions of the United States, in states such as Minnesota, Wisconsin, and Michigan. Additionally, a few bands of the Chippewa tribe inhabit parts of southern Canada. Together, there are approximately 150 different bands or groups of Chippewa Native Americans.
Today, Chippewas Indians are organized into communities, and each individual community resides on their own reservation in the United States or Canada. Because each tribe is individually governed by its own government, these communities have their own school systems, law enforcement officers, etc. In essence, the reservation is like a small, independent country.
To ensure that Indians receive equal treatment, several communities have established coalitions. For example, the American Indian Movement founded in Minneapolis, MN in 1968 fought for better rights for Native Americans. Since then, several smaller organizations have materialized. These brought about improved living conditions, better schools, and protection against abuse.
The majority of Chippewa Indians speak the English language. Nonetheless, a large number of Chippewa also speak their native tongue – the Ojibway language. Modern-day Chippewa Indians live much like other people. For example, young children attend school and are required to complete chores around the home. It is the responsibility of the father to train his children to hunt and fish. Husbands and fathers are the hunters, and their primary responsibility involves protecting the family. Wives normally work in the fields, care for the children, cook, and take care of the home. Both men and women participate in the harvest work.
More on this subject: Chippewa Indians |
The Effects: Human Health
Nutrient pollution and harmful algal blooms create toxins and compounds that are dangerous for your health. There are several ways that people (and pets) can be exposed to these compounds.
Direct exposure to toxic algae
Contact with water affected by a harmful algal bloom can cause stomach or liver illness.
Nitrates in drinking water
Nitrate, a compound found in fertilizer, often contaminates drinking water in agricultural areas. Infants who drink water too high in nitrates can become seriously ill and even die. Symptoms include shortness of breath and blue-tinted skin, a condition known as blue baby syndrome.
A 2010 report on nutrients in ground and surface water by the U.S. Geological Survey found that nitrates were too high in 64 percent of shallow monitoring wells in agricultural and urban areas.
Byproducts of water treatment
Stormwater runoff carries nutrients directly into rivers, lakes and reservoirs which serve as sources of drinking water for many people. When disinfectants used to treat drinking water react with toxic algae, harmful chemicals called dioxins can be created. These byproducts have been linked to reproductive and developmental health risks and even cancer. |
Lesson 6: Presenting and Playing the Hole
Guide students through the steps to create and deliver an oral presentation, and end with playing their golf hole.
Download Lesson 6 (68KB)
This last lesson helps students reflect upon and show off all the things they’ve learned. Here, students will create and present their golf-hole designs -- either to a select group of people or to the class. Finally, if resources are available, students will set up and play their course for the ultimate satisfaction.
Lesson Objectives and Materials
- To practice presentation delivery skills using an appropriate volume and tone, making proper eye contact and gestures, and with good posture
- To practice presentation skills such as ordering information logically and coherently, using appropriate language, and working with a visual aid
- To practice analytical and interpersonal skills by giving and receiving critiques
- PowerPoint software
- items for constructing the course
Get your students interested in the lesson by asking them the following questions:
- What characteristics make for a strong speech? For example, what qualities do you like in teachers or other speakers when they present something to you?
- What have you learned about writing English papers, such as how to create a coherent essay, use transitions, and build unified themes?
- How do you translate a good paper into a good speech?
Project Application: Oral Presentation and Critique
Tell students that they’ll be presenting their projects to an audience (for example, to members of the community or to the class, depending on the final outcome of the project). Students will need several class sessions to prepare by getting feedback and practicing their presentations.
Get the ball rolling by asking your students to follow these steps:
1. Have them brainstorm as a group about what they should include in their presentations. Then fill in the gaps. The presentation should include the original presentation board and animation. Talking points might include
- Why the student picked his or her theme
- Challenges and how the student overcame them
- The math or design techniques the student used
- The outside research the student did
- Experiences with partners (classmates or outside mentors)
- Points of pride in the design
- The presentation’s length -- about five to fifteen minutes
2. Have students establish, individually or in a group, a logical order of topics (general to specific, first step to last step, etc.).
3. Ask students to draft an outline and to practice delivering the presentation to others. Put students in pairs or in small groups, and have them work on critiquing one another by offering constructive criticism.
4. Determine what information students can put on PowerPoint slides to make the presentation more effective.
5. Have students create a PowerPoint presentation (if the software is available), but urge them to limit the text on any one slide to three lines at the most, with about five words per line. Have students link their presentations to their 3D computer model.
6. Let students practice delivering the final presentation in small or large groups. Encourage feedback loops.
7. Finally, have students make their final presentations.
At the end of this lesson, you should have a good idea of each student’s skills in creating a cohesive presentation, presenting it, and in the other concepts covered. Here are some guiding points to help assess each student.
Download Grading Rubric (364KB)
The student’s mastery of the subject matter is
Excellent: Students present with a strong volume and an enthusiastic tone. They explain their project clearly and persuasively, discussing both the process and product. Students incorporate presentation software such as PowerPoint to highlight main ideas or provide extra visuals, and they’ve done outside research. Students participate in the critique by covering the strengths and weaknesses of others’ projects, and their criticism shows an awareness of the goals of the project and presentation.
Good: Students present using strong volume and a good tone. They explain their project clearly, discussing both the process and product. Students incorporate presentation software such as PowerPoint to highlight main ideas or provide extra visuals, but do so inexpertly. Students participate in the critique, and their criticism shows some awareness of the goals of the project and presentation.
Fair: Students explain their project. They might lack adequate content and resources, such as presentation software. They participate only briefly in the critique, but their criticism is on topic.
Poor: Students are difficult to understand due to one or more of the following issues: poor language or enunciation, a low volume, or incoherence. Students fail to participate in the critique or do so disruptively.
Project Wrap-Up: Build Out and Play
If the resources and time are available, wrap up the project by building the actual course and letting the students play it. In the original program, the school partnered with local architects and contractors to construct the course. However, you can build student holes through different means.
Here are some tips for building the course:
- Work with the school’s art department or woodshop or with parent groups to collect the proper materials.
- Choose which holes to construct based on a class or school vote, or ask participating partners to judge the best holes.
- Build the course off-site, such as at a fundraising event, or onsite in the school gym, parking lot, multipurpose room, or classroom.
For more tips on adapting the whole project, visit the Troubleshooting page. |
Every once in a while, science news seems to walk a fine line between fantastic and fantasy. Em Drive, the topic of our latest podcast, is a recent example of exactly that!
Em Drive (pronounced “M” drive) also known as RF resonant cavity thruster is a hypothetical propulsion mechanism designed in part for space travel. Em Drive was proposed by Roger Shawyer, a British aerospace engineer who has a background in defense work as well as experience as a consultant on the Galileo project (a European version of the GPS system).
Em Drive “uses a magnetron to produce microwaves which are directed into a metallic, fully enclosed conically tapered high Q resonant cavity with a greater area at one end of the device, and a dielectric resonator in front of the narrower end. Shawyer claims that the device generates a directional thrust toward the narrow end of the tapered cavity. The device (engine) requires an electrical power source to produce its reflecting internal microwaves but does not have any moving parts or require any reaction mass as fuel.” In layman’s terms, the Em Drive bounces microwaves around inside a closed metal cone shaped like an ice cream cone, and Shawyer claims this produces thrust in one direction without using any propellant.
To learn more about Em Drive, and why you should be skeptical, have a listen to the podcast! Thanks for listening. |
Ecologists are increasingly using drones to gather data, using remotely piloted aircraft to reach otherwise inaccessible places, and take samples or record information for later processing.
The latest set of experiments, based around the recording and counting of wildlife populations, have shown that drone-gathered data has a clear advantage over traditional boots-on-the-ground methods.
Not only does this data bode well for environmentalists, but their methods of data collection provide yet another valuable case study in the ongoing testing and development of remote monitoring, an area of great interest to large-scale mining operations.
To prove the viability of drone data, Ecologists from the University of Adelaide in South Australia created the #EpicDuckChallenge, which involved deploying thousands of plastic replica ducks on an Adelaide beach, and then testing various methods of tallying them up.
Assessing the accuracy of wildlife count data is hard. It is difficult to be sure of the true number of animals present in a group of wild animals. To overcome this uncertainty, the team created life-sized, replica seabird colonies, each with a known number of individuals.
From the optimum vantage and in ideal weather conditions, experienced wildlife spotters independently counted the colonies from the ground using binoculars and telescopes. At the same time, a drone captured photographs of each colony from a range of heights. Citizen scientists then used these images to tally the number of animals they could see.
Counts of birds in drone-derived imagery were better than those made by wildlife observers on the ground. The drone approach was more precise and more accurate – it produced counts that were consistently closer to the true number of individuals.
The difference between the results was significant. Drone-derived data were between 43% and 96% more accurate than ground counts, depending on the height (and thus image quality) of the drone.
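To make the accuracy comparison concrete, here is a minimal sketch of how counts from different methods can be scored against a colony of known size, as in the replica-seabird setup; it is not the researchers' actual analysis, and the counts and heights below are invented for illustration:

```python
# Hypothetical counts of a replica colony with a known size (all values invented).
true_count = 1000
counts = {
    "ground observer": 870,
    "drone at 120 m": 990,
    "drone at 30 m": 998,
}

for method, counted in counts.items():
    # Absolute percentage error relative to the known true number of replicas.
    error_pct = abs(counted - true_count) / true_count * 100
    print(f"{method}: counted {counted}, absolute error {error_pct:.1f}%")
```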
The use of drones in mining takes advantage of the same strengths seen in wildlife ecology: the vantage point and speed of drones allow them to gather data faster and more efficiently than ground-based methods, helping companies secure areas and ensure worker safety before detonations or heavy equipment movement.
As drone technology improves, companies are hoping to be able to produce comprehensive 3D scans of entire mining sites, allowing for better modeling and monitoring. |
Encouraged by the government to “back their men at the front,” millions of women either entered the workforce for the first time during World War II or used wartime opportunities to move into better paying or higher skilled jobs. Although they enjoyed the higher pay of wartime industries, women often faced opposition from unions and male workers, and were paid lower wages than their male counterparts in sex-segregated jobs. Still, working in wartime industries expanded the horizons and enhanced the self-confidence of millions of women. The women and men in this picture are seen beginning to unwind at the end of their shift in a Richmond, California, shipyard.
Source: Dorothea Lange—Prints and Photographs Division, Library of Congress. |
yangban
yangban, (Korean: “two groups”), the highest social class of the Yi dynasty (1392–1910) of Korea. It consisted of both munban, or civilian officials, and muban, or military officials. The term yangban originated in the Koryŏ dynasty (935–1392), when civil service examinations were held under the two categories of munkwa (civilian) and mukwa (military). By the Yi dynasty, the term came to designate the entire landholding class. The Yi dynasty had a rigidly hierarchical class system composed broadly of four classes: yangban, chungin (intermediate class), sangmin (common people), and ch’ŏnmin (lowborn people).
The yangban were granted many privileges by the state, including land and stipends, according to their official grade and status. They alone were entitled to take civil service examinations and were exempt from military duty and corvée labour. They were even permitted to have their slaves serve their own terms of punishment.
The rules to which the yangban were subjected were severe. Unless at least one of their family members within three successive generations was admitted to the officialdom, they were deprived of their yangban status. They were expected always to exhibit courtesy and righteousness and to be prepared to sacrifice their lives for a greater cause. No matter how poor, they were not supposed to show a shred of meanness in their behaviour.
The yangban system, corrupted and deemed pernicious to social development, was discarded in 1894, when a series of modern reforms were effected.
Early Childhood Care Providers and Educators play a crucial role in the development of the children they care for. Recent research studies have shown that early childhood experiences help determine brain structure, shaping thinking and learning. In addition, children who are introduced to books and early literacy skills from a young age are more likely to succeed when they go to school.
Many raising readers books have an activity page printed into the back of the book. These activity pages support early literacy skill-building and were developed by Susan Bennett-Armistead, PhD, the Correll Professor of Early Literacy at the University of Maine.
Click the links below to download the calendar pages for each month/featured book. If you'd like a copy of the calendar, please email your name and mailing address to: [email protected].
January - Setting the Scene for Literacy
18 months: Time For a Hug
February - Creating a Love of Literacy
Newborn: A Kiss Means I Love You
March - Promoting Oral Language
4 months: Baby Look
April - Building Vocabulary
4 Year: Harold Finds A Voice
May - Learning How Text Works
2 months: Look Look Outside
June - Playing with Sounds of Language
24 months: Around The Neighborhood
July - Building World Knowledge
15 months: Who Am I? Farm Animals
August - Promoting Letter Recognition
9 months: Big Fat Hen
September - Connecting Letters and Sounds
12 months: Baby's Busy World
October - Building Genre Awareness
36 months: There Was A Tree
November - Building Comprehension
6 months: Peek-a-Who?
December - Promoting Writing
60 months: Tales For Growing Tall: A raising readers Collection for the Children of Maine
Note: Distribution of each title may not coincide with the month in which it is featured. raising readers books turn over on a rolling basis.
CURRENT raising readers Books
ACTIVITIES for raising readers Books
READING TIPS in English
Download READING TIPS in other languages
Download the "Emergent Literacy - Developmental Assessment Tool"
See Kids Health for information on early childhood development and children's health issues. |
You may have come across references to government “Green Papers” or “White Papers” and wondered what these are. What follows is a brief overview of both.
Green Papers, like white papers, originated in Great Britain. The term was coined by London newspapers from the colour of the covers of this type of document. The first “green” paper was introduced in the British House of Commons in 1967. It was “a statement by the Government not of policy already determined but of propositions put before the whole nation for discussion”. This document set the example for the continuing use of discussion or consultative papers.
In Great Britain, these documents are easily identifiable. They are published in green covers with the generic name “Green Paper” and their titles are listed in The Stationery Office (TSO) (formerly HMSO) Catalogues.
The difficulties in compiling a list of green papers put out by the Canadian federal government are numerous. In Canada, green covers are not used consistently so the colour of the cover cannot be used as a guide. Furthermore, the name “Green Paper” does not always appear in the title nor are these documents always tabled in the House of Commons or listed in any official source.
A green paper is taken to be an official document sponsored by Ministers of the Crown which is issued by government to invite public comment and discussion on an issue prior to policy formulation. A Green Paper is usually the first stage of setting new laws that the Government wants to bring in. Green Papers usually ask some big questions about policy direction and often give a broad indication of what the Government wants to achieve. They also provide an opportunity for the public to say what they think – they usually include some questions that the Government would like people to answer.
A recent example of a Green Paper is the UK Government’s Green Paper on Parliamentary Privilege, which you can download as a PDF here. If you take the time to do just that, you will see quite clearly how the Government has put forward some of the current issues surrounding parliamentary privilege, and asks a series of questions on each section in an attempt to solicit input from interested persons. These questions usually ask what people think or believe about the subject matter of the Paper.
A White Paper is the usual next step after the Green Paper consultation. The Government will review the submissions it received in response to its Green Paper, and then publish a White Paper. White Papers contain more detailed proposals about what the Government wants to achieve and proposals for Government legislation. They are usually based on a mixture of feedback from the Green Paper and additional research that Government departments have done. There are usually opportunities for the public to comment on White Papers. At this stage, the questions asked might focus more on the process of how to bring about the changes needed. Why are they called White Papers? Simply because they used to be printed on white paper. Now they tend to be more attractive and printed in a glossy format.
In Canada, the term white paper is more commonly applied to official documents presented by Ministers of the Crown which state and explain the government’s policy on a certain issue. They are not consultative documents. A recent example would be this White Paper explaining Canada’s Cyber Security Strategy (PDF).
A White Paper is not the same thing as a Draft bill. A Draft bill is a proposed bill and is usually accompanied by an extensive briefing paper explaining the rationale behind the bill. However, rather than being introduced directly in the House for first reading, a draft bill is published to enable consultation and pre-legislative scrutiny. This allows proposed changes to be made before the Bill’s formal introduction. By doing so, the Government may hope that this will make it easier for the bill to be adopted. Almost all Draft Bills are Government Bills. Government departments produce Draft Bills and issue them to interested parties. MPs and Lords can also consider them in committees. After consultation and pre-legislative scrutiny has taken place, the Government will review the recommendations of the committee which studied the bill, and incorporate some or all of these. The new bill will then be introduced formally in the House of Commons or the House of Lords. The draft bill is a further step between a White Paper and the actual legislation, but a draft bill can also be produced on its own, without the Green and White Paper steps. Similarly, a White Paper can lead to a bill rather than a draft bill, since the Government may feel that it has already done enough consultation.
The Parliament of Canada does not do draft bills.
To summarise, Green Papers usually put forward ideas for future government policy that are open to public discussion and consultation. White Papers generally state more definite intentions for government policy, and are also open to public consultation. Draft bills are actual pieces of legislation which are submitted to a committee for consultation and pre-legislative scrutiny before being introduced in the House, and may or may not be preceded by a Green and White Paper.
Breakfast is a crucial part of every child's day. It affects everything from memory to creativity in the classroom. Children who go hungry are more inclined to become distracted from learning at school, or give up more easily when faced with challenges. Healthy, well-rounded breakfasts are best for a child's maximum performance level.
Breakfast and Learning
Breakfast has been linked, extensively, to better performance by children in the classroom, according to at least two studies of schoolchildren performed by Tufts University psychologists. Eating breakfast also helps control appetite, keeping kids focused on learning throughout the day. It’s believed children’s smaller stature causes them to be more susceptible to the effects of overnight fasting -- including decreases in mental acuity -- and they may even be in greater need of breakfast than adults.
Kinds of Learning Impacted
According to a 2005 study published in "Physiology and Behavior," eating a healthy breakfast in the morning has beneficial effects on memory -- particularly short-term -- and attention, allowing children to more quickly and accurately retrieve information. Children who eat breakfast perform better on reading, arithmetic and problem-solving tests. Eating breakfast also positively affects endurance and creativity in the classroom, reports Abdullah Khan in his 2006 dissertation for Murdoch University on the relationship between breakfast and academic performance.
Breakfast affects more than direct learning; it also impacts behaviors surrounding learning in the classroom environment, reported Khan in his research. Consistent breakfast consumption is linked to better attendance and better classroom behavior and vigilance, which facilitates learning. Children may give up more easily in school if they’re feeling the negative effects of skipping breakfast. Khan's study showed young children are more susceptible to these effects than adolescents.
What kids eat in the morning also makes a difference. Psychologists at Tufts University, who published their findings in 2005 in "Physiology and Behavior," conducted experiments to determine how a variety of breakfast foods affected children’s performance at school. While children who had no breakfast at all performed worse on a series of tests, children who had whole grains with milk outperformed those who had low-fiber, high glycemic cereal -- like many popular, child-oriented cereals -- with milk in tests all across the board. Researchers pointed to the importance of the mixture of protein, fiber and complex carbohydrates in the morning.
A coronal mass ejection as viewed by NASA's Solar Dynamics Observatory on June 7, 2011.
Though pictures of the sun sure look fiery, the sun isn't on fire the way you might think, as when paper burns.
When a piece of paper is set on fire with a match, the atoms (mostly carbon, hydrogen, and oxygen) in the chemical compounds in the paper combine with the molecules of oxygen in the atmosphere to produce the chemical compounds carbon dioxide and water and to release heat and light. This is a chemical reaction that we call combustion.
The sun is carrying out a much different process called nuclear fusion. Each second the sun converts 700,000,000 tons of the element hydrogen into 695,000,000 tons of the element helium. This releases energy in the form of gamma rays. The gamma rays are mostly converted to light eventually. This process does not require oxygen. It does require incredibly high temperatures and pressures.
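The 5,000,000-ton difference between the hydrogen consumed and the helium produced each second is the mass that becomes energy. A rough back-of-the-envelope check with E = mc^2 (a sketch assuming metric tons and rounded constants, not an official figure) lands close to the Sun's measured power output of roughly 3.8 x 10^26 watts:

```python
# Rough check of the Sun's power output from the quoted mass figures.
hydrogen_in = 700_000_000 * 1000   # kg of hydrogen fused per second (metric tons -> kg)
helium_out = 695_000_000 * 1000    # kg of helium produced per second
c = 3.0e8                          # speed of light, m/s

mass_lost_per_second = hydrogen_in - helium_out   # about 5e9 kg/s
power = mass_lost_per_second * c**2               # E = m * c^2, in watts (J/s)

print(f"Mass converted to energy: {mass_lost_per_second:.1e} kg/s")
print(f"Power output: {power:.1e} W")  # ~4.5e26 W, the same order as the measured ~3.8e26 W
```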
The temperature at the core of the sun is about 15,600,000 degrees on the Kelvin temperature scale. The sun is 4.5 billion years old and has used up about one half of its hydrogen fuel supply.
A guest post by Ray Kinney
From ridge tops to reefs, environmental degradation has caused many salmon populations to decline to one to ten percent of former numbers. Young salmon survival in freshwater is only 2 to 5% from egg to smolt phase just before entering the ocean phase of their life cycle. Many causative effects for this decline are known, but many remain to be clarified. Politics often prevents adequate investigation of contaminant effects for water quality. Chronic low dose accumulative effects of toxic contaminants take a toll that is generally unrecognized by fisheries managers.
Our benevolent rainfall flows down out of the Coast Range to become, once again, part of the sea and the productivity of the salmon cycle of the near-shore ocean. Nutrients from the ocean, in the form of salmon and lamprey spawner carcasses, had fertilized our forests, streams, and rivers like an incoming tide for thousands of years. Our forest garden grew rich because of this tide of nutrients. Reduced numbers means reduced nutrients, which reduces development, growth, and survival abilities of the fish.
The land also nourishes the sea. Freshwater flows down out of the mountains, past our farms and towns, through the jetties, and out over the continental shelf. These nutrient tides over land and sea have been shaping salmon for thousands of years, providing diversity, fitness, and resilience to the young fish and other stream organisms that support the salmon cycle complexity.
For many hundreds of years humans have increasingly affected the quality of this complexity in ways that have stressed the fish. In the last two hundred years we have greatly increased pollution. Fish harvest levels increased unsustainably, while beaver and timber harvests altered the landscape stressing the salmon cycle. Increasing pollutants have contaminated the flow to the sea. Copious leaching rainfall and snowmelt dissolve and transport nutrients and contaminants down the river out of the Coast Range. Calcium and iron ride the waters downstream and out over the shelf during the winter, enriching the sea floor mud.
As upwelling conditions increase in the summer, much of this iron distributes northward with the currents and combines with nitrates to fertilize plankton blooms that feed the food chain for the salmon. Iron and nitrate are in shorter supply over much of the ocean and limit productivity in many parts of the ocean. Here, off of the Oregon coast, the iron leached from our soils provides an important key to salmon ocean productivity.
Large quantities of nitrate ride downstream through the freshwater, from red alder tree vegetation cover concentrations in our timberland. The red alder ‘fix’ nitrogen out of the air providing fertilizer to nearby trees increasing productivity. A large concentration of alder can produce too much nitrogen for close by trees to use and can then ‘spill over’ into the streams altering stream chemistry. This process can increase soil and stream acidity to accelerate leaching of exchangeable calcium and essential micronutrients out of the system too fast, resulting in deficiencies in the adjacent soils and water. These localized deficiencies can negatively affect sensitive forest and aquatic life which can then have detrimental impacts on salmon habitat and salmon population recovery. The helpful forest management of red alder needs to include improved ‘best management practices’ to avoid harmful ‘spillover’ of nitrate and hydrogen ions into our salmon habitat waters. Too much of a good thing can become more of a problem than a help to forest health and aquatic health. Human-caused increases of acidification pressures are increasing pollutant effects in salmon habitat more frequently and severely than the prehistoric aquatic conditions that the salmon ecology has been adapted to. The increased stresses pressure the salmon to change too rapidly for their plasticity and resilience to keep up with. The pace of our pollution has been too fast. Monitoring of contaminants is very important for understanding salmon decline pressures.
If we continue to allow toxic lead from lost fishing sinkers, bullets, boat anchors, and huge quantities of degrading lead paint flakes from bridges to flow down out of our low calcium watersheds, to contaminate the productivity of our freshwater, near-shore marine habitat, and marine protected areas, we will only continue to harm salmon in our waters from ridge tops to reefs. Sinkers get ground up in bedrock riverine potholes, exponentially increasing surface area for dissolving in the slightly acidic water.
If we continue to allow agricultural lands to add herbicides and other pesticide contaminants to our waters, and unfiltered stormwater pollutants from road runoff, we will continue to degrade the aquatic health and salmon recovery. If we don’t continually improve our questions, we are destined to have to live with decreasing quality of life. Does the affinity of lead for iron in the water cause them to bind together to carry the lead as colloid out to the iron-rich productive areas, causing toxic effects? Does the lead colloid passing down the river expose salmon gill and gut acidic microenvironments to increased uptake and toxic effects in these sensitive tissues? Lead exposure reduces fitness of salmon to survive in the ocean phase of their life cycle. Does the lead affect calcium utilization in pteropods and cocolithophores to make it even harder for these prey species to grow and reproduce in the face of increasing freshwater and ocean acidification?
We need to carefully monitor what contaminants flow out between the jetties to pollute the productivity of the near-shore ocean that we rely on for the economic, ecologic, and human health of our mid coast watersheds. And, we need to pointedly investigate contaminant effects in the high risk freshwater developmental life phase of the young salmon.
See: A. M. Farag, D. A. Nimick, B. Kimball, S. Church, D. Harper, and W. G. Brumbaugh (all with USGS), "Concentrations of Metals in Water, Sediment, Biofilm, Benthic Macroinvertebrates, and Fish in the Boulder River Watershed, Montana, and the Role of Colloids in Metal Uptake," Arch. Environ. Contam. Toxicol., 2007, 52, 397-409.
About the Author: Ray Kinney is a director for salmon habitat environmental assessment and restoration in the coastal mountains of Oregon. He has a driving interest in providing mountains and rivers for all of our children to play and learn in. His contributions to The Global Fool — posts and comments — are intended to represent his own personal views as a private resident of the Siuslaw watershed, and do not necessarily represent the views of either the Siuslaw Watershed Council or the Siuslaw Soil and Water Conservation District.
Author photo is by Bob Walter. |
Keep on the grass
07 December 2006
US researchers have found it is possible to grow crops for fuel in a way that results in a net reduction of CO2 in the atmosphere. Most biofuel systems are carbon neutral - the amount of CO2 emitted when energy is released from the crop is balanced by the uptake of CO2 during the crop's growth. The latest work shows that when native grasses are grown for fuel on poor soils, the overall impact is carbon-negative - more carbon is locked away into the soil than is released when the grasses are harvested, processed and used for fuel. The carbon remains sequestered, the researchers say, for more than a century.
Biofuel prospectors graze the native grasslands
David Tilman and colleagues at the University of Minnesota planted various mixtures of native grasses on poor soil to find out how much energy could be derived from the plants as biofuel and how much CO2 was taken out of the atmosphere. Over a period of a decade the researchers showed that a mixture of grasses produced more energy per unit area of land, less pollution and more greenhouse gas reduction than corn grain ethanol or soybean diesel.
One of the main attractions of the system is that the grasses can grow on degraded land that would otherwise be useless for growing food crops. A big worry about biofuels is that they usually require fertile soil, competing with food production. Furthermore, the grasses require little in the way of fertiliser, herbicide or irrigation, thereby reducing energy costs and pollution.
Even after the grass has been burned as an energy source or otherwise processed to release energy, there is a net reduction in CO2 emissions from the fuel. As they grow, the grasses sequester CO2 in their roots, which remain resistant to degradation for many decades.
'When prairie plants grow, about half to two-thirds of the carbon that they fix goes into the roots, which are tough, recalcitrant and resistant to digestion by bacteria or fungi,' Tilman told Chemistry World. 'It takes between one and two centuries to break down this matter and release the carbon back into the atmosphere.' According to his team's findings, this locks up more than four tonnes of CO2 annually per hectare of land.
The researchers calculate that grass-based fuel grown on poor soils could provide around 13 per cent of global petroleum requirements for transport and 19 per cent of global electricity production.
Iain Donnison of the Institute of Grassland and Environmental Research in the UK said that the development of energy crops that grow on marginal land not required for food production was important. 'Such crops must be sustainable and not require high levels of energy-intensive inputs such as cultivation, fertilisers and pesticides,' he said. 'The advantages of such an approach are maintenance of current agricultural practices, particularly in environmentally sensitive areas such as national parks where the visible landscape is important.'
References
D. Tilman et al., Science, 2006, 314, 1598
Earthing system
In electricity supply systems, an earthing system or grounding system is circuitry which connects parts of the electric circuit with the ground, thus defining the electric potential of the conductors relative to the Earth's conductive surface. The choice of earthing system can affect the safety and electromagnetic compatibility of the power supply. In particular, it affects the magnitude and distribution of short circuit currents through the system, and the effects it creates on equipment and people in the proximity of the circuit. If a fault within an electrical device connects a live supply conductor to an exposed conductive surface, anyone touching it while electrically connected to the earth will complete a circuit back to the earthed supply conductor and receive an electric shock.
Regulations for earthing system vary considerably among countries and among different parts of electric systems. Most low voltage systems connect one supply conductor to the earth (ground).
A protective earth (PE), known as an equipment grounding conductor in the US National Electrical Code, avoids this hazard by keeping the exposed conductive surfaces of a device at earth potential. To avoid a possible voltage drop, no current is allowed to flow in this conductor under normal circumstances. In the event of a fault, currents will flow that should trip or blow the fuse or circuit breaker protecting the circuit. A high impedance line-to-ground fault insufficient to trip the overcurrent protection may still trip a residual-current device (ground fault circuit interrupter or GFCI in North America) if one is present. This disconnection in the event of a dangerous condition, before someone receives a shock, is a fundamental tenet of modern wiring practice and in many documents is referred to as automatic disconnection of supply (ADS). The alternative is defence in depth, where multiple independent failures must occur to expose a dangerous condition - reinforced or double insulation come into this latter category.
In contrast, a functional earth connection serves a purpose other than shock protection, and may normally carry current. The most important example of a functional earth is the neutral in an electrical supply system. It is a current-carrying conductor connected to earth, often, but not always, at only one point to avoid flow of currents through the earth. The NEC calls it a groundED supply conductor to distinguish it from the equipment groundING conductor. Other examples of devices that use functional earth connections include surge suppressors and electromagnetic interference filters, certain antennas and measurement instruments.
In low-voltage distribution networks, which distribute the electric power to the widest class of end users, the main concern for design of earthing systems is safety of consumers who use the electric appliances and their protection against electric shocks. The earthing system, in combination with protective devices such as fuses and residual current devices, must ultimately ensure that a person must not come into touch with a metallic object whose potential relative to the person's potential exceeds a "safe" threshold, typically set at about 50 V.
In most developed countries, 220/230/240 V sockets with earthed contacts were introduced either just before or soon after World War II, though with considerable national variation in popularity. In the United States and Canada, 120 volt power outlets installed before the mid-1960s generally did not include a ground (earth) pin. In the developing world, local wiring practice may not provide a connection to an earthing pin of an outlet.
In the absence of a supply earth, devices needing an earth connection often used the supply neutral. Some used dedicated ground rods. Many 110 V appliances have polarized plugs to maintain a distinction between "live" and "neutral", but using the supply neutral for equipment earthing can be highly problematical. "Live" and "neutral" might be accidentally reversed in the outlet or plug, or the neutral-to-earth connection might fail or be improperly installed. Even normal load currents in the neutral might generate hazardous voltage drops. For these reasons, most countries have now mandated dedicated protective earth connections that are now almost universal.
If the fault path between accidentally energized objects and the supply connection has low impedance, the fault current will be so large that the circuit over current protection device (fuse or circuit breaker) will open to clear the ground fault. Where the earthing system does not provide a low-impedance metallic conductor between equipment enclosures and supply return (such as in a TT separately earthed system), fault currents are smaller, and will not necessarily operate the over current protection device. In such case a residual current detector is installed to detect the current leaking to ground and interrupt the circuit.
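As a simplified illustration of why the impedance of the fault path matters (a sketch with assumed example values, not a substitute for the applicable wiring regulations), the prospective earth-fault current is roughly the supply voltage divided by the earth fault loop impedance; automatic disconnection by the overcurrent device only works if that current exceeds its trip threshold, otherwise an RCD is needed:

```python
# Simplified prospective earth-fault current check (illustrative values only).
supply_voltage = 230.0          # volts, assumed nominal line-to-earth voltage

def fault_current(loop_impedance_ohms: float) -> float:
    """Prospective earth-fault current, ignoring source reactance details."""
    return supply_voltage / loop_impedance_ohms

scenarios = {
    "TN system, low-impedance metallic path (0.5 ohm)": 0.5,
    "TT system, earth-electrode path (100 ohm)": 100.0,
}

breaker_magnetic_trip = 160.0   # amps, assumed instantaneous trip threshold
rcd_trip = 0.03                 # amps, typical 30 mA residual-current device

for name, z in scenarios.items():
    i = fault_current(z)
    clears = "breaker trips" if i >= breaker_magnetic_trip else (
        "breaker may NOT trip; a 30 mA RCD would still disconnect" if i >= rcd_trip else "no disconnection")
    print(f"{name}: ~{i:.1f} A -> {clears}")
```

Under these assumptions the TT case shows why residual current devices are relied on where the earth return path has high impedance.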
The first letter indicates the connection between earth and the power-supply equipment (generator or transformer): T means a point of the supply (usually the star point) is connected directly to earth, while I means no point is connected to earth, or only through a high impedance.
The second letter indicates the connection between earth and the electrical device being supplied: T means the exposed conductive parts are connected directly to a local earth electrode, while N means they are connected to the earthed point of the supply via the protective or PEN conductor.
In a TN earthing system, one of the points in the generator or transformer is connected with earth, usually the star point in a three-phase system. The body of the electrical device is connected with earth via this earth connection at the transformer.
The conductor that connects the exposed metallic parts of the consumer's electrical installation is called protective earth (PE; see also: Ground). The conductor that connects to the star point in a three-phase system, or that carries the return current in a single-phase system, is called neutral (N). Three variants of TN systems are distinguished:
- TN-S: separate protective earth (PE) and neutral (N) conductors from transformer to consuming device, which are not connected together at any point after the building distribution point.
- TN-C: combined PE and N conductor all the way from the transformer to the consuming device.
- TN-C-S: combined PEN conductor from transformer to building distribution point, but separate PE and N conductors in fixed indoor wiring and flexible power cords.
It is possible to have both TN-S and TN-C-S supplies from the same transformer. For example, the sheaths on some underground cables corrode and stop providing good earth connections, and so homes where "bad earths" are found get converted to TN-C-S.
In a TT earthing system, the protective earth connection of the consumer is provided by a local connection to earth, independent of any earth connection at the generator.
The big advantage of the TT earthing system is that it is clear of high and low frequency noises that come through the neutral wire from connected equipment. TT has always been preferable for special applications like telecommunication sites that benefit from the interference-free earthing. Also, TT does not have the risk of a broken neutral.
In locations where power is distributed overhead and TT is used, installation earth conductors are not at risk should any overhead distribution conductor be fractured by, say, a fallen tree or branch.
In pre-RCD era, the TT earthing system was unattractive for general use because of its worse capability of accepting high currents in case of a live-to-PE short circuit (in comparison with TN systems). But as residual current devices mitigate this disadvantage, the TT earthing system becomes attractive for premises where all AC power circuits are RCD-protected.
The TT earthing system is used throughout Japan, with RCD units in most industrial settings. This can impose added requirements on variable frequency drives and switched-mode power supplies which often have substantial filters passing high frequency noise to the ground conductor.
In an IT network, the electrical distribution system has no connection to earth at all, or it has only a high impedance connection. In such systems, an insulation monitoring device is used to monitor the impedance.
| | TT | IT | TN-S | TN-C | TN-C-S |
|---|---|---|---|---|---|
| Earth fault loop impedance | High | Highest | Low | Low | Low |
| Need earth electrode at site? | Yes | Yes | No | No | No |
| PE conductor cost | Low | Low | Highest | Least | High |
| Risk of broken neutral | No | No | High | Highest | High |
| Safety | Safe | Less safe | Safest | Least safe | Safe |
| Safety risks | High loop impedance (step voltages) | Double fault, overvoltage | Broken neutral | Broken neutral | Broken neutral |
| Advantages | Safe and reliable | Continuity of operation, cost | Safest | Cost | Safety and cost |
While the national wiring regulations for buildings of many countries follow the IEC 60364 terminology, in North America (United States and Canada), the term "equipment grounding conductor" refers to equipment grounds and ground wires on branch circuits, and "grounding electrode conductor" is used for conductors bonding an earth ground rod (or similar) to a service panel. "Grounded conductor" is the system "neutral". Australian and New Zealand standards use a modified PME earthing system called Multiple Earthed Neutral (MEN). The neutral is grounded(earthed) at each consumer service point thereby effectively bringing the neutral potential difference to zero along the whole length of LV lines.
In medium-voltage networks (1 kV to 72.5 kV), which are far less accessible to the general public, the focus of earthing system design is less on safety and more on reliability of supply, reliability of protection, and impact on the equipment in presence of a short circuit. Only the magnitude of phase-to-ground short circuits, which are the most common, is significantly affected with the choice of earthing system, as the current path is mostly closed through the earth. Three-phase HV/MV power transformers, located in distribution substations, are the most common source of supply for distribution networks, and type of grounding of their neutral determines the earthing system.
There are five types of neutral earthing:
In solid or directly earthed neutral, transformer's star point is directly connected to the ground. In this solution, a low-impedance path is provided for the ground fault current to close and, as result, their magnitudes are comparable with three-phase fault currents. Since the neutral remains at the potential close to the ground, voltages in unaffected phases remain at levels similar to the pre-fault ones; for that reason, this system is regularly used in high-voltage transmission networks, where insulation costs are high.
In an unearthed, isolated or floating neutral system, as in the IT system, there is no direct connection of the star point (or any other point in the network) to the ground. As a result, ground fault currents have no path to be closed and thus have negligible magnitudes. In practice, however, the fault current will not be exactly zero: conductors in the circuit — particularly underground cables — have an inherent capacitance towards the earth, which provides a path of relatively high impedance.
Systems with an isolated neutral may continue operation and provide uninterrupted supply even in the presence of a ground fault. However, while the fault is present, the potential of the other two phases relative to the ground rises to √3 times (about 173% of) the normal operating voltage, creating additional stress for the insulation; insulation failures may inflict additional ground faults in the system, now with much higher currents.
The presence of a sustained ground fault may pose a significant safety risk: if the current exceeds 4–5 A an electric arc develops, which may be sustained even after the fault is cleared. For that reason, isolated-neutral systems are chiefly limited to underground and submarine networks, and to industrial applications, where the need for reliability is high and the probability of human contact is relatively low. In urban distribution networks with multiple underground feeders, the capacitive current may reach several tens of amperes, posing a significant risk to the equipment.
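To give a feel for where such capacitive currents come from, here is a rough, illustrative calculation using the standard estimate I_fault ≈ 3·ω·C·U_phase for a single phase-to-earth fault in an isolated-neutral network. The 20 kV voltage level, 10 km of feeders, and 0.3 µF/km cable capacitance are assumed figures chosen only for the example, not values taken from the text.

```python
import math

# Illustrative estimate of the capacitive earth-fault current in an
# isolated-neutral (IT-style) medium-voltage cable network.
# I_fault ≈ 3 * ω * C * U_phase, where C is the per-phase capacitance to earth.

f = 50.0                  # system frequency, Hz (assumed)
U_line = 20e3             # line-to-line voltage, V (assumed 20 kV network)
U_phase = U_line / math.sqrt(3)

cable_length_km = 10.0    # total feeder length, km (assumed)
C_per_km = 0.3e-6         # per-phase capacitance to earth, F/km (typical order of magnitude)
C = cable_length_km * C_per_km

omega = 2 * math.pi * f
I_fault = 3 * omega * C * U_phase

print(f"Phase-to-ground voltage: {U_phase / 1e3:.1f} kV")
print(f"Capacitive earth-fault current: {I_fault:.1f} A")
# With these figures the result is roughly 33 A -- already well above the
# 4-5 A level at which a sustained arc can develop.
```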
The benefit of low fault current and continued system operation thereafter is offset by the inherent drawback that the fault location is hard to detect. |
The Neolithic British Isles refers to the period of British, Irish and Manx history that spanned from c. 4000 to c. 2500 BCE. The final part of the Stone Age in the British Isles, it was part of the greater Neolithic, or "New Stone Age", across Europe.
Humans first settled down and began farming. They continued to make tools and weapons from flint. Some tools stayed the same from earlier periods in history, such as scrapers for preparing hides.
But the Neolithic also saw the introduction of new stone tools. First, there was a movement away from using microliths to make composite spears and arrows, and instead the universal adoption of flint arrowheads.
Neolithic tools were often retouched all over, by pressure flaking, giving a characteristic appearance and were often laboriously polished, again giving them a distinctive look.
Pottery also developed in this period, and there are examples of Neolithic pottery recorded in this collection.
N286 – Neolithic Tranchet Spear / Arrow (British Find)
Provenance – Found near the River Ter, Chelmsford, Essex
Description – N286 – Neolithic Tranchet Spear / Arrow with visible signs of working and a thinned-out blade, made from a brown flint that feels as though it has been heat treated
Size – 3.3 cm x 4.1 cm
Weight – 6g
Age / Period – Neolithic 4000 BCE – 2500 BCE |
No matter where you live, the conservation and wise use of water in our gardens and landscapes is important. Sustainable water use helps grow beautiful gardens while conserving water and helping reduce water pollution and stormwater overflows.
Amend the soil with compost or other organic matter to increase the soil’s ability to absorb and retain rain and irrigation water. More water is absorbed by the amended soil so less runs off your landscape and into the street. This means less fertilizer and pesticides wash into nearby storm sewers, rivers, and lakes.
Cover bare soil with a layer of organic mulch. It conserves moisture so you water less, prevents erosion and helps suppress weeds. As the mulch decomposes, it improves the soil by adding organic matter and nutrients.
Use rain barrels to capture rainwater that drains off the roof. Purchase a rain barrel or make your own from a recycled food grade container. Evaluate the functional design, appearance and space needed when making your selection. Use the rainwater for watering gardens and containers. Start with a call to your local municipality as some have restrictions on water harvesting, while others encourage this practice and even offer rebates.
Use drip irrigation or soaker hoses for applying water right to the soil where it is needed. You’ll lose less water to evaporation and overspray. Avoiding overhead watering helps reduce the risk of disease. Irrigation systems also reduce your time spent watering and are especially helpful for container gardens and raised beds.
Plant native plants suited to your growing conditions and landscape design whenever possible. These deeply rooted plants help keep rainwater where it falls, reducing the risk of basement flooding and overwhelming storm sewers. The plants slow the flow of water, helping keep it on your landscape for the plants to use. Their deep roots create pathways for rainwater to enter and travel through the soil. Plant roots and soil help remove impurities from the water before it enters the groundwater and aquifers.
When adding walks, patios or other hard surfaces to your landscape, consider permeable options. Permeable pavers allow water to infiltrate the surface rather than run off into the street and storm sewer.
Steppingstones placed in mulched pathways or surrounded by groundcovers make an attractive walkway or patio. Plant groundcovers suited to the growing conditions and those that tolerate foot traffic. The planted spaces between the hard surfaces allow water to move into and through the soil.
Implementing just a few of these changes in your landscape design and water management can help increase your landscape’s sustainability while reducing your workload.
Melinda Myers has written more than 20 gardening books, including Small Space Gardening. She hosts the “How to Grow Anything” DVD series and the Melinda’s Garden Moment TV & radio segments. Her website is MelindaMyers.com. |
Mindfulness Training Might Improve Need Satisfaction and Anxiety in Children with Learning Disabilities
By John M. de Castro, Ph.D.
“Mindfulness is a practice that can help children with LD manage stress and anxiety • Daily meditation gives children a relaxation tool they can call upon when stress levels rise.” – Marcia Eckerd
Learning disabilities are quite common, affecting an estimated 4.8% of children in the U.S. These disabilities present problems for the children in learning mathematics, reading and writing. These difficulties, in turn, affect performance in other academic disciplines. The presence of learning disabilities can have serious consequences for the psychological well-being of the children, including their self-esteem and social skills. In addition, anxiety, depression, and conduct disorders often accompany learning disabilities.
Mindfulness training has been shown to lower anxiety and depression and to improve self-esteem and social skills, and to improve conduct disorders. It has also been shown to improve attention, memory, and learning and increase success in school. So, it would make sense to explore the application of mindfulness training for the treatment of children with severe learning disabilities.
In today’s Research News article “Impact of a Mindfulness-Based Intervention on Basic Psychological Need Satisfaction and Internalized Symptoms in Elementary School Students With Severe Learning Disabilities: Results From a Randomized Cluster Trial.” (See summary below or view the full text of the study at: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02715/full?utm_source=F-AAE&utm_medium=EMLF&utm_campaign=MRK_1184693_69_Psycho_20191217_arts_A), Malboeuf-Hurtubise and colleagues recruited children with severe learning disabilities who were 9 to 12 years of age and attended a special education class. They received an 8-week training program that met once a week for 60 minutes. One group received mindfulness training, including body scan, walking, and breath meditations. The second group received social skills development training, including finding purpose in life, becoming responsible and engaged citizens, and developing a sense of belonging to the school and community. The children were measured before and after training and 3 months later for anxiety, depression, and need satisfaction, including autonomy, competence, and relatedness.
They found that, in comparison to baseline, both groups had significant improvements in competence and significant decreases in anxiety. There were no significant differences between the mindfulness and social skills groups. Because no no-treatment control condition was included, it is not possible to discern whether both interventions produced the observed improvements or whether they were due to a confounding factor such as participant or experimenter bias, Hawthorne effects, or simply time-based effects. But mindfulness training has been repeatedly found in highly controlled experiments to reduce anxiety. So, it is likely that the change observed in this study was due to the mindfulness training.
This is a very vulnerable group of children and improvements in emotions and feelings of competence are potentially very significant for the improvement of their lives. So, further research is warranted.
“mindful meditation decreases anxiety and detrimental self-focus, which, in turn, promotes social skills and academic success for students with learning disabilities.” – Kristine Burgess
CMCS – Center for Mindfulness and Contemplative Studies
This and other Contemplative Studies posts are also available on Google+ https://plus.google.com/106784388191201299496/posts and on Twitter @MindfulResearch
Malboeuf-Hurtubise C, Taylor G and Mageau GA (2019) Impact of a Mindfulness-Based Intervention on Basic Psychological Need Satisfaction and Internalized Symptoms in Elementary School Students With Severe Learning Disabilities: Results From a Randomized Cluster Trial. Front. Psychol. 10:2715. doi: 10.3389/fpsyg.2019.02715
Background: Mindfulness is hypothesized to lead to more realistic appraisals of the three basic psychological needs, which leads people to benefit from high levels of need satisfaction or helps them make the appropriate changes to improve need satisfaction. Mindfulness-based interventions (MBIs) have also shown promise to foster greater basic psychological need satisfaction in students with learning disabilities (LDs).
Objective: The goal of the present study was to evaluate the impact of a MBI on the satisfaction of the basic psychological needs and on internalized symptoms in students with severe LDs. A randomized cluster trial was implemented to compare the progression of need satisfaction, anxiety, and depression symptoms in participants pre- to post-intervention and at follow-up.
Method: Elementary school students with severe LDs (N = 23) in two special education classrooms took part in this study and were randomly attributed to either an experimental or an active control group.
Results: Mixed ANOVAs first showed that the experimental condition did not moderate change over time such that similar effects were observed in the experimental and active control groups. Looking at main effects of time on participants’ scores of autonomy, competence, and relatedness across time, we found a significant within-person effect for the competence need (p = 0.02). Post hoc analyses showed that for both groups, competence scores were significantly higher at post-intervention (p = 0.03) and at follow-up (p = 0.04), when compared to pre-intervention scores. A significant main effect was also found for anxiety levels over time (p = 0.008). Post hoc analyses showed that for both groups, scores were significantly lower at post-intervention (p = 0.01) and at follow-up (p = 0.006), when compared to pre-intervention scores.
Conclusion: Although the MBI seemed useful in increasing the basic psychological need of competence and decreasing anxiety symptoms in students with severe LDs, it was not more useful than the active control intervention that was used in this project. Future studies should verify that MBIs have an added value compared to other types of interventions that can be more easily implemented in school-based settings. |
Storytelling is an ancient human activity, present from oral traditions thousands of years old, through medieval theatre and the invention of the printing press, to the latest e-book. With the advent of computers, however, there has been a desire to change, not only the medium through which we share narratives, but their very nature. We want to make narratives interactive. We want to blur the line between game and narrative, creating stories that change based on their readers' choices.
However, before we can accomplish this effectively, we need to first understand the nature of narrative. Then we can explore how to graft interactive controls to a story and still produce a satisfying result.
To begin our study of narrative, we will review a number of established works. However, each of these works has a particular focus stemming from its author's historical context.
Aristotle, the famous Greek philosopher, lived in the 4th century B.C.E. A student of Plato and a tutor of Alexander the Great, Aristotle wrote on everything from logic to biology to ethics. His works have been lost, found, interpreted, and translated. Sections have been interpolated, and most were only lecture notes to begin with. Over the years, a large body of neo-Aristotelian thought has been established. This body consists of reinterpretations, extensions, and commentary that, though based on Aristotle's philosophical frameworks and still associated with his name, were not part of his original works.
In Poetics, Aristotle aims to understand the nature of the poetic arts--epic and tragic poetry, comedy, dithyramb, and music. However, at least in the sections of his work that have survived, he focuses his analysis primarily on tragedy.
Only in the past 200 years has serious consideration been given to the source and context of Greek tragedy. Greek tragedies were performed during annual festivals dedicated to Dionysus. Each day, a different poet would present three tragic plays, followed by a burlesque play. These plays developed gradually from religious hymns to Dionysus. Over two centuries, the standard number of actors increased from one to three. These plays maintained their lyric nature, focusing on inner feeling and motive more than simple action. They were serious works, conveying both historic and legendary stories, as well as philosophical and religious ideals.
In his introduction to Aristotle, Fergusson explores the effects of these dithyramb roots more closely. The tragic effect--the purgation of emotion--likely comes from its religious history. Also, many of the mythic elements present in dithyrambs have carried over into tragedy. Citing the work of Gilbert Murray, Fergusson shows that both tragedy and dithyramb tend to convey a contest between opposites (Agon), leading to a ritual sacrifice or death (Pathos), usually announced by a Messenger. This leads to a lamentation (Threnos) filled with contrary emotions--the death of the old is also the triumph of the new. The discovery or recognition of the slain hero (Anagnorisis), followed by his resurrection or apotheosis, brings a change from grief to joy (Peripeteia). This pattern of struggle, suffering, sacrifice, and rebirth heavily influences most classic Greek tragedy.
Despite focusing primarily on such a rigid, classical form of tragedy, Aristotle has still provided the basis for many later theories. His view (perhaps slightly expanded) remains one of the most widely accepted views of drama.
Gustav Freytag was born in 1816. During his lifetime, he was a scholar, poet, novelist, critic, playwright, editor, and publicist. He wrote Technique of the Drama in 1863, and it went through six German editions before being translated into English in 1895.
Freytag builds most of his discussion on the foundations laid by Aristotle. Though he does not deal solely with tragedy, he does restrict himself to "serious drama", with classic Greek plays, Shakespeare, and some German plays as examples. Freytag held that a good theory of drama could provide both a guide for creation and a context for criticism. Rather than rigid laws that constrain creativity, rules of drama should be like craftsmen's traditions that let the playwright benefit from the wisdom of the great artisans that preceded him.
Educated in theatre, Brenda Laurel worked as a software engineer and programmer writing educational programs and interactive fairy tales for children. She also worked for Atari. Returning to academia, she proposed using theories of drama and narrative (particularly Aristotle and Freytag) as a framework for designing human-computer interfaces. Though her model may not have won wide support in the field of HCI, her work combining narrative and interaction could be very valuable in the study of interactive narrative. Computers as Theatre was published in 1991.
Mark Stephens Meadows has a background in image composition and animation, as well as graphic, interface, character, environment and information design. Rather than looking at drama, he explores narrative elements in images, comics, animation, software, and interactive computer games. Pause and Effect: The Art of Interactive Narrative was published in 2003.
Keith Johnstone has worked as a teacher and playwright. He worked, wrote, taught, and performed with a number of theatre studios and performance troupes, including the Royal Court Theatre. He has been exploring creativity and improvisation since the late 1950s. His book, Impro: Improvisation and the Theatre, published in 1981, explores a number of games and techniques for reawakening spontaneous creativity. Some of these techniques include group storytelling a word a time, automatic writing, and wearing masks. His work is well-regarded in the field of improv.
Aristotle holds that art is "imitation". Art forms differ in their objects, medium, and mode of imitation. "Tragedy is an imitation, not of men, but of an action and of life, and life consists in action" (Poetics VI). Thus, the primary "object" of tragedy is action. This action is not only physical events, but "praxis", which includes the motivation or rational purpose behind these events. To witness this motivation, we must also have characters who convey their thoughts to us. Thus, the three objects of tragedy are action, character, and thought.
The medium of tragedy is song and diction--that is, melody and spoken words. Its manner is spectacle. That is, tragedy is enacted before an audience. Though epic poetry shares the same medium, its manner is different--it is performed as simple narration.
Freytag essentially reiterates Aristotle's definition:
In an action, through characters, by means of words, tones, gestures, the drama presents those soul-processes which man experiences from the flashing up of an idea, to passionate desire and to a deed, as well as those inward emotions which are excited by his own deeds and those of others (Freytag, p104).
Also important to Freytag is the "idea" of the play. The playwright's "idea" is what directs the molding and selecting of material into the single unified action of the play. (Both Aristotle and Freytag describe reworking existing historical or legendary material, rather than creating entirely new stories.)
As Aristotle does, Freytag stresses that, for serious drama, we must understand the hero's motivation and inner passions, and not simply witness his actions. (Freytag goes on to provide many more guidelines for the construction of serious drama, such as the appropriate source of material, the effect of an audience on the experience of a drama, the place of the fanciful and magical, and the respectability of the subject matter and characters.)
Laurel also builds on Aristotle (in a very neo-Aristotelian way, based on the works of Sam Smiley) by supposing formal and material causes between the elements of tragedy. Aristotle lists the elements in order of principal importance: action, character, thought, diction (prose or verse), song (melody), spectacle (performance). Laurel reformulates these slightly as follows:
A formal cause is the form of something--what it is trying to be. The form of a house is the architect's design of it. (Or, more generally, the abstract form of "house-ness", of which a particular house takes part.) A material cause is the substance that comprises a thing. In our house example, the material cause would be the brick, mortar, and wood used in its construction.
So, in Laurel's model, the progression of formal causes is something like an abstract process of construction. We determine that the action necessitates particular characters, which then espouse certain thoughts, which are given in particular utterances of language. Patterns are slightly less clear, but seem to be any meaning-carrying signs. Aristotle claims music and harmony is an instinctual pleasure for humans; Laurel claims we find the same joy in recognizing patterns. Enactment would be all the sensory elements presented to the audience during the performance of the play.
When we make sense of a play as an audience, we extract patterns from the experience of the enactment. We discern meaningful language, from which we determine the character's thoughts and motives, which gives us an insight into the nature of those characters. The events these characters are involved in serves as the action of the play.
I mildly disagree with this model, however. First of all, Aristotle only listed the first three (action, character, and thought) of these elements as the objects of imitation; the others belong to the medium and the manner. If a story were pantomimed, Diction/Language and Song/Pattern would become Gesture, though the manner would still be Spectacle/Enactment. If the same story were written as text, the medium would be Diction/Language, but the manner would be different--Narration or Reading, perhaps. It seems that Language, Pattern, and Spectacle are not as essential to narrative as its objects.
Also, I think adding the causes constrains the elements to only affecting those elements immediately above or below them in the stack. For example, in Laurel's model, characters are dictated solely by the action, which does not seem to leave room for background characters that provide setting or extend the development of other characters. In her model, language is seemingly the only way we can know a character's thoughts or motivation, when in practice we can tell much by their dress, gesture, and mannerisms. Either these other signs are a type of Language, or else we are determining character based on Pattern. Also, there are many "patterns"--such as background scenery, lighting, staging, literary references, etc--that enrich our experience of a play and add (metaphorical) meaning directly to the action as a whole.
Adding Freytag's notion of a play's seminal "idea", we could reformulate Laurel's model. We could assume that the playwright is attempting to convey a full "idea" (or experience or theme or perspective) to the audience. The principle means to do this is still through the action. However, all the other elements can directly reflect the central idea to some degree. For instance, the lighting may grow increasingly stark during a play to emphasize the polarization between characters and their increasing alienation from each other. Yet these patterns need not be part of the language or thought of any character. We may enjoy a particularly well-wrought line of dialogue, yet admit that it is more beautiful than it needs to be to simply advertise a character's internal state.
A modified version of Laurel's model.
Johnstone too implies that action is key to narrative. However, he doesn't explore any of Aristotle's other elements. Instead, he holds that if you pay attention to the structure of the narrative, the content will take care of itself.
Not all authors hold that action is primary to narrative, however. Meadows claims that the essential component of narrative is perspective--choosing what to tell, as well as how to tell it and in what order. A football game may have all the elements of a drama--action, characters (players), thought (strategy), etc.--yet still seem a weak narrative, or none at all. Meadows points out that this is because we lack a perspective. We are spectators to a series of events, not the audience of a story:
A football game quickly becomes a story when a sports-caster is there to narrate, providing opinion on who is doing what and why at what times. This is the role of the sports-caster; to insert opinion where there was only action (Meadows, p224).
Up to this point, I have been using drama and narrative synonymously. I feel justified in doing this because I consider narrative to refer to any kind of story, regardless of medium. All of the following are (usually) narratives in this sense: a tragic drama, an epic poem, a fantasy novel, a co-worker's tale of his weekend fishing trip, a comic book, a role-playing computer game. (Since we are using narrative in its broadest sense, the terms audience, reader, player, and user are largely synonymous here. All are recipients of a narrative, albeit through different mediums.)
This is not the only meaning of narrative in circulation, however. Laurel uses narrative to refer to a spoken or written account, as compared to drama, which relies on enactment or performance. Though the key difference is whether the story is narrated or enacted, Laurel claims that there are other common differences. Dramas tend to portray events in real-time (or in less time, if some of the events are omitted). Narratives can extend time by taking pages to describe a couple seconds of action. Drama tends to have a tighter unity of action, largely because it is constrained to a performance duration of only two to three hours. Narratives (such as books) can include action that is episodic, thematic, or tangential to the central action (Laurel 94-95).
Certainly, this is a good reminder that the medium makes a difference. However, in this review, we are looking for elements of narrative (once again in the broadest sense of narrative) that are common across mediums--whether movies, plays, or novels.
By common agreement, the essence of narrative is a cohesive action, a series of events, a plot. This action will likely need characters to bring it about, and it may not be a narrative until shaped through a certain perspective. But still, it is the action that is most essential. So what are the features of this action?
Aristotle defines a plot in general as "the arrangement of the incidents" (Poetics VI). With respect to tragedy in particular, a plot is
an imitation of an action that is complete, and whole and of a certain magnitude; for there may be a whole that is wanting in magnitude. A whole is that which has a beginning, a middle and an end (Poetics VII).
He goes on to explain that the plot should not start or end haphazardly. The internal parts should have an orderly arrangement. The whole should not be so large that spectators cannot easily grasp the unity and order it contains. Yet it must still be large enough to admit a change in fortune.
A tragedy should be but a single, unified action. (This is not true of epics, which are much longer and can include a number of connected but separate stories. Also, Aristotle admits the existence of tragedies with a "double thread of plot" (Poetics XIII), but holds these in lower esteem. An example from today would be soap operas, which are very long, contain multiple related plots, and are also held in low esteem as examples of the perfect narrative.) The parts of a plot should be connected through necessary and probable causes, "such that, if any one of them is displaced or removed, the whole will be disjointed and disturbed" (Poetics VII). Causes are probable when they are logical to us based on the characters' motives.
As already stated, a tragedy should include a change in fortune. The section of the tragedy from its beginning to this change in fortune is called the Complication, and the part after it is the Unraveling (or Denouement). Other parts of a tragedy that do not generalize as well to other forms of drama include Recognition and Reversal of Situation. These are present in "complex" plots, but not in "simple" plots. A tragedy should have consistent, true-to-life characters with good purpose and propriety. And a tragedy should bring about a feeling of fear and pity in its audience.
Freytag divided drama into its play and counter-play. The play is the part of the drama when the hero affects the world--when his will and desire prompt him to an action that changes the world around him. The counter-play is the part of the drama when the hero is plagued by his opponents or his environment--when the world about him generates feelings within him and drives his actions. Either the play or the counter-play can come first. The transition from one to the other is the climax of the drama. (This is clearly influenced by Aristotle's complication, change of fortune, and denouement.)
Within this basic structure, a plot can be further broken down into 5 parts and 3 crises.
This "triangle" is what Freytag is best known for. He gives no axes for this diagram. He describes "rising" and "falling" movements, but does not describe precisely what is being increased or decreased. Presumably, since the two sides comprise the play and the counter-play, we see the world-affecting action of the hero and the hero-affecting action of the world in return. Moving along this plot triangle, we encounter the following plot elements:
Freytag covers some additional plot details, such as how these element correspond to scenes of a drama, but most of these details are particular to his focus on the dramas of his time.
Freytag's notion of graphing the plot has inspired later critics, who have added to his model and disagreed over many of his labels. Laurel offers a modern version of Freytag's triangle. Though no longer a triangle, it still bears his name.
A modern version of "Freytag's Triangle."
In fact, many Freytag Triangles now include risings and fallings within each segment, making for jagged rising actions and jagged falling actions. This is because when we are engaged in a narrative, we do not experience a smooth rise to the climax. The plot takes many twists and turns, some questions are answered on the way, and various small problems are encountered and resolved before the climax.
One part of Freytag's model no longer present in this modern reformulation is the play and counter-play. While these may not be the most useful concepts in themselves, the notion that there is a tension or conflict between opposing forces in a narrative may be a key attribute. However, none of the other authors reviewed mention conflict as an integral part of narrative.
And an important question still remains in this updated model. What exactly is this "complication" that increases during the course of the plot? Surely this is the secret to how plot structure relates to the audience's experience, since we feel something increasing and decreasing during the performance of a drama. We will explore the nature of this audience response in the next section.
Laurel presents another model of plot which she calls the "flying wedge". In this model, a drama opens with (practically) unlimited potential. As the first scene progresses, certain possibilities are eliminated based on the setting, character's motives, etc. As the plot progresses, the audience begins to suspect certain outcomes to be more probable than others. Eventually, we reach the climax--the moment when all other possible story paths are eliminated--and we are left with a single, necessary outcome.
Laurel's Flying Wedge
Thinking of plot in terms of decreasing the possible/probable alternatives may be a helpful model for computer-generated plots. It may be possible to start with a tree or graph of possible plot lines and gradually eliminate improbable paths until we reach a climax. In any case, it is a reminder that Freytag's Triangle is not the only model of plot structure.
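As a rough sketch of how this pruning might look computationally, consider the following hypothetical Python fragment. The plotlines, event names, and flat-list representation are invented purely for illustration; the point is only that each revealed event narrows the "wedge" of remaining possibilities.

```python
# A minimal, hypothetical sketch of Laurel's "flying wedge": the space of
# possible plotlines narrows as each actual event eliminates the alternatives
# that are no longer consistent with what the audience has seen.

possible_plots = [
    ("hero meets mentor", "hero refuses quest", "village is destroyed"),
    ("hero meets mentor", "hero accepts quest", "hero defeats villain"),
    ("hero meets mentor", "hero accepts quest", "hero is betrayed"),
    ("hero meets rival",  "hero accepts quest", "hero defeats villain"),
]

def prune(plots, events_so_far):
    """Keep only the plotlines whose opening matches the events already shown."""
    n = len(events_so_far)
    return [p for p in plots if p[:n] == tuple(events_so_far)]

story_so_far = []
for event in ["hero meets mentor", "hero accepts quest"]:
    story_so_far.append(event)
    possible_plots = prune(possible_plots, story_so_far)
    print(f"After '{event}': {len(possible_plots)} possible endings remain")

# When only one plotline remains, the outcome has become "necessary" -- the climax.
```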
It is important to distinguish between a finished narrative and the process of creating it. Freytag and Aristotle imply that an author starts with the "idea" of a drama, and then conceives its full and complete action, and then determines the characters. While some may create narratives this way, it may be the exception rather than the rule. Many authors start with only a single image or a good character, and the plot develops as they write. They may not know the ending when they start, or their originally intended ending may change.
Another example of this sort of plot creation is Johnstone's improvisational theatre. In improv, there is no single, unified action or idea at the outset of the performance. Instead, the focus in on generating only the next successive event or line of dialog. Yet a basic plot can still emerge. Johnstone claims that the trick is to interrupt routines. Performers can establish a scenario with a usual, predictable sequence of events, and then try to break (or "tilt") that routine. This will lead another routine. As long as the performers don't "cancel" their current routine (bring it to a close before they can successfully tilt it to generate another routine), they can generate a series of events indefinitely.
However, Johnstone also points out that narratives are not a simply a series of events. If you reincorporate past events, you will gradually find resolutions for previous routines. Eventually, the performers can end all the routines and bring the story to a satisfying conclusion. This is what Johnstone meant by claiming that we only need to mind the structure, and the content will take care of itself. Following simple rules of routine creation, tilting, and re-incorporation can still generate workable storylines.
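A toy sketch of this routine/tilt/reincorporation loop might look like the following Python fragment. The routines, tilts, and resolution text are placeholder content invented for the example; only the structure Johnstone describes is the point.

```python
# A toy generator in the spirit of Johnstone's improv rules: establish a
# routine, "tilt" (interrupt) it to spawn a new one, then reincorporate the
# earlier, unresolved routines to bring the story to a close.

tilts = {
    "making breakfast": "the toaster catches fire",
    "the toaster catches fire": "the fire brigade arrives",
}

def generate_story(opening, steps=3):
    open_routines = [opening]            # routines established but not yet resolved
    story = [f"We begin with {opening}."]
    for _ in range(steps):
        current = open_routines[-1]
        tilt = tilts.get(current)
        if tilt:                          # interrupt the current routine
            story.append(f"Suddenly, {tilt}.")
            open_routines.append(tilt)
        else:                             # nothing left to tilt: start wrapping up
            break
    # Reincorporation: resolve the open routines in reverse order.
    for routine in reversed(open_routines):
        story.append(f"Eventually, the matter of '{routine}' is resolved.")
    return " ".join(story)

print(generate_story("making breakfast"))
```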
Part of Aristotle's definition of a tragedy includes its effect on the audience: "...through pity and fear affecting the proper purgation of these emotions" (Poetics VI). Freytag reiterates this point: though the technical nature of tragedy has changed over the centuries, the emotion it produces is the same. This brings us to perhaps the most important part of a narrative--the response it instills in the audience.
Disregarding specific emotions, such as fear, pity, joy, despair, etc., there is still a great variety of terms used to express what an audience experiences while in the grip of a good narrative. Here are some of the more common terms used by the authors we've covered so far, mapped by their seeming relationship to each other.
A mapping of potential audience responses to a "good" narrative.
At the center, we have engagement. This seems to be the key response to a good narrative--we are "sucked in". The reasons for this are scattered around the edges of the map. To the left, we see the emotional responses: empathizing with the characters, feeling vicarious emotions, and possibly "purging" those emotions. In the lower-right corner we see the "rational" responses--speculation on the future of the plot and waiting for more information on what will happen next. This feeds into the top-right corner--the tension of a particularly gripping tale. And at the top, we see the results of a good narrative completed--satisfaction and enjoyment.
Though there are some natural groupings, this map is still a tangled weave. Empathizing with characters can increase tension if harm might come to those characters. If you are a particularly cerebral person, stimulation of thought and imagination might make for the greatest satisfaction.
Somewhere in this tangle is the meaning of the "complication" axis of our plot diagram. As the plot continues, our "tension" or "interest" rises and falls. Laurel holds that this is related to generated and satisfied information needs. Questions concerning the current state of affairs and potential future actions are constantly being raised and answered. We are interested in learning significant facts.
But what makes these facts significant? Why do we care? Perhaps because we empathize with hero, and want to know what will happen to him. We are in a state of doubt about his future because there is tension or conflict in the story--will he overcome the challenges he faces, and if so, how?
Clearly, there exists this tangled semantic web of terms because they all relate to each other. (And none of the authors reviewed have offered viable models to explain the relationships present here.) But if we can call all this tension/interest/wondering our "engagement" with the story, there still seem to be two phases of user response. First of all, we want to be "engaged" during the story. But, when the end comes, we also want to be "satisfied" after the story. It is possible to be engaged for the length of a narrative, but yet, when we find the end is weak and improbable, we can still be unsatisfied with the performance overall.
This confusion about how, exactly, plot structure causes user response demonstrates the tension between the objective rules of a science and the subjective results of an art. Though we may determine the "craftsman's traditions" of how to compose a story, it does not mean we will achieve an engaging and satisfying result. We may find that we can break the rules--start the story with the climax, include two plotlines--and still produce a great story. And we can follow the rules and produce a dull narrative.
Yet we know this much: for whatever reason, a well-crafted narrative will keep us engaged and leave us satisfied.
A topic covered by none of these authors is the role of meta-narrative in audience response. From Aristotle, we see that, even two and a half millennia ago, there were different kinds of narrative--tragedy, epic, comedy. And these different kinds, even when sharing the same medium and similar manners, had different rules concerning their construction and different audience expectations. For instance, characters should have noble purpose in tragedy but could be acceptably ignoble in comedy. Tragedy should restrict itself to a single action, but it is acceptable for epics to comprise multiple (though related) episodic actions.
As Freytag stated, rules of structure provide a framework for both creation and criticism. Meta-narrative cues give the audience a clue as to what sort of narrative they are about to experience. Some of these depend on the medium: as Laurel pointed out, narratives differ whether they are meant to be performed or read. And even in the same medium, narratives can have different manners: audience expectations are very different for improvisational theatre than they are for scripted comedy.
Among all the narratives of a particular medium and manner, we often differentiate in terms of genre. Genres can provide very specific constraints on a narrative. Some examples are:
Sometimes even an author is established enough to have associated "genre" information:
Genres set up audience expectations. This can help the audience approach the narrative properly. For example, they know not to get too attached to the characters in a horror film. It can also sometimes be used to surprise the audience by breaking the rules--such as killing the characters off in the "wrong" order in a horror film.
A narrative may have a theme or issue it wishes the audience to consider. This is very closely related to point-of-view and Meadows's notion that perspective is essential to all narratives. We may also enjoy a narrative for technical reasons--it has our favorite actress or great cinematography. In ancient Greek tragedies, roles in the drama were assigned to different actors based on the thematic connection between the roles and the prestige of the actor. These are all further examples of meta-narrative shaping our experience of a narrative.
We have now explored narrative at some length. But we began this review in the hopes of learning how to produce satisfying interactive narratives.
By interactive narrative, I mean a narrative in which the audience can effect a significant change in the narrative.
Audiences can already affect traditional narratives in small ways. Their responses affect actors performing a drama on stage. They can choose how fast they read a book or when they put it down. They choose which movies they go to see at the cinema. But we usually intend a higher degree of interactivity than this when discussing interactive narrative.
The most common assumption behind interactive narrative is that the audience will be able to direct the plot as it unfolds, and so this will be our main interest. However, it should be remembered that narratives can be made interactive to a smaller degree. This can be as minor as the audience selecting the initial parameters, such as the setting or genre of the story. They may justify character actions, or choose different points-of-view from which to view the action.
Meadows explores the spectrum of plot structures common in today's computer-game interactive narratives. They range from "impositional" structures, in which the user uncovers a single plotline, to "expressive" structures, in which users are free to generate their own plots in an open, simulated environment.
A nodal plot structure gives an author the most narrative control. It can have a proper, pre-constructed plot structure. The difference from traditional narrative is that there are decision points where the user needs to complete some task in order to advance the story. Frequently, these are "do-or-die" decision points. If the user fails, his character dies, and he reloads a saved game and tries the task again.
A modulated plot structure has multiple plotlines. Decisions the user makes at certain points will result in a different sequence of events. However, though there may be multiple possible endings, all the possible storylines are known by the game author. By playing multiple times, the user can eventually experience all the different plot lines.
An open plot structure has no discernable story arc. It provides a world for users to explore, and they must provide their own justification for their actions and create their own narrative in this way.
These three interactive plot diagrams demonstrate the difficulty present in today's approaches to interactive narrative. A nodal plot structure has a strong narrative, but the interaction does not change the story--it's all narrative. An open structure leaves the narrative creation entirely up to the user--it's all interaction. The modulated structure is little better. Because all the plots already exist, it is like a collection of overlapping nodal structures glued together at the starting point. There are still a limited number of pre-existing possible narratives.
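To make this limitation concrete, here is a small, hypothetical sketch of a modulated plot structure represented as a hand-authored branching graph. The scenes and choices are invented for the example; every path through the graph is pre-written, which is exactly the limitation just described.

```python
# A hypothetical modulated plot structure: a hand-authored branching graph.
# Each node is a scene; each outgoing edge is a choice the player may make.
# Every possible storyline already exists in the data.

plot_graph = {
    "ambush":        {"fight": "wounded", "flee": "lost in woods"},
    "wounded":       {"seek healer": "healer's debt", "press on": "tragic end"},
    "lost in woods": {"follow river": "healer's debt"},
    "healer's debt": {},   # an ending
    "tragic end":    {},   # an ending
}

def play(graph, start, choices):
    """Follow the player's choices through the graph and return the scenes visited."""
    scene, path = start, [start]
    for choice in choices:
        options = graph.get(scene, {})
        if choice not in options:
            break               # unsupported choice: the structure cannot adapt
        scene = options[choice]
        path.append(scene)
    return path

print(play(plot_graph, "ambush", ["fight", "seek healer"]))
# ['ambush', 'wounded', "healer's debt"]
```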
This is not to say that these three structures cannot provide very engaging and extremely satisfying experiences. But it seems that there must be other approaches.
Laurel insightfully reveals the problem with existing approaches to interactive narratives (and many HCI systems in general). We tend to start with a narrative, and then add the interaction. It is like performing a play with the audience milling around among the actors. The subtle, but key, difference Laurel calls for is to make the audience actors too. Instead of worrying about how having the audience on stage is going to mess up the planned script, start with the audience on the stage as actors and then generate the script together!
But there is still a problem of balance here. Treating the audience like actors does not make them behave like actors. As Johnstone explains, new improv students often have difficulty keeping up an extended improv because they tend to "block" offers. That is, when offered a potential plotline opportunity, they will decline it and keep with the current routine. As mentioned before, this can cancel the current routine--it doesn't go anywhere. (Part of the reason for this blocking is related to attempts by novices to maintain control and higher status than the other actors.) A collaboration between a user and a system to generate a narrative cannot leave too much up to the user. This is precisely why existing open plot structure games can be unsatisfying--they leave all the narrative work to the player.
Even if we are continually creating new plots for the user, we must be cautious not to err in the opposite direction of leaving the player too few choices. Users frequently refuse to go along with a plot. This may be because they are obstinate, bored, or simply curious about the abilities of the system to adapt to their actions. The system may be poorly designed, so they may not know what they need to do, or how to accomplish it even if they do know. Or the system may simply not support the choices they want to make. Laurel explores how we can use constraints, both intrinsic and extrinsic to the narrative context, to attempt to guide users into choosing allowable actions.
If we have been generating narratives and then trying to add interaction, starting with interaction and then trying to generate a narrative will be a new approach. It could reduce the current tension between authors and players over control of a pre-written plot. But a new approach does not mean a cure-all. The same problems of balance--making the plot flexible enough but still substantial--remains to be dealt with. And it raises new issues of how best to generate narratives. This is hard enough for a computer to do alone, but it must also collaborate with the user. Improv and one-step-at-a-time plot creation technique may well be the key. However, unlike humans, with a long history of "instinctual" storytelling and meta-narrative knowledge, computers may well need to mind the structure and the content.
"Poetry in general seems to have sprung from two causes, each of them lying deep in our nature. First, the instinct of imitation... Next, there is the instinct for 'harmony' and rhythm" (Aristotle, Poetics IV).
Story-telling--the "imitation of action"--is indeed a deep-seated human activity. Like all "instinctual" activities, it becomes very hard to tease apart the different components and rules that comprise the activity. Yet, based on this review of five authors, we have learned a few things that might help us add interaction to our narratives.
First, action is the first principle of narrative. In order to produce this action, characters are needed. And in order to understand those characters, we must be aware of the thoughts that motivate their actions.
The produced action should be complete and connected. The included events should seem probable, or even necessary, based on the motives and abilities of the characters. All included events should be necessary to furthering the action. In this way, the action becomes a unified plot. There may be multiple plotlines, but this is generally not encouraged.
There is a general structure to a plot. The setting and main characters should be introduced early, then the focus should be on bringing the rising action to a climax, and finally all the loose threads (unfinished "routines") should be tied off in the falling action. In this way, a plot has a clear beginning, middle, and end.
The structure of the plot is directly related to the audience's response. During the narrative, the audience should be "engaged". Events in the plot can increase and decrease this engagement. Engagement is related to a number of factors, including empathy with the characters, enjoyment of the emotions generated, suspense, and a simple desire to know what happens next. At least part of this tension likely stems from the conflict or tension faced by the characters within the plot itself. If the audience is engaged throughout the narrative, and the plot is unified and complete, they will likely be satisfied at the conclusion.
The enjoyment of narrative is influenced by a number of meta-narrative factors, most notable of which are the medium, manner, and genre. The genre, in particular, informs the audience what to expect and how to judge the narrative.
And we must remember why we tell narratives in the first place--to convey a perspective.
We have succeeded in synthesizing a basic outline of narrative theory. We can use this as a foundation for future critiques of generated narratives. As a potential avenue for generating satisfying interactive narratives, more research should be done into automatically and collaboratively generating stories. Improv is likely a fruitful source of techniques apropos to this sort of creation--creating routines, breaking them, reincorporating them, completing the created patterns to form a plot. Computers may be at a disadvantage, lacking the (repressed) human capacity for spontaneous creation. But this may be outweighed by limiting creation to a tight domain, using meta-narrative constraints, and relying on some collaboration from the user. Other potential sources of techniques are table-top role-playing games and the study of tropes, or story elements.
This review is only the first step.
Aristotle. Poetics. Translated: S. H. Butcher. Introduction: Francis Fergusson. New York: Hill and Wang, 1961.
Freytag, Gustav. Technique of the Drama. Translated: Elias J. MacEwan, from 6th German edition. Chicago: S.C. Griggs & Company, 1895.
Johnstone, Keith. Impro: Improvisation and the Theatre. NY: A Theatre Arts Book/Routledge, 1979.
Laurel, Brenda. Computers as Theatre. Reading, Mass: Addison-Wesley Publishing Co., 1991.
Meadows, Mark Stephens. Pause & Effect: The Art of Interactive Narrative. Indianapolis, IN: New Riders, 2003.
Last Edited: 13 May 2005
©2005 by Z. Tomaszewski. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
Endodontics is a specialized branch of dentistry that deals with the complex structures found inside the teeth. The Greek word “Endodontics” literally means “inside the tooth” and relates to the tooth pulp, tissues, nerves, and arterioles. Endodontists receive additional dental training after completing dental school to enable them to perform both complex and simple procedures, including root canal therapy.
Historically, a tooth with a diseased nerve would be removed immediately, but endodontists are now able to save the natural tooth in most cases. Generally, extracting the inner tooth structures, then sealing the resulting gap with a crown restores health and functionality to damaged teeth.
Signs and symptoms of endodontic problems:
- Inflammation and tenderness in the gums.
- Teeth that are sensitive to hot and cold foods.
- Tenderness when chewing and biting.
- Tooth discoloration.
- Unexplained pain in the nearby lymph nodes.
Reasons for endodontic treatment
Endodontic treatment (or root canal therapy) is performed to save the natural tooth. In spite of the many advanced restorations available, most dentists agree that there is no substitute for healthy, natural teeth.
Here are some of the main causes of inner tooth damage:
Bacterial infections – Oral bacteria are the most common cause of endodontic problems. Bacteria invade the tooth pulp through tiny fissures in the teeth caused by tooth decay or injury. The resulting inflammation and bacterial infection jeopardize the affected tooth and may cause an abscess to form.
Fractures and chips – When a large part of the surface or crown of the tooth has become completely detached, root canal therapy may be required. The removal of the crown portion leaves the pulp exposed, which can be debilitatingly painful and problematic.
Injuries – Injuries to the teeth can be caused by a direct or indirect blow to the mouth area. Some injuries cause a tooth to become luxated or dislodged from its socket. Root canal therapy is often needed after the endodontist has successfully stabilized the injured tooth.
Removals – If a tooth has been knocked clean out of the socket, it is important to rinse it and place it back into the socket as quickly as possible. If this is impossible, place the tooth in special dental solution (available at pharmacies) or in milk. These steps will keep the inner mechanisms of the tooth moist and alive while emergency dental treatment is sought. The tooth will be affixed in its socket using a special splint, and the endodontist will then perform root canal therapy to save the tooth.
What does an endodontic procedure involve?
Root canal therapy usually takes between one and three visits to complete. Complete X-rays of the teeth will be taken and examined before the treatment begins.
Initially, a local anesthetic will be administered, and a dental dam (protective sheet) will be placed to ensure that the surgical area remains free of saliva during the treatment. An opening will be created in the surface of the tooth, and the pulp will be completely removed using small handheld instruments.
The space will then be shaped, cleaned, and filled with gutta-percha. Gutta-percha is a biocompatible material that is somewhat similar to rubber. Cement will be applied on top to ensure that the root canals are completely sealed off. Usually, a temporary filling will be placed to restore functionality to the tooth prior to the permanent restoration procedure. During the final visit, a permanent restoration or crown will be placed.
If you have questions or concerns about endodontic procedures, please contact our office. |
The transfer of new information into a register is referred to as loading the register. If all the bits of the register are loaded simultaneously with a common clock pulse, the loading is said to be done in parallel.
Registers hold an important position in computer architecture. They are small temporary storage areas inside the processor where newly fetched data is held. A processor register is a quickly accessible location available to a digital processor's central processing unit (CPU); registers usually consist of a small amount of fast storage. In computer engineering, a register-memory architecture is an instruction set architecture that allows operations to be performed on (or from) memory as well as on registers. Registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, as on the DEC PDP-10.
MIPS is a load/store architecture (also known as a register-register architecture); except for the load/store instructions used to access memory, all instructions operate on registers. There are many different types of registers in use today, each performing a specific function in the computer system. In a register-register organisation, ALU operations are performed only on register data, so operands must first be loaded from memory into registers before they can be used.
Different machines have different register organizations and use different names for their registers. Registers sit at the top of the memory hierarchy and are the fastest way for the system to manipulate data. Registers are normally measured by the number of bits they can hold, for example an 8-bit or a 32-bit register.
The base and limit registers are not accessed like ordinary registers; their contents are held in a special data structure called a segment descriptor.
A common way to divide computer architectures is into Complex Instruction Set Computer (CISC) and Reduced Instruction Set Computer (RISC). Note in the first Chapter 1. An Introduction to Computer Architecture Each machine has its own, unique personality which probably could be defined as the intuitive sum total of - Register - register, where registers are used for storing operands. Such architectures are in fact also called load - store architectures, as only load and
We can perform arithmetic operations on the numeric data which is stored inside the registers. Example : R3 <- R1 + R2. The value in register R1 is added to the Introduction to Computer Architecture Unit 2: Instruction Set Architecture CI 50 (Martin/Roth): Instruction Set Architectures 2 Instruction Set CI 50 An instruction set is the aspects of a computer architecture visible to a programmer, including the native datatypes, instructions, registers, addressing modes, memory architecture, interrupt and exception handling and external input/output. Registers can also be classified into general purpose and special purpose types
components as registers, decoders, arithmetic elements, and control logic. The various modules are interconnected with common data and control paths to form a digital computer system. Register Transfer Language Digital modules are best defined by the registers they contain and the operations that are performed on the data stored in them GATE Computer science and engineering subject Computer Organization and Architecture (Register Transfer Language) from morris mano for computer science and information technology students doing B.E, B.Tech, M.Tech, GATE exam, Ph.D
Example architecture: VMIPS Loosely based on Cray-1 Vector registers Each register holds a 64-element, 64 bits/element vector Register file has 16 read ports and 8 write ports Vector functional units Fully pipelined Data and control hazards are detected Vector load-store unit Fully pipelined One word per clock cycle after initial latency Scalar. Register-transfer level - Wikipedia RTL is a very low-level hard hardware design methodology that assumes you have clocked logic with banks of registers to store state and combinational logic between them. As an implementation methodology it proba.. Computer Organization and Architecture Tutorial | COA Tutorial with introduction, evolution of computing devices, functional units of digital system, basic operational concepts, computer organization and design, store program control concept, von-neumann model, parallel processing, computer registers, control unit, etc
Computer architects make usage of different types of computers in order to design new type of computers. In computer architecture, the main emphasis is on the logical pattern, computer pattern, and the system pattern. In the same way, it is mainly concerned with the behavior as well as the structure of the computer as seen by the user Memory Buffer Register (MBR) is the register in a computers processor, or central processingunitCPU, that stores the data being transferred to and from the immediate access store. It acts as a buffer allowing the processor and memory units to act independently without being affected by minor differences in operation Major parts of a CPU . Below we see a simplified diagram describing the overall architecture of a CPU. You must be able to outline the architecture of the central processing unit (CPU) and the functions of the arithmetic logic unit (ALU) and the control unit (CU) and the registers within the CPU.. Do I understand this, part one [ Computer Architecture:Introduction 2. Instruction Set Architecture 3. using the MIPS architecture as a case study and discuss the basics of microprogrammed control. The two registers to be read are always specified by the Rs and Rt fields,.
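To make the register-transfer notation concrete, here is a minimal, illustrative Python sketch of a register file executing the micro-operation R3 <- R1 + R2. The register names, the register count and the 32-bit width are assumptions made for the example; this is not the design of any real machine.

```python
# Minimal register-transfer sketch: a tiny register file executing R3 <- R1 + R2.
# Register names, register count and the 32-bit width are illustrative assumptions.

MASK_32 = 0xFFFFFFFF  # keep values in a 32-bit range, as a fixed-width register would


class RegisterFile:
    def __init__(self, names):
        # Every register starts cleared to zero.
        self.regs = {name: 0 for name in names}

    def load(self, name, value):
        """Loading a register: transfer new information into it."""
        self.regs[name] = value & MASK_32

    def add(self, dest, src1, src2):
        """Micro-operation  dest <- src1 + src2  on register operands only."""
        self.load(dest, self.regs[src1] + self.regs[src2])


rf = RegisterFile(["R1", "R2", "R3"])
rf.load("R1", 40)
rf.load("R2", 2)
rf.add("R3", "R1", "R2")   # R3 <- R1 + R2
print(rf.regs["R3"])       # prints 42
```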
RISC & CISC MCQs : This section focuses on RISC & CISC of Computer Organization & Architecture. These Multiple Choice Questions (MCQ) should be practiced to improve the Computer Organization & Architecture skills required for various interviews (campus interview, walk-in interview, company interview), placements, entrance exams and other competitive examinations Computer Organization and Architecture MCQs Set-21 If you have any Questions regarding this free Computer Science tutorials ,Short Questions and Answers,Multiple choice Questions And Answers-MCQ sets,Online Test/Quiz,Short Study Notes don't hesitate to contact us via Facebook,or through our website.Email us @ [email protected] We love to get feedback and we will do our best to make you happy This is because the registers are the 'fastest' available memory source. The registers are physically small and are placed on the same chip where the ALU and the control unit are placed on the processor. The RISC instructions operate on the operands present in processor's registers. Below we have the block diagram for the RISC architecture In this article, we are going to perform the computer architecture tutorial which will help the computer science student and also help those students who are preparing for GATE(CS/IT) and UGC Net (CS) exam.. I have prepared a tutorial on some important topics of computer organization and architecture. These tutorials are linked here in this main computer organization tutorial page This set of computer organization and architecture objective questions include the collections of objective questions on computer organization and architecture. Skip to Main Content. The register that keeps track of the instructions in the program stored in memory is. A) Control register B).
I assume you mean What is a register in computer system architecture? A register is just a collection of bits. Nothing more. It is frequently manipulated as a unit - copied from place to place, used in arithmetic, used for temporary storage, use.. A processor (in other words, the CPU) register is a special high speed memory location to read, write and operate on data. It is not cache memory, which is used to make main memory operations effectively quicker. See Why don't modern computers use.. • The clock doesn't arrive at all registers at the same time • Skew is the difference between two clock edges • Examine the worst case to guarantee that the dynamic discipline is not violated for any register - many registers in a system
Registers. Usually, the register is a static RAM or SRAM in the processor of the computer which is used for holding the data word which is typically 64 or 128 bits. The program counter register is the most important as well as found in all the processors. Most of the processors use a status word register as well as an accumulator For example it forces a register to be 0 on RISC architectures without a hardware zero register. x86 register names on it are also consistent across 16, 32 and 64-bit x86 architectures with operand size indicated by mnemonic suffix. That means ax can be a 16, 32 or 64-bit register depending on the instruction suffix. If you're curious about it rea Instruction set architecture In a CPU we distinguish between I Instruction set architecture, that is externally visible aspects like the supported data types (e.g. 32 bit Ints, 80 bit floats etc), instructions, number and kinds of registers, addressing modes, memory architecture, interrupt etc. This is what the programmer uses to get the CPU. instructions and register sets combined with machine level programming, to provide the maximum in technologies were very immature. These facts drove computer architectures, which used very simple In the earliest computer systems, both density and speed were quite modest; and software are plotted over time. Source; IBM and
Registers Memory Arithmetic operations BTW: We're through with Y86 for a while, and starting the x86. We'll come back to the Y86 later for pipelining. CS429 Slideset 7: 2 Instruction Set Architecture II Intel x86 Processors x86 processors totally dominate the laptop/desktop/server market. Evolutionary Design Starting in 1978 with 808 Computer architectures represent the means of interconnectivity for a computer's hardware components as well as the mode of data transfer and processing exhibited. Different computer architecture configurations have been developed to speed up the movement of data, allowing for increased data processing Von Neumann architecture provides the basis for the majority of the computers we use today. The fetch-decode-execute cycle describes how a processor functions A common way to divide computer architectures is into Complex Instruction Set Computer (CISC) and Reduced Instruction Set Computer (RISC). Note in the first example, we have explicitly loaded values into registers, performed an addition and stored the result value held in another register back to memory In this tutorial you will learn about Computer Architecture, various Instruction Codes, Storage units, Interrupts and Input/Output devices or channels
Computer Architecture and the Fetch-Execute Cycle 4.3.1 Von Neumann Architecture John Von Neumann introduced the idea of the stored program. In order to do this, the processor has to use some special registers. These are Register Meaning PC Program Counter CIR Current Instruction Register MAR Memory Address Register This page contains pdf note of Computer Architecture. You can read in site or download for offline purpose
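The following Python sketch walks through a toy fetch-decode-execute loop to show how the PC, MAR, MBR, CIR and an accumulator cooperate. The three opcodes (LOAD, ADD, HALT), the memory layout and the register set are invented for illustration only and do not correspond to any real instruction set.

```python
# Toy fetch-decode-execute loop showing PC, MAR, MBR, CIR and an accumulator.
# The opcodes and memory layout are invented purely for illustration.

memory = [
    ("LOAD", 4),   # ACC <- memory[4]
    ("ADD", 5),    # ACC <- ACC + memory[5]
    ("HALT", 0),
    None,          # unused word
    40,            # data word at address 4
    2,             # data word at address 5
]

PC, ACC = 0, 0
while True:
    MAR = PC                 # fetch: put the address of the next instruction in MAR
    MBR = memory[MAR]        # the instruction arrives via the buffer register
    CIR = MBR                # the current instruction register now holds it
    PC += 1                  # PC points at the following instruction

    opcode, operand = CIR    # decode
    if opcode == "LOAD":     # execute
        MAR, MBR = operand, memory[operand]
        ACC = MBR
    elif opcode == "ADD":
        MAR, MBR = operand, memory[operand]
        ACC += MBR
    elif opcode == "HALT":
        break

print(ACC)  # prints 42
```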
A register, then, is a temporary storage area built into the CPU. Some registers are used internally and cannot be accessed outside the processor; the instruction register (IR), memory buffer register (MBR), memory data register (MDR) and memory address register (MAR) are examples. Others are user-accessible, and most modern CPU architectures include both kinds. Because writes to a register file are edge-triggered, a design can legally read and write the same register within one clock cycle: the read returns the value written in an earlier cycle, while the newly written value becomes available in a subsequent cycle.
The user-accessible registers are an essential part of the ISA, visible to the hardware and to the programmer, and they provide high-speed storage for operands. For example, if the variables a, b and c are held in registers 8, 9 and 10 of a MIPS machine, the statement a = b + c becomes simply add $8,$9,$10. Most computers in this style make around 32 registers visible to the programmer, and in a bus-based implementation the components that use them communicate over a shared 32-bit bus. Registers also appear in supporting roles: segment registers can provide address-space protection without explicit address-space identifiers, since two processes that hold the same identifier in their segment registers share that virtual segment by definition, and nearly all modern microprocessors contain hardware counters, special-purpose registers that record counts of hardware-related activity for performance monitoring. Not every design follows the von Neumann pattern, either: the Harvard architecture, named after the Harvard Mark I, keeps instructions and data in separate stores (the Mark I read its instructions from punched tape and held its data in electromechanical counters).
Although the term sounds complicated, computer architecture is simply the set of rules describing how a computer's hardware and software are joined together and interact to make the computer work. It covers both the behavior and the structure of the machine as seen by the user, including how the hardware components are interconnected and how data moves between them.
Protecting and restoring our region’s diverse and vulnerable ecosystems helps to increase the health of our waterways, biodiversity, and communities.
There are at least 45 distinct ecosystem types in the Wellington region. These range from alpine tussock lands and native forests, to wetlands, estuaries and floodplains, and the coastal rocky cliffs and sandy beaches.
Some ecosystem types are particularly at risk, such as coastal dunes and wetlands.
About our ecosystems
The climate, soils and landforms of our region gave rise to the diverse range of forests that once covered 782,000 ha of the region.
Waterways in our region are interconnected. Small streams and powerful rivers are all important to the health of our freshwater, environment, and our communities.
In the Wellington region, we have a number of different types of wetlands which are home to different ecosystems.
Sand dunes are important natural areas, not only for their ecological significance but also because they protect our beaches and coastal areas from erosion. |
Click on the image above to watch a video about the differences and similarities of Fables and Fairy Tales.
Fables are stories that are passed down, with a good lesson or moral to be learned, and are about animals, plants, or forces of nature that are humanlike. Fairy tales are stories that often involve magical characters, have good and evil characters, and generally start with “once upon a time.”
Click on the video below to hear a story. After you’ve listened to the story, scroll down to answer a couple of questions.
Answer the following questions in the comments below (don’t forget to include your first name and last initial and your school):
- Was this a fable or a fairy tale?
- How do you know? Use reasons from the 1st video. |
A wave is a repeating pattern of motion that transfers energy from place to place. When waves come in contact with an object, a few things can happen. The wave can be transmitted, which means it passes through the object. It can be absorbed, in which case its energy is converted to thermal energy, or it can be reflected (sent off in a new direction).
To better understand wave reflection, absorption & transmittance…
LET’S BREAK IT DOWN!
Waves have properties.
A wave is a repeating pattern of motion that transfers energy from place to place. All waves have properties such as amplitude, wavelength, and frequency, which can be used to describe the wave. The amplitude of a wave determines how loud a sound is and how bright a light is. Wavelength is the length of one complete wave, and frequency is how many waves occur in one second. Together, the wavelength and frequency of a light wave determine its color. The wavelength of visible light is measured in nanometers: longer wavelengths appear toward the red end of the spectrum, and shorter wavelengths toward the violet end.
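To see how these properties relate numerically, the short Python sketch below uses the standard relationship speed = frequency × wavelength to convert between the two; the wave speeds are rounded textbook values and the example frequencies are arbitrary choices for illustration.

```python
# Relating wave properties: speed = frequency * wavelength.
# Speeds are rounded textbook values; the example frequencies are arbitrary.

SPEED_OF_LIGHT = 3.0e8   # meters per second (approximate, in a vacuum)
SPEED_OF_SOUND = 343.0   # meters per second (approximate, in air at about 20 C)

def wavelength(speed, frequency_hz):
    """Return the wavelength in meters for a wave with the given speed and frequency."""
    return speed / frequency_hz

# Red light near 4.3e14 Hz has a wavelength of roughly 700 nanometers.
red_nm = wavelength(SPEED_OF_LIGHT, 4.3e14) * 1e9
# A 440 Hz musical note traveling through air has a wavelength under a meter.
note_m = wavelength(SPEED_OF_SOUND, 440.0)

print(f"red light: about {red_nm:.0f} nm;  440 Hz sound in air: about {note_m:.2f} m")
```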
Sound waves travel through matter.
Sound waves need matter to travel through, but light waves do not. When sound waves travel through matter, they can be absorbed, reflected, or transmitted, depending on the waves' properties. Higher-amplitude sound waves are more likely to be transmitted through matter, while lower-amplitude sound waves are more likely to be reflected or absorbed; a wave that is absorbed produces no echo. Sound can travel through air, water, and solids. The type of medium the sound is traveling through, along with the properties of the sound wave, determines whether it is absorbed, reflected, or transmitted.
To the ancient Egyptians, iron was known as the “metal of heaven,” says the University College London. “In the hieroglyphic language of the ancient Egyptians it was pronounced ba-en-pet, meaning either stone or metal of Heaven.” For thousands of years before they learned to smelt iron ore, Egyptians were crafting beads and trinkets from it, harvesting the metal from fallen meteorites. The rarity of the metal gave it a special place in Egyptian society, says Nature: “Iron was very strongly associated with royalty and power.”
For the past century, researchers have been locked in debate over whether the iron in a set of 5,000-year-old beads, dating back to ancient Egypt, came from a meteorite or was crafted as the byproduct of accidental smelting. A new study, says Nature, has confirmed that the iron beads hail from the heavens. The beads contain high concentrations of nickel and show a distinct crystal structure known as a Widmanstätten pattern, says New Scientist, both evidence that the iron came from a meteorite.
According to Cardiff University’s Paul Nicholson in his 2000 book, Ancient Egyptian Materials and Technology, “the availability of iron on anything but a fortuitous or sporadic scale had to await the development of iron smelting.”
The relatively late adoption of this technology owes more to the complexities of the process than to a lack of supplies, since iron ores are actually abundant worldwide. Iron production requires temperatures of around 1,100 to 1,150 °C.
Iron smelting didn’t appear in Egypt until the 6th century B.C., 2700 years after the estimated date of the iron beads.
NCASE Resource Library
This toolkit provides parents and other caregivers ways to support their children’s learning outside of the classroom. It was updated March 13, 2020. It offers a range of different types of free online resources that can help children continue to build critical literacy skills at home or in group care, especially while schools are closed. Targeted ages vary by resource.
With an emphasis on equity and inclusion, this white paper outlines promising practices for engaging families in STEM as a means of increasing youth participation and retention in STEM pathways. Parents play a critical role in engaging youth in STEM activities and careers, especially for girls, youth of color, low-income youth, and youth with disabilities.
This issue brief identifies high impact strategies for actively co-creating opportunities for family engagement to support learning across the age continuum, both in school and during out-of-school time.
This toolkit summarizes best practice tools and strategies for fostering family engagement in Out-of-School Time (OST) programs. Developed by BOSTnet years ago based on a four-year initiative aimed at improving youth outcomes through family involvement, this classic tool is still relevant today.
Child Care Aware created this web-based school-age program checklist to help families select a high quality school-age program. It has questions on topics like health and safety, indoor and outdoor environment, caregiver-child interactions, staff qualifications, and parent partnerships. There is a link to print out the five-page checklist. |
Hematology is the scientific study of blood. Blood is a circulating tissue composed of fluid plasma and cells.
There are three kinds of blood cells: red blood cells (erythrocytes), white blood cells (leucocytes) and platelets (thrombocytes).
Each kind of blood cell has its own specific role in the human body.
- White blood cells defend the body against bacterial, parasitic and viral infections.
- Red blood cells transport oxygen from the lungs to the cells of the body and carbon dioxide from the cells back to the lungs. They also carry nutrients from the gastrointestinal tract to the cells.
- Platelets are responsible for hemostasis, preventing blood loss from hemorrhage when blood vessels are injured.
In the hematology lab we teach students how to analyse the structure (morphology) of blood cells and how to count each kind of blood cell, since every cell type has a normal (reference) range in healthy people.
If the morphology or number of blood cells changes, the underlying cause of disease can be investigated in light of the functions of those blood cells.
In the immunohematology lab, we teach students how to determine a person's ABO blood group (A, B, AB or O) and rhesus type (positive or negative).
Students at all levels (I, II, III and IV) of the Biomedical Laboratory Sciences department, divided into groups of 10-15 students, use this lab to put into practice what they have learnt theoretically in class in the following modules:
- Level II: Haematology II and Immunohaematology I
- Level III: Haematology III and Immunohaematology II
- Level IV: Haematology IV and Immunohaematology III
Main tasks or tests taught and performed in Hematology and Immunohematology lab
- Complete (full) blood count using the Humacount automated analyzer
- Manual red and white blood cell counts using a hemocytometer (improved Neubauer counting chamber), with Türk and Hayem solutions as diluting reagents; a worked example of the counting calculation is sketched after this list
- ESR (Erythrocytes Sedimentation Rate)
- Preparation of thin blood smears and staining of the smears
- Differential leucocyte count using a microscope, slides and May-Grünwald stain
- ABO blood grouping using the Beth-Vincent method with antisera
- Hematocrit measurement using Humax machine
- ELISA: in our lab, ELISA (enzyme-linked immunosorbent assay) is performed to help diagnose disease. We run ELISA tests for hepatitis C, rubella, free prostate-specific antigen (fPSA) and hormones such as cortisol, T3, T4 and TSH.
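As an illustration of the manual counting item above, the Python sketch below applies the standard hemocytometer formula, cells per microliter = cells counted × dilution factor ÷ volume counted (in µL). The particular counts, the 1:20 dilution and the four-square counting area are made-up example values; a real count must follow the laboratory's own SOP.

```python
# Manual WBC count with an improved Neubauer chamber (illustrative values only).
# Formula: cells per microliter = cells counted * dilution factor / volume counted (uL).

cells_counted = 120        # WBCs seen in the four large corner squares (example value)
dilution_factor = 20       # blood diluted 1:20 in Turk solution (example value)
squares_counted = 4
square_volume_ul = 0.1     # each large square is 1 mm x 1 mm x 0.1 mm deep = 0.1 uL

volume_counted_ul = squares_counted * square_volume_ul            # 0.4 uL
wbc_per_ul = cells_counted * dilution_factor / volume_counted_ul  # 6000 cells/uL

print(f"WBC count: {wbc_per_ul:.0f} cells/uL = {wbc_per_ul / 1000:.1f} x 10^9 per liter")
```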
Purposes or objectives of the Hematology and Immunohematology lab
- To teach students the principles, procedures and interpretation of results for all tests performed in the hematology and immunohematology lab.
- To teach students how to put into practice what they have learnt theoretically in class by performing hematological tests (e.g., CBC, differential leucocyte count, blood grouping, ESR, HCT).
- To teach and explain standard operating procedures and quality control in the hematology laboratory.
- To familiarize students with the hematology laboratory working environment and its biosafety and biosecurity procedures.
- To teach students how to prepare and store reagents and how to stain blood films with routine and special stains.
- To teach students how to use personal protective equipment (PPE) for self-protection, and how to use the first aid kit and eye wash in case of an accident.
- To teach students how to manage all laboratory waste.
Chapter 4. Quicksort
In this chapter
- You learn about divide-and-conquer. Sometimes you’ll come across a problem that can’t be solved by any algorithm you’ve learned. When a good algorithmist comes across such a problem, they don’t just give up. They have a toolbox full of techniques they use on the problem, trying to come up with a solution. Divide-and-conquer is the first general technique you learn.
- You learn about quicksort, an elegant sorting algorithm that’s often used in practice. Quicksort uses divide-and-conquer.
You learned all about recursion in the last chapter. This chapter focuses on using your new skill to solve problems. We’ll explore divide and conquer (D&C), a well-known recursive technique for solving problems.
This chapter really gets into the meat of algorithms. After all, an algorithm isn’t very useful if it can only solve one type of problem. Instead, D&C gives you a new way to think about solving problems. D&C is another tool in your toolbox. When you get a new problem, you don’t have to be stumped. Instead, you can ask, “Can I solve this if I use divide and conquer?”
At the end of the chapter, you’ll learn your first major D&C algorithm: quicksort. Quicksort is a sorting algorithm, and a much faster one than selection sort (which you learned in chapter 2). It’s a good example of elegant code. |
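As a preview of where the chapter is headed, here is a minimal quicksort sketch in Python. It is an illustrative version that picks the first element as the pivot and recurses on the two partitions; it is not necessarily the exact code developed later in the chapter.

```python
def quicksort(array):
    # Base case: lists with zero or one element are already sorted.
    if len(array) < 2:
        return array
    # Divide: choose a pivot and split the rest into two smaller sub-problems.
    pivot = array[0]
    less = [x for x in array[1:] if x <= pivot]
    greater = [x for x in array[1:] if x > pivot]
    # Conquer: sort each part recursively, then combine the results around the pivot.
    return quicksort(less) + [pivot] + quicksort(greater)


print(quicksort([10, 5, 2, 3]))  # prints [2, 3, 5, 10]
```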
How do you play visual memory games?
The idea is that you turn the cards face down, mix them up. Then each player takes a turn turning over two cards at a time. When you get a matching pair, you can keep them. The memory aspect comes in when you remember where the matching card is that someone turned over earlier.
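For readers who like to tinker, here is a small text-based sketch of that matching game in Python; the four-pair board, the letter symbols and the single-player turn structure are arbitrary simplifications for the example.

```python
import random

# Tiny text version of the matching ("concentration") game described above.
# Board size, symbols and single-player play are arbitrary simplifications.

symbols = ["A", "B", "C", "D"] * 2      # four matching pairs, eight cards
random.shuffle(symbols)                  # mix the cards up, face down
face_up = [False] * len(symbols)

while not all(face_up):
    board = [s if up else "?" for s, up in zip(symbols, face_up)]
    print(" ".join(board))
    first = int(input("First card (0-7): "))
    second = int(input("Second card (0-7): "))
    if first != second and symbols[first] == symbols[second]:
        face_up[first] = face_up[second] = True   # a matching pair stays face up
        print("Match! You keep the pair.")
    else:
        print(f"No match: {symbols[first]} and {symbols[second]} go face down again.")

print("All pairs found!")
```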
How can I improve my visual memory game?
The Following Activities Will Promote Visual Memory Skills:
- Copy patterns using various media, including beads, pegs, blocks, letters or numbers. …
- Play memory games. …
- Play “I-Spy” with your child. …
- Play the game “What’s Different.” Place three objects on the table.
What is visual memory Game?
A fun puzzle to build visual memory: players work to remember where colored circles are located on a grid of nine blank circles. They are given five seconds to memorize where the circles are, then are asked to drag the circles to the positions where they saw them. Level one is easy.
What is an example of visual memory?
Visual memory skills in the academic realm include reading comprehension, spelling, and recognition of letters and numbers. Visual memory skills helpful for everyday use include recalling where you put the car keys or being able to give directions to a specific place.
What is visual memory occupational therapy?
Visual memory focuses on one’s ability to recall visual information that has been seen. Visual memory is a critical factor in reading and writing. When a child is writing a word, he must recall the formation of parts of the letter from memory.
What are visual memory skills?
Visual Memory is the skill that requires a student to remember or recall items, numbers, objects, letters, figures, and/or words which have been previously seen.
How can I help my child with poor visual memory?
You can help your child improve working memory by building simple strategies into everyday life.
- Work on visualization skills. …
- Have your child teach you. …
- Try games that use visual memory. …
- Play cards. …
- Encourage active reading. …
- Chunk information into smaller bites. …
- Make it multisensory. …
- Help make connections.
What causes visual memory loss?
Visual memory-deficit amnesia is a new classification of amnesia that, unlike most forms of amnesia, is caused by a loss of posterior neocortex rather than damage to the medial temporal or diencephalic region.
How can I improve my auditory and visual memory?
Improving Auditory and Visual Memory
- Reciting: Recite action rhymes, songs and jingles. …
- Recall simple sequences: Simple sequences from personal experiences and events can be shared with the group.
- Recall verbal messages: Listen to a variety of verbal messages. …
- Instructions: Listen to the instructions given.
How does visual memory work?
Attention and working memory impose capacity limitations on cognitive processing. Visual working memory allows us to hold a visual picture in our mind for a few seconds after it disappears from our sight. During this span of a few seconds, only a small subset of the visual information we take in is transferred into visual working memory.
Why is visual memory important?
The ability to remember what we see is important to process short-term memory into long-term memory. Visual memory is necessary for most academic tasks, including reading, spelling, reading comprehension, math and copying from a board to a notebook. When a child has poor visual memory, school can become difficult.
Why is Kim’s game called Kim’s game?
Origin of Kim’s Game Kim’s Game is a game played by Scouts. The game develops a person’s capacity to observe and remember details. The name is derived from Rudyard Kipling’s 1901 novel Kim, in which the hero Kim plays the game during his training as a spy.
What are visual discrimination activities?
Visual discrimination activities include those related to identifying opposites, sorting cards, doing puzzles, and ordering blocks. Matching cards, taking nature walks, and picking out an image or object that is not like the others in a group are also visual discrimination activities.
What are the visual perceptual skills?
Visual perceptual skills are the brain's ability to make sense of what the eyes see. Some examples of activities to encourage visual perception include:
- Paper mazes and marble mazes.
- Connect the dot activities.
- Hidden pictures.
- Copying pictures or forms. …
- Wooden blocks.
- Matching and sorting.
How do you work on visual attention?
What activities can help improve visual perception?
- Hidden pictures games in books such as “Where’s Wally”.
- Picture drawing: Practice completing partially drawn pictures.
- Dot-to-dot worksheets or puzzles.
- Review work: Encourage your child to identify mistakes in written material. |