Genes, the units of heredity in living organisms, are encoded in an organism's genetic material (DNA). They exert a central influence on the organism's physical characteristics and are passed on to succeeding generations through reproduction. Genetic material can also be passed between unrelated individuals by viruses or through the process of transfection used in genetic engineering.

Common usage of the word "gene" reflects its meaning in molecular biology, namely the segments of DNA that cells transcribe into either RNA that is translated into protein (DNA => RNA => protein) or RNA used directly (DNA => RNA). The Sequence Ontology project, a consortium of several centers of genomic studies, defines a gene as "a locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions, and/or other functional sequence regions." The definition reflects the full complexity that has come to be associated with the term.

Genes encode the information necessary for constructing the multitude of proteins and RNA molecules needed to maintain an organism's existence, growth, activity, and reproduction. Each gene that serves as the first step in protein formation is a region of DNA comprising some sections (exons) that code for protein, others (introns) with no apparent function, and still others that define the beginning and end of the gene or the conditions under which the gene will or will not be expressed. Although the human genome comprises roughly 25,000 protein-coding genes, each human cell has the potential of making about 100,000 different proteins. Further complexity lies in the additional 10,000 or so genes used for making RNA that directly serves such cellular functions as structure, catalysis, and regulation of gene expression. The proteins and RNAs share the tasks of maintaining the cell, one of which is the continual fine-tuning of exactly which genes are expressed according to the cell's function and its continually changing environment. The ongoing discovery of so much functional RNA in the cell, much of it related to the expression of genes, is taken by some as a sign that RNA may deserve co-equal billing with DNA in terms of overall contribution to cellular function.

Genes are of central importance to the physical character of a living organism: a person's eye color, the breed of a dog, the sex of a horse. Mouse DNA yields a mouse, not an elephant. However, the impact of genes is sometimes extrapolated to the view that genes control everything about human lives and destiny. This is the concept of genetic determinism, whereby human behavior, intelligence, emotions, attitudes, and health are fixed by genetic makeup and thus unchangeable. Such a misconception has at times been used as a basis for explaining away racial prejudices, addictions, and criminal behavior, and for turning to genetic engineering as the ultimate answer to social problems. The more balanced and generally recognized view is that biological contributions to solving social problems must be sought through a biology that takes into account the influence of social and cultural factors on human physical development and behavior.
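The DNA => RNA => protein flow described above can be made concrete with a minimal sketch in Python. The codon table here is a hand-picked fragment of the standard genetic code (a full implementation would carry all 64 codons), and the sample sequence is invented for illustration:

```python
# Fragment of the standard genetic code: codon -> amino acid (or STOP).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "GAA": "Glu", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    """Transcription: the mRNA mirrors the coding strand, with U replacing T."""
    return dna_coding_strand.upper().replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Translation: read the mRNA three bases (one codon) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":  # stop codons end the protein
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("ATGTTTGGCGAAAAATGA")  # invented example gene
print(translate(mrna))                   # ['Met', 'Phe', 'Gly', 'Glu', 'Lys']
```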
In molecular biology, a gene is considered to comprise both a coding sequence (the region of DNA or, in some viruses, RNA that determines the structure of a protein) and a regulatory sequence (the region of DNA that controls when and where the protein will be produced). The genetic code determines how the coding sequence is converted into a protein sequence via transcription and translation. The genetic code is essentially the same for all known life, from bacteria to humans.

Through the proteins they encode, genes govern the cells in which they reside. In multicellular organisms, much of the development of the individual, as well as the day-to-day functioning of its cells, is tied to genes. The genes' protein products fulfill roles ranging from mechanical support of the cell structure to the transportation and manufacture of other molecules and the regulation of other proteins' activities.

Rare, spontaneous errors (for example, in DNA replication) may give rise to mutations in the sequence of a gene. If these mutations occur in germ-line cells, they may be passed on to the organism's offspring. Once propagated to the next generation, a mutation may lead to variation within the population of a species. Variants of a single gene are known as alleles, and differences between alleles may give rise to differences in traits, for example, eye color. A gene's most common allele is called the wild-type allele, and rare alleles are called mutants.

The genotype of an individual organism is its specific genetic makeup (its specific genome). The phenotype of an individual organism is determined to some extent by the genotype, that is, by the identity of the alleles the individual carries at one or more positions on the chromosomes. A phenotype is either the organism's total physical appearance and constitution or a specific manifestation of a trait, such as size, eye color, or behavior, that varies between individuals. Many phenotypes are determined by multiple genes and influenced by environmental factors.

In most cases, RNA is an intermediate product in the process of manufacturing proteins from genes. For some gene sequences, however, the RNA molecules are the actual functional agents. For example, RNAs known as ribozymes are capable of enzymatic function, and small interfering RNAs have a regulatory role. The DNA sequences from which such RNAs are transcribed are known as genes for non-coding RNA, or RNA genes.

Most living organisms carry their genes, and transmit them to offspring, as DNA, but some viruses carry only RNA. Because these viruses use RNA, their cellular hosts may synthesize the viral proteins as soon as they are infected, without the delay of waiting for transcription. On the other hand, retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized.

In common speech, "gene" is often used to refer to the hereditary cause of a trait, disease, or condition, as in "the gene for obesity." Speaking more precisely, a biologist might refer to an allele or a mutation that "has been implicated in" or "is associated with" obesity. This is because biologists know that many factors other than genes decide whether a person is obese: eating habits, exercise, prenatal environment, upbringing, culture, and the availability of food, for example. Moreover, it is highly unlikely that variations within a single gene, or single genetic locus, would fully determine an individual's genetic predisposition for obesity.
Rather, the norm with regard to many, and perhaps most, "complex" or "multifactorial" traits is that they reflect the combined effects of several factors, including inheritance, interplay between genes and environment, and the combined influence of many genes. The term phenotype refers to the physical characteristics that result from the interplay of all of these factors.

Estimates of the number of genes in an organism are somewhat controversial because they depend on the discovery of genes, and no techniques currently exist to prove that a DNA sequence contains no gene. (In early genetics, genes could be identified only if there were mutations, or alleles.) Nonetheless, estimates are made based on current knowledge. Typical figures for a human, mouse, or rat are roughly 25,000 genes in a genome of about 3×10^9 base pairs.

For each known human gene, the HUGO Gene Nomenclature Committee (HGNC) approves a gene name and symbol (short-form abbreviation) and stores all approved symbols in the HGNC Database. Each symbol is unique, and each gene is given only one symbol. This protocol greatly facilitates clear and precise gene identification in communications and in electronic data retrieval from publications. By convention, symbols for the different genes within a gene family all share a certain parallelism of construction. The symbols for human genes can also be applied to congruent genes in other species, such as the mouse.

The word "gene" was coined in 1909 by Danish botanist Wilhelm Johannsen for the fundamental physical and functional unit of heredity. It was derived from Hugo de Vries' term pangen, itself a derivative of the word pangenesis, which Darwin (1868) had coined. The word pangenesis is made from the Greek words pan (a prefix meaning "whole" or "encompassing") and genesis ("birth") or genos ("origin").

The existence of genes was first suggested by Gregor Mendel, who, in the 1860s, studied inheritance in pea plants and hypothesized a factor that conveys traits from parent to offspring. Although he did not use the term "gene," he explained his results in terms of inherited characteristics. Mendel was also the first to hypothesize independent assortment (the idea that alleles of different genes separate independently during meiosis), the distinction between dominant and recessive traits, the distinction between a heterozygote and a homozygote (an organism with different or the same alleles, respectively, of a certain gene on homologous chromosomes), and the difference between what would later be described as genotype (specific genetic make-up) and phenotype (physical manifestation of the genetic make-up).

In the early 1900s, Mendel's work received renewed attention from scientists. In 1910, Thomas Hunt Morgan showed that genes reside on specific chromosomes. He later showed that genes occupy specific locations on the chromosome. With this knowledge, Morgan and his students began the first chromosomal map of the fruit fly Drosophila. In 1928, Frederick Griffith showed that genes could be transferred. In what is now known as Griffith's experiment, injection into a mouse of a heat-killed deadly strain of bacteria together with a live harmless strain of the same species transferred genetic information from the dead strain to the harmless strain, killing the mouse. In 1941, George Wells Beadle and Edward Lawrie Tatum showed that mutations in genes caused errors in certain steps in metabolic pathways.
This showed that specific genes code for specific proteins, leading to the "one gene, one enzyme" hypothesis. Oswald Avery, Colin MacLeod, and Maclyn McCarty showed in 1944 that DNA holds the gene's information. In 1953, James D. Watson and Francis Crick demonstrated that the molecular structure of DNA is a double helix. Together, these discoveries established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses.

The term "gene" is shared by many disciplines, including classical genetics, molecular genetics, evolutionary biology, and population genetics. Because each discipline models the biology of life differently, the usage of the word varies between disciplines. It may refer to either material or conceptual entities.

Broadly defined, evolution is any heritable change in a population of organisms over time. As noted by Curtis and Barnes (1989), the changes in populations that are considered evolutionary are those that are inheritable via the genetic material from one generation to another. As such, evolution can also be defined in terms of allele frequency, alleles being alternative forms of a gene, such as an allele for blue eye color versus brown eye color. Two important and popular evolutionary theories that address the pattern and process of evolution are the theory of descent with modification and the theory of natural selection. The theory of descent with modification, or "theory of common descent," deals with the pattern of evolution and essentially postulates that all organisms have descended from common ancestors by a continuous process of branching. The theory of modification through natural selection, or "theory of natural selection," deals with mechanisms and causal relationships, and offers one explanation for how evolution might have occurred: the process by which evolution took place to arrive at the pattern.

According to the modern evolutionary synthesis, which integrated Charles Darwin's theory of evolution by natural selection with Gregor Mendel's theory of genetics as the basis for biological inheritance and with mathematical population genetics, evolution consists primarily of changes in the frequencies of alleles from one generation to another as a result of natural selection.

Natural selection has traditionally been viewed as acting on individual organisms, but it has also been seen as working on groups of organisms. An alternative model, the gene-centered view of evolution, sees natural selection as working at the level of genes. The gene-centered view of evolution, gene selection theory, or selfish gene theory holds that natural selection acts through differential survival of competing genes, increasing the frequency of those alleles whose phenotypic effects successfully promote their own propagation. According to this theory, adaptations are the phenotypic effects through which genes achieve their propagation. The view of the gene as the unit of selection was developed mainly in the books Adaptation and Natural Selection, by George C. Williams, and The Selfish Gene and The Extended Phenotype, both by Richard Dawkins. Essentially, this view notes that the genes in existence today are those that have reproduced successfully in the past. Often, many individual organisms share a gene; thus, the death of an individual need not mean the extinction of the gene.
Indeed, if the sacrifice of one individual enhances the survivability of other individuals carrying the same gene, the death of that individual may enhance the overall survival of the gene. This is the basis of the selfish gene view, popularized by Richard Dawkins. He points out in his book The Selfish Gene that, to be successful, genes need have no other "purpose" than to propagate themselves, even at the expense of their host organism's welfare. A human who behaved in such a way would be described as "selfish," although ironically a selfish gene may promote altruistic behaviors. According to Dawkins, the possibly disappointing answer to the question "what is the meaning of life?" may be "the survival and perpetuation of ribonucleic acids and their associated proteins." However, a number of prominent evolutionists, including Ernst Mayr and Stephen Jay Gould, who do recognize selection at levels other than the individual, nonetheless strongly reject the selfish gene theory. Mayr (2001) states that "the reductionist thesis that the gene is the object of selection" is "invalid." Gould (2002) calls the theory a "conceptual error" that sidetracked the profession, one that "inspired both a fervent following of a quasi-religious nature" and "strong opposition from many evolutionists."

A DNA molecule or strand comprises four kinds of sequentially linked nucleotides, which together constitute the genetic alphabet. A sequence of three consecutive nucleotides, called a codon, is the protein-coding vocabulary. The sequence of codons in a gene specifies the amino acid sequence of the protein it encodes. In most eukaryotic species, very little of the DNA in the genome actually encodes proteins, and the genes may be separated by vast sequences of so-called "junk DNA." Moreover, the genes are often fragmented internally by non-coding sequences called introns, which can be many times longer than the coding sequence. Introns are removed from the transcript by splicing shortly after transcription. In the primary molecular sense, however, they represent parts of a gene.

All the genes and intervening DNA together make up the genome of an organism, which in many species is divided among several chromosomes and typically present in two or more copies. The location (or locus) of a gene and the chromosome on which it is situated is, in a sense, arbitrary. Genes that appear together on the chromosomes of one species, such as humans, may appear on separate chromosomes in another species, such as mice. Two genes positioned near one another on a chromosome may encode proteins that figure in the same cellular process or in completely unrelated processes. As an example of the former, many of the genes involved in spermatogenesis reside together on the Y chromosome.

Many species carry more than one copy of their genome within each of their somatic cells. These organisms are called diploid if they have two copies, or polyploid if they have more than two. In such organisms, the copies are practically never identical. With respect to each gene, the copies that an individual possesses are liable to be distinct alleles, which may act synergistically or antagonistically to generate a trait or phenotype. The ways that gene copies interact are described by dominance relationships. For various reasons, the relationship between a DNA strand and a phenotypic trait is not direct. The same DNA strand in two different individuals may result in different traits because of the effect of other DNA strands or of the environment.
This complexity helps explain the different meanings of "gene": in one sense the word names a material entity, a particular DNA sequence; in another it names a conceptual unit of heredity inferred from inheritance patterns. Just as there are many factors influencing the expression of a particular DNA strand, there are many ways to have genetic mutations.

For example, natural variations within regulatory sequences appear to underlie many of the heritable characteristics seen in organisms. The influence of such variations on the trajectory of evolution may be as large as, or larger than, that of variation in sequences that encode proteins. Thus, though regulatory elements are often distinguished from genes in molecular biology, in effect they satisfy the shared and historical sense of the word. Indeed, a breeder or geneticist, in following the inheritance pattern of a trait, has no immediate way of knowing whether the pattern arises from coding sequences or regulatory sequences. Typically, he or she will simply attribute it to variations within a gene.

Errors during DNA replication may lead to the duplication of a gene, and the duplicates may diverge over time. Though the two sequences may remain the same or be only slightly altered, they are typically regarded as separate genes (i.e., not as alleles of the same gene). The same is true when duplicate sequences appear in different species. Yet, though the alleles of a gene may differ in sequence, they are regarded as a single gene (occupying a single locus).

When the Human Genome Project began in 1990, scientists estimated that they would find roughly 100,000 to 150,000 genes, largely because of the number of different kinds of proteins found in the body and the assumption that one gene coded for one protein. By the end of the project in 2003, the estimate was 20,000 to 25,000 protein-coding genes, which was taken to mean that many genes must code for two, three, four, or perhaps more different kinds of proteins. This marked the beginning of a shift from the sense that DNA and the genes carried on it exercise singular influence and control in shaping the physical potentials of an individual. If one gene makes more than one protein, then the mechanism deciding which protein is produced from a given gene would be critical to the shaping of the individual's physical potentials. Of comparable importance to the question of the centrality of the genes is that of how a given human cell selects the subset of the 20,000 to 25,000 genes that will ultimately yield the 10,000 or so proteins the cell needs out of the roughly 100,000 proteins available to it.

After several decades in which DNA and the genes carried on it have been widely treated as the "stars" of the cellular world, new candidates are challenging for co-equal or perhaps even primary recognition in terms of central importance to cellular function. One, tied to the RNA World model of the origins of life, notes the growing number of identified types of non-coding functional RNA, many of which play a role in the regulation of gene expression. In this view, DNA is a passive, unchanging repository of information, whereas RNA is the active information agent, even influencing which segments of DNA are expressed. This view suggests that RNA may deserve at least a co-equal place with DNA as a factor influencing an organism's physiology and psychology. The second view shifts the focus completely away from the cell nucleus, DNA, and RNA.
It notes that cells alter the selection of genes they express according to the environmental influences they experience, and further that cells experience the environment through the mediation of the protective cell membrane and the thousands of proteins floating in it. With membrane proteins being sensitive to both magnetic and electromagnetic signals, the cells become responsive partners to their immediate environment (epigenetic factors), which includes influences from the thoughts and emotions of the human mind responding to the environment (such as the adrenaline rush when a person wakes up in a burning house). In this view, mind becomes an intermediate third actor in the traditional dichotomy of nature (genes) versus nurture (environment).
ENGL 237 Writing Fiction I • 5 Cr.

Focuses on the craft of the short story. Covers plot, scene, character, dialogue, voice, and tone. Students write and critique short fiction and read the work of established short story writers. Suitable for beginning or advanced writers. Recommended: ENGL& 101 placement or higher. After completing this class, students should be able to:
- Distinguish between plot and story
- Show, rather than tell, by using specific details, naming nouns, and strong, active verbs
- Develop scenes
- Create believable characters through description, action, scene, and dialogue
- Establish and sustain a point of view
- Create and sustain tension
- Control sentence structure, length, and word choice to create a particular tone and mood
- Critique, revise, and edit works in progress
The history of Asia Minor began with the Hittite Empire around 1700 B.C. and continued with the Neo-Hittite kingdoms in 1200 B.C., the Pergamon Empire in 262 B.C., and the Roman Empire in 25 B.C. The Apostle Paul preached Christianity throughout Asia Minor from about A.D. 42 to 62. The area was a part of the Byzantine and later the Ottoman empires until World War I.

The Hatti settled Asia Minor, or Anatolia, around 2500 B.C. and built the city of Hattusa. The Hittites invaded Asia Minor around 1700 B.C., absorbing the local customs and calling the area Assuwa. The resulting Hittite Empire thrived until 1200 B.C. and covered the entire area, including the city-states of Phrygia, Galatia, Mysia, Lydia, and Caria. During the Hittite Empire, the Trojan War occurred, although ancient writers disagree on the exact date: Duris of Samos claimed it occurred during 1334 B.C., whereas Herodotus claimed 1250 B.C. and Eratosthenes 1184 B.C.

The Phrygians attacked the Hittites in 1200 B.C., followed by Kaskan attacks in 1190 B.C., which weakened the empire enough that it dissolved into individual city-states until the Assyrians completely overran Asia Minor by 800 B.C. The Asia Minor city-state of Lydia expanded through much of the region in 687 B.C. and held control until the Persian Empire invaded in 547 B.C. Alexander the Great overcame the Persians and took over Asia Minor in 333 B.C., leaving a legacy that remained until the Roman Empire conquered the territory in 133 B.C. After Rome fell in A.D. 476, the Byzantine Empire took over, followed by the Seljuq Turks in A.D. 1068 and the Ottoman Empire in 1299. Asia Minor remained a part of the Ottoman Empire until the empire's dissolution after World War I, when the territory became the Republic of Turkey.
This course will help the student understand that speech is made up of a series of individual sounds, or phonemes, and that these individual sounds can be manipulated. A phonemically aware child can segment and blend strings of isolated sounds to form words, recognize and manipulate larger units of sound, and shift attention from the content of speech to its form. This course will provide direct instruction in phonemic awareness to help children decode new words and remember how to read familiar words. Growth and improvement in phonemic awareness can be facilitated through instruction and practice in tasks such as phoneme addition, which requires identifying the word produced when a phoneme is added. For example: "Say row with /g/ at the beginning." (grow)
Interactions among microbes suggest oceans could absorb less carbon than expected. It sounds like a cryptic fortune cookie: He who adds carbon to the ocean will find that it has less. Adding carbon compounds to ocean water can sometimes affect microbe communities in ways that result in less stored carbon dioxide than has been assumed, a new study published online August 20 in Nature suggests. The oceans’ carbon storage is an important factor in predicting the severity of climate change. In designing computer simulations of carbon dioxide and its effects on global climate, scientists assume the ocean can absorb a certain amount of the greenhouse gas. These assumptions are based on the idea that other nutrients such as nitrogen determine how much CO2 phytoplankton — the microscopic “plants” of the sea — will absorb from the atmosphere. The new research, while still preliminary, suggests that CO2 absorption by the oceans is much more complex.
The data show that early notions of how star clusters form cannot be correct. The simplest idea is that stars form into clusters when a giant cloud of gas and dust condenses: the center of the cloud pulls in material from its surroundings until it becomes dense enough to trigger star formation. This process occurs in the center of the cloud first, implying that the stars in the middle of the cluster form first and, therefore, are the oldest.

However, the latest data from Chandra suggest something else is happening. Researchers studied two clusters where Sun-like stars currently are forming — NGC 2024, located in the center of the Flame Nebula, and the Orion Nebula Cluster. From this study, they discovered that the stars on the outskirts of the clusters actually are the oldest. “Our findings are counterintuitive,” said Konstantin Getman of Penn State University in University Park. “It means we need to think harder and come up with more ideas of how stars like our Sun are formed.”

Getman and his colleagues developed a new two-step approach that led to this discovery. First, they used Chandra data on the brightness of the stars in X-rays to determine their masses. Then, they determined how bright these stars were in infrared light using ground-based telescopes and data from NASA’s Spitzer Space Telescope. By combining this information with theoretical models, they could estimate the ages of the stars throughout the two clusters.

The results were contrary to what the basic model predicted. At the center of NGC 2024, the stars were about 200,000 years old, while those on the outskirts were about 1.5 million years old. In the Orion Nebula, star ages ranged from 1.2 million years in the middle of the cluster to almost 2 million years near the edges. “A key conclusion from our study is [that] we can reject the basic model where clusters form from the inside out,” said Eric Feigelson, also of Penn State. “So we need to consider more complex models that are now emerging from star formation studies.”

Explanations for the new findings can be grouped into three broad notions. The first is that star formation continues in the inner regions because the gas there is denser — contains more material from which to build stars — than the more diffuse gas of the outer regions. Over time, if the density falls below the threshold at which it can collapse to form stars, star formation will cease in the outer regions, whereas stars will continue to form in the inner regions, leading to a concentration of younger stars there. Another idea is that old stars have had more time to drift away from the center of the cluster, or to be kicked outward by interactions with other stars. One final notion is that the observations could be explained if young stars are formed in massive filaments of gas that fall toward the center of the cluster.

Previous studies of the Orion Nebula Cluster revealed hints of this reversed age spread, but those earlier efforts were based on limited or biased star samples. This latest research provides the first evidence of such age differences in the Flame Nebula.
Imagine a world like ours, only 6.5 light-years away, but filled with life forms unlike anything found on Earth. Take a simulated trip in the near future, where astronomers and biologists alike admire the potential of Darwin IV, a nearby planet with two suns, 60 percent of Earth's gravity, and an atmosphere capable of supporting life. Having identified Darwin IV as a likely home for life, scientists send a series of unmanned probes to the planet. Initially, the expectation is to find microscopic life. But the probes soon find themselves in the middle of a developed ecosystem, filled with creatures of all sizes. Looking through the “eyes” of the probes, marvel at the strange inhabitants of the planet, like the massive Groveback, which carries a small forest of vegetation on its back; the deadly Prongheads, which hunt in packs like wolves; and the elegant Gyrosprinter, an elk-like creature whose body is dotted with luminescent biolights. The look and biology of each animal are based on the laws of evolution and physics, then modeled to fit the hypothetical environment of Darwin IV. Leading experts in the fields of paleontology, astrophysics, and astrobiology explain how these creatures could evolve the characteristics of another world, such as hollow bodies, “jet” propulsion, and piercing, skewer-like tongues.
Choose a poetry book, storybook, and non-fiction book to read outside under a shady tree. Can the children identify the books?

RL.10 Reading Buddies: Divide children into pairs and let them each choose a favorite book. Go out on the playground, find a shady spot, and enjoy sharing their books with each other. Encourage them to ask each other questions about the books they read.

RF.1.d Alphabet Walk: Write letters on a paved surface with chalk. Challenge the children to step on the letters as they name them. Can they think of something that starts with each sound?

RF.3.c Word Hopscotch: Draw a hopscotch grid on a paved surface. Write high-frequency words in each section. Children hop on the spaces as they read the words.

SL.2.a Talking Stick: Choose a stick on the playground and then have the children sit in a circle under a tree. Explain that you will start a story. As you pass the stick around, the child holding the stick can add to the story. Only the person holding the stick is allowed to talk. You might want to start a story about the day a spaceship landed on the playground or the day animals started to talk.

L.1.e Prepositions on the Move: Using playground equipment, call out various prepositions, such as on, off, over, under, by, between, to, and from, for the children to demonstrate.

L.5.b We Can Do Opposites: Gather children around playground equipment and tell them you will call out a word. Can they demonstrate the opposite? For example, if the teacher said down, the children would climb up. If the teacher said front, the children would move to the back. Other words could be over, behind, inside, and so forth.

L.5.d Verb Relays: Divide children into relay teams. The teacher names a verb and the children act out the meaning until everyone on their team has completed the movement. For example, you could have them walk, march, strut, prance, and so forth.
Goal 6: Ensure availability and sustainable management of water and sanitation for all

While substantial progress has been made in increasing access to clean drinking water and sanitation, billions of people—mostly in rural areas—still lack these basic services. Worldwide, one in three people do not have access to safe drinking water, two out of five people do not have a basic hand-washing facility with soap and water, and more than 673 million people still practice open defecation.

The COVID-19 pandemic has demonstrated the critical importance of sanitation, hygiene, and adequate access to clean water for preventing and containing diseases. Hand hygiene saves lives. According to the World Health Organization, handwashing is one of the most effective actions you can take to reduce the spread of pathogens and prevent infections, including the COVID-19 virus. Yet billions of people still lack safe water and sanitation, and funding is inadequate.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
TOEFL iBT Reading Practice Test 07 from IVY’s Reading 15 Actual Test

This section measures your ability to understand academic passages in English. The Reading section is divided into 2 separately timed parts. Most questions are worth 1 point, but the last question in each set is worth more than 1 point. The directions indicate how many points you may receive. Some passages include a word or phrase that is underlined in blue. Click on the word or phrase to see a definition or an explanation. Within each part, you can go to the next question by clicking Next. You may skip questions and go back to them later. If you want to return to previous questions, click on Back. You can click on Review at any time, and the review screen will show you which questions you have answered and which you have not answered. From this review screen, you may go directly to any question you have already seen in the Reading section. You may now begin the Reading section. In this part you will read 1 passage. You will have 20 minutes to read the passage and answer the questions.

Passage 1 | Art History: The Hudson River School

The Hudson River School, the first American art movement considered a genuine school of art, originated at the beginning of the nineteenth century, as American painters sought to establish for themselves a distinct style—one not wholly defined by the traditions they had inherited from European art. Drawing on contemporary national values, artists of the Hudson River School based their movement on the principles of democracy and expansion and found inspiration in the North American landscape that was quickly being claimed as United States territory. By focusing on the untouched beauty of the American wilderness, these painters attempted to convey naturalistic scenes with a sense of admiration and idealism—two concepts that reflected their feelings about their new nation.

The Hudson River School was a movement situated within the larger context of American Romanticism, a period of cultural maturation and self-definition in the middle of the nineteenth century. After several decades of independent nationhood, the conditions were right for a major creative movement. Other American social movements that occurred during the period of American Romanticism influenced painters of the Hudson River School. Transcendentalism, a concurrent literary and philosophical movement, similarly argued for the invention of a national identity and a departure from European conventions. The writings of transcendentalist authors fueled the creative aspirations of the Hudson River School artists and encouraged them to participate in the making of an authentically American artistic culture. In particular, the work of Ralph Waldo Emerson provided a framework of beliefs for the developing nation. A quote from his 1836 essay entitled Nature describes the momentum behind American artists’ mission to assert their nation’s individuality: “We will walk on our own feet; we will work with our own hands; we will speak our own minds … A nation of men will for the first time exist, because each believes himself inspired by the Divine Soul which also inspires all men.”

Deeply interested in the potential of nature to deliver spiritual renewal to humankind, artists of the Hudson River School believed that their paintings had the ability to connect humans with a spiritual world. To the artists of the Hudson River School, natural features like waterfalls and thunderhead clouds were symbols that conveyed to an audience the presence of God.
With this attitude, artists of the Hudson River School applied intense care to their works, filling them with minute details, rich colors, and otherworldly light—components that, although unrealistic, idealized the landscape in order to evoke wonderment and reverence. In this manner, they endeavored to represent nature as the work of God. The content of their paintings frequently represented views of the Hudson River Valley and nearby geographical features like the White Mountains, the Catskills, and the Adirondack Mountains. Combining such images of the American landscape with spiritual themes, the Hudson River School integrated religious beliefs into their definition of a national identity.

[A] The Hudson River Valley became the focal point for the artistic movement after Thomas Cole, who is considered the founder of the Hudson River School, moved into the Catskill Mountains of New York—a picturesque region that awed him with its natural beauty. [B] Cole began sketching the local landscape, creating large paintings based on his drawings and later displaying them in New York City, where they caught the attention of many Americans. [C] People were interested in these glorified images of their country, as they provided them with a sense of ownership and identity. [D] There was a growing audience for paintings that could be considered unmistakably American.

After a period of tremendous popularity, the images of the Hudson River School faded into the background of the American art scene. The new generation of Americans rejected the moral overtones present in the Hudson River School paintings and turned away from such subjective landscapes in favor of more accurate representations of the physical world. Although modern art audiences may find the Hudson River School landscapes somewhat contrived or artificial, many viewers appreciate the obvious technical ability demonstrated in these paintings. Furthermore, the Hudson River School paintings are gaining modern relevance, as their overt nationalistic and religious sentiments are reinterpreted not as evidence of a divine creator but as reminders of the duty of citizens to protect the vulnerable resources of their nations.

1. Which of the sentences below best expresses the essential information in the highlighted sentence in the passage? Incorrect choices change the meaning in important ways or leave out essential information.
(A) When American painters created the Hudson River School, they realized that many of their techniques were derived from European traditions.
(B) American artists attempted to separate themselves from European art and created a new art movement called the Hudson River School.
(C) At the beginning of the nineteenth century, American painters began to participate in a national art movement called the Hudson River School.
(D) Inheriting the art movement from Europe, American artists discovered a new school of painting in the early nineteenth century.

2. The word “situated” in the passage is closest in meaning to

3. According to paragraph 2, the Hudson River School was part of which of the following?
(A) A period in the mid-nineteenth century known as American Romanticism
(B) A literary and philosophical movement called transcendentalism
(C) A struggle to make America an independent nation
(D) An 1836 essay entitled Nature

4. Why does the author include a quote by Ralph Waldo Emerson in paragraph 2?
(A) To give an example of a major figure in the Hudson River School
(B) To describe the shared goals of transcendentalism and the Hudson River School
(C) To demonstrate the influence the Hudson River School had on literature
(D) To explain how the Hudson River School created a national artistic identity

5. The word “minute” in the passage is closest in meaning to

6. What can be inferred from paragraph 3 about landscapes painted in the style of the Hudson River School?
(A) They left out details that were not important to the overall image.
(B) They were embellished to exaggerate a religious message.
(C) They were meant to be exact duplicates of natural scenes.
(D) They always included figures in the composition.

7. The phrase “focal point” in the passage is closest in meaning to
(A) matter in question
(B) center of attention
(C) point of view
(D) place of residence

8. The word “they” in the passage refers to
(A) Catskill Mountains

9. The word “tremendous” in the passage is closest in meaning to

10. What can be inferred from paragraph 5 about Hudson River School paintings that are currently displayed in museums?
(A) They are the best examples of landscape paintings from the Hudson River School.
(B) They are frequently the subject of religious controversy.
(C) They are appreciated more for their technique than their intended meaning.
(D) They are patriotic symbols from American history.

11. Look at the four squares [■] that indicate where the following sentence could be added to the passage.
This particular landscape inspired the beginning of an entire art movement based on such images.
Where would the sentence best fit?

Directions: An introductory sentence for a brief summary of the passage is provided below. Complete the summary by selecting the THREE answer choices that express the most important ideas in the passage. Some sentences do not belong in the summary because they express ideas that are not presented in the passage or are minor ideas in the passage. This question is worth 2 points.

The Hudson River School was very much the product of a developing nation that was deeply interested in the creation of a national identity.
(A) Hudson River School artists turned to the American landscape and to the transcendentalist movement for inspiration and began to represent and define their new nation.
(B) Ralph Waldo Emerson is considered the most important figure in the Hudson River School; his advice shaped the direction of the national artistic movement.
(C) The goals of Hudson River School artists were to separate themselves from European traditions and to integrate their religious beliefs into the United States’ emerging culture.
(D) The Hudson River School was supported by an audience that craved a sense of identity, but contemporary audiences have either rejected or reinterpreted the movement’s principal messages.
(E) Spirituality was an important theme for artists of the Hudson River School, and they attempted to use their paintings to teach moral lessons.
(F) The Hudson River School was focused on a particularly picturesque region of the United States, mainly centered in the Catskill Mountains of New York.
A network is made up of a group of computing devices that exchange data, and those devices are often called "endpoints."

An endpoint is any device that connects to a computer network. When Bob and Alice talk on the phone, their connection extends from one person to the other, and the "endpoints" of the connection are their respective phones. Similarly, in a network, computerized devices have "conversations" with each other, meaning they pass information back and forth. Just as Bob is one endpoint of his and Alice's conversation, a computer connected to a network is one endpoint of an ongoing data exchange. Everyday examples of endpoints include desktop computers, smartphones, tablets, laptops, and Internet of Things (IoT) devices.

Infrastructure devices on which the network runs are considered customer premise equipment (CPE) rather than endpoints. Going back to the example above, when Bob and Alice talk on the phone, the cell tower that transmits their conversation is not an endpoint for their data exchange — it is the medium by which the exchange occurs. As a further example, imagine a grocery store that has several cash registers that connect to the store's network and run point-of-sale (POS) software, a router that connects the store's network to the Internet, an internal server that stores records of each day's transactions, and multiple employees who connect their personal smartphones to the store's WiFi. The router would be considered CPE. The rest of these devices are endpoints on the store's network, even the personal smartphones that are not directly managed by the store.

Attackers attempt to take over or breach endpoint devices regularly. They may have any number of goals in mind for doing so: infecting the device with malware, tracking user activity on the device, holding the device for ransom, using the device as part of a botnet, using the device as a starting point to move laterally and compromise other devices within the network, and so on. In a business context, attackers often target endpoints because a compromised endpoint can be an entry point into an otherwise secure corporate network. An attacker may not be able to get through the corporate firewall, but an employee's laptop could be a slightly easier target.

Endpoints are difficult to secure in business settings because IT teams have less access to them than they do to the internal networking infrastructure. Endpoint devices also vary widely in terms of make, model, operating system, installed applications, and security posture (readiness to face an attack). Security measures that successfully protect smartphones from attack may not work for servers, for example. And while one employee at a company may regularly update their laptop and avoid risky online behaviors, another might avoid software updates and download unsecure files onto their laptop. Yet the company has to find a way to protect both laptops from attack and prevent them from compromising the network.

Because of the difficulty of securing endpoints, and the importance of protecting them, endpoint security is its own category of cyber security (along with network security, cloud security, web application security, IoT security, and access control, among others). There are many types of security products specifically for endpoint protection on the market today.
Endpoint management is the practice of monitoring endpoints that connect to a network, ensuring only authenticated endpoints have access, securing those endpoints, and managing what software is installed on endpoints (including non-security software). Endpoint management software is sometimes centralized; it can also be installed on each individual device to enforce security and authorization policies.

"API endpoint" is a similar term with a slightly different meaning. An API endpoint is the server end of a connection between an application programming interface (API) and a client. For instance, if a website integrated a cartography API in order to provide driving directions, the website server would be the API client and the cartography API server would be the API endpoint. To learn more about this topic, see What is an API endpoint?
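As a sketch of that client/endpoint relationship, here is roughly what the website server's side might look like in Python. The service URL and parameters are hypothetical, invented for illustration; they are not a real cartography API:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical cartography service; not a real API.
API_ENDPOINT = "https://api.example-maps.com/v1/directions"

def get_driving_directions(origin: str, destination: str) -> dict:
    """Acting as the API client, send a request to the cartography
    API endpoint and parse the JSON response."""
    query = urllib.parse.urlencode({"origin": origin, "destination": destination})
    with urllib.request.urlopen(f"{API_ENDPOINT}?{query}") as response:
        return json.load(response)

# The API endpoint is the server end of this exchange; the call below is the client end.
# directions = get_driving_directions("40.71,-74.00", "42.36,-71.06")
```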
While we all feel sad, moody, or low from time to time, some people experience these feelings intensely, for long periods of time (weeks, months, or even years), and sometimes without any apparent reason. Depression is more than just a low mood; it's a serious condition that affects your physical and mental health.

According to the official website of the Mayo Clinic, depression is a mood disorder that causes a persistent feeling of sadness and loss of interest. Also called major depressive disorder or clinical depression, it affects how you feel, think, and behave and can lead to a variety of emotional and physical problems.

What are the types of depression?

There are many different types of depression. Events in your life cause some, and chemical changes in your brain cause others. The types of depression are as follows:

1. Major Depression

You may hear your doctor call this "major depressive disorder." You might have this type if you feel depressed most of the time for most days of the week. Some other symptoms you might have are:
- Loss of interest or pleasure in your activities
- Weight loss or gain
- Trouble getting to sleep or feeling sleepy during the day
- Feeling restless and agitated, or else very sluggish and slowed down physically or mentally
- Being tired and without energy
- Feeling worthless or guilty
- Trouble concentrating or making decisions
- Thoughts of suicide

2. Persistent Depressive Disorder

If you have depression that lasts for 2 years or longer, it's called persistent depressive disorder. This term is used to describe two conditions previously known as dysthymia (low-grade persistent depression) and chronic major depression. You may have symptoms such as:
- Change in your appetite (not eating enough or overeating)
- Sleeping too much or too little
- Lack of energy, or fatigue
- Low self-esteem
- Trouble concentrating or making decisions
- Feeling hopeless

3. Seasonal Affective Disorder (SAD)

Seasonal affective disorder is a period of major depression that most often happens during the winter months, when the days grow short and you get less and less sunlight. It typically goes away in the spring and summer.

4. Bipolar Disorder

Bipolar disorder used to be known as "manic depression" because the person experiences periods of depression and periods of mania, with periods of normal mood in between. Mania is like the opposite of depression and can vary in intensity. Symptoms include:
- feeling great
- having lots of energy
- having racing thoughts
- little need for sleep
- talking quickly
- having difficulty focusing on tasks
- feeling frustrated and irritable

This is not just a fleeting experience. Sometimes the person loses touch with reality and has episodes of psychosis. Experiencing psychosis involves hallucinations (seeing or hearing something that is not there) or having delusions (e.g., the person believing he or she has superpowers). Bipolar disorder seems to be most closely linked to family history. Stress and conflict can trigger episodes for people with this condition, and it's not uncommon for bipolar disorder to be misdiagnosed as depression, alcohol or drug abuse, attention deficit hyperactivity disorder (ADHD), or schizophrenia.
5. Psychotic Depression

People with psychotic depression have the symptoms of major depression along with "psychotic" symptoms, such as:
- Hallucinations (seeing or hearing things that aren't there)
- Delusions (false beliefs)
- Paranoia (wrongly believing that others are trying to harm you)

6. Premenstrual Dysphoric Disorder (PMDD)

Women with PMDD have depression and other symptoms at the start of their period. Besides feeling depressed, you may also have:
- Mood swings
- Trouble concentrating
- Change in appetite or sleep habits
- Feelings of being overwhelmed

7. Peripartum (Postpartum) Depression

Women who have major depression in the weeks and months after childbirth may have peripartum depression. Antidepressant drugs can help, much as they do in treating major depression that is unrelated to childbirth.

8. "Situational" Depression

This isn't a technical term in psychiatry. But you can have a depressed mood when you're having trouble managing a stressful event in your life, such as a death in your family, a divorce, or losing your job. Your doctor may call this "stress response syndrome." Psychotherapy can often help you get through a period of depression that's related to a stressful situation.

9. Atypical Depression

This type is different from the persistent sadness of typical depression. It is considered to be a "specifier" that describes a pattern of depressive symptoms. If you have atypical depression, a positive event can temporarily improve your mood. Other symptoms of atypical depression include:
- Increased appetite
- Sleeping more than usual
- Feeling of heaviness in your arms and legs
- Oversensitivity to criticism

What causes depression?

There are a number of factors that may increase the chance of depression, including the following:
- Abuse: Past physical, sexual, or emotional abuse can increase vulnerability to clinical depression later in life.
- Certain medications: Some drugs, such as isotretinoin (used to treat acne), the antiviral drug interferon-alpha, and corticosteroids, can increase your risk of depression.
- Conflict: Depression in someone who has the biological vulnerability to develop depression may result from personal conflicts or disputes with family members or friends.
- Death or a loss: Sadness or grief from the death or loss of a loved one, though natural, may increase the risk of depression.
- Genetics: A family history of depression may increase the risk. It's thought that depression is a complex trait, meaning that there are probably many different genes that each exert small effects, rather than a single gene that contributes to disease risk. The genetics of depression, like most psychiatric disorders, are not as simple or straightforward as in purely genetic diseases such as Huntington's chorea or cystic fibrosis.
- Major events: Even good events such as starting a new job, graduating, or getting married can lead to depression. So can moving, losing a job or income, getting divorced, or retiring. However, the syndrome of clinical depression is never just a "normal" response to stressful life events.
- Other personal problems: Problems such as social isolation due to other mental illnesses or being cast out of a family or social group can contribute to the risk of developing clinical depression.
- Serious illnesses: Sometimes depression co-exists with a major illness or may be triggered by another medical condition.
- Substance abuse: Nearly 30% of people with substance abuse problems also have major or clinical depression.

How can we treat depression?
Depression can almost always be treated effectively. The first step is a physical examination by a physician. Certain medications and medical conditions can cause the same symptoms as depression and must be ruled out before a diagnosis of depression is made. If depression is diagnosed, treatment can include one or more of the following:

- Antidepressants: These medications work by influencing the functioning of targeted chemicals in the brain. Types of antidepressants include:
  - Selective Serotonin Reuptake Inhibitors (SSRIs): Increase the availability of serotonin.
  - Tricyclics (TCAs): Increase the levels of serotonin and norepinephrine.
  - Monoamine Oxidase Inhibitors (MAOIs): Prevent the breakdown of excitatory neurotransmitters (monoamines).
- Psychotherapy:
  - Cognitive Behavioural Therapy (CBT): Targeted at changing negative, self-defeating thought patterns and behaviours.
  - Interpersonal Therapy (IPT) and Group Psychotherapy: Focus on interpersonal relationships and improving communication skills and social support.
  - Psychodynamic Therapy: Focuses on resolving the patient's conflicted feelings and making characterological changes.
- Mood stabilisers: Patients with bipolar disorder are at risk of switching from hypomania (mild to moderate mania) to severe mania when taking antidepressants. For this reason, mood stabilisers are usually prescribed alone or in combination with antidepressants for the treatment of bipolar disorder.
- Alternative therapies:
  - Herbal therapy: Herbal products may have a beneficial effect in mild cases of depression. Patients should talk with their doctor before taking any herbal or dietary supplement. Studies are ongoing to determine the effectiveness of these remedies.
  - Exercise: Exercise may be useful in mild cases of depression. Increased physical activity helps by boosting serotonin levels in the body.
- Lifestyle changes: These are essential in the treatment of depression. Regular exercise can be as effective as medication. Eating well is important for both physical and mental health. Sleep deprivation aggravates irritability, moodiness, and fatigue, so getting enough sleep each night is important. Strong social networks (not online ones) prevent isolation, a key risk factor for depression.
Ground testing is often thought of as ground electrode testing: the measurement of the resistance associated with a particular rod or grounding system. A useful corollary to this is soil resistivity testing. Resistivity is the electrical property of the soil itself that determines how well it can carry current. It varies enormously (Table 1) depending on physical and chemical composition, moisture, temperature, and other variables. Measuring it is of paramount importance in designing a grounding electrode that will meet all the required electrical parameters for performance and safety.

GROUND RESISTIVITY TESTING
Earth surface potential gradients are critical in determining step-and-touch potentials around electrical facilities such as substations, and in assuring their safety in the event of extreme conditions like electrical faults. Ground electrode resistance is primarily a function of deep soil resistivity. Here, "deep soil resistivity" refers to depths roughly equivalent to the diameter of a horizontal electrode system, or up to ten times the depth of a vertical electrode. Much more than surface resistivity, ground electrode resistance is critical to safety, fault clearance, and electrical performance. Where a grounding system is to be installed, geotechnical work is often critical. Besides soil resistivity, this work may yield information on soil layering, moisture content, soil pH, and depth of groundwater. Measuring the resistance between two plates placed against opposite faces of a soil sample is sometimes attempted, but it is not recommended as a way of obtaining soil resistivity, because unknown interfacial resistances between the sample and the electrodes are included in the result. A refinement of this crude technique is the measurement of samples in a box specially designed for the purpose, but this technique can be limited by the difficulty of acquiring a representative soil sample of such small volume, as well as of duplicating soil compaction and moisture content. The method can still be useful if rigorously controlled and diligently applied, but alternative specialized methods have been developed to test soil resistivity in place.

VARIATION OF DEPTH METHOD
One of these alternative methods is variation of depth, or the three-point method. Here, ground resistance measurements are repeated in correlation with incremental increases in ground rod depth. This technique forces more test current through deep soil, and changes in resistivity can be noted at each depth. Driving rods also provides confirmation of how deep they can be driven during installation. A disadvantage, however, is that the rod may vibrate as it is driven, reducing its contact with the soil and making conversion to true apparent resistivity less accurate. The variation of depth method provides useful information about the soil in the vicinity of the rod, which is generally taken to be five to ten times the length of the rod. For large areas, it is useful to make multiple tests at representative locations to plot the lateral changes, so that a resultant ground grid will not end up installed in soil of higher resistivity than was thought for the area. The resistance of concern is designated r1. A series of three two-point measurements is taken: between the electrode under test and each of two auxiliary electrodes (designated r2 and r3), and between the two auxiliary electrodes themselves. Each measurement is the series resistance of the pair, so that r12 = r1 + r2, r13 = r1 + r3, and r23 = r2 + r3. The resistance of the test rod can then be calculated as r1 = [r12 + r13 – r23]/2.
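This triangulation arithmetic is easy to check numerically. The following is a minimal sketch (Python; the readings are hypothetical, not from the article) that solves the three series equations for r1 and flags the non-physical results discussed next:

```python
def three_point_resistance(r12, r13, r23):
    """Resistance of the test electrode from three two-point readings.

    r12, r13: series resistance of the test rod paired with each auxiliary rod
    r23:      series resistance of the two auxiliary rods paired together
    Solves r12 = r1 + r2, r13 = r1 + r3, r23 = r2 + r3 for r1.
    """
    r1 = (r12 + r13 - r23) / 2
    if r1 <= 0:
        # Zero or negative results signal inadequate electrode spacing
        # (mutual resistance between electrodes), as discussed in the text.
        raise ValueError("non-physical result; increase electrode separation")
    return r1

# Hypothetical readings in ohms (for illustration only)
print(three_point_resistance(r12=35.0, r13=42.0, r23=51.0))  # -> 13.0
```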
If the auxiliary electrodes are of materially higher resistance than the test electrode, this will greatly magnify the error of the test result. The electrodes also need to be far enough apart to minimize mutual resistances; where inadequate distances have been used, absurdities such as zero and negative resistance can sometimes be calculated. The auxiliary rods should therefore be separated from the test rod by at least three times the depth of the test rod, and should be driven to the same depth as the test rod, or even less. This method can become difficult to apply for large systems and where high accuracy is required, so other methods may be preferred.

Four-point methods were at one time somewhat more difficult to run, principally involving more space and longer leads. In its crude form, the method requires a current source and a potentiometer or high-impedance voltmeter. But modern instrumentation has become quite sophisticated in helping the operator cut down steps and eliminate errors. Some instruments even graph the setup and perform the attendant math on screen. One thing that must be remembered when acquiring a test instrument, however, is that it must be a four-terminal model. Three-terminal testers exist, but they are intended for performing ground resistance tests; for resistivity testing, a four-terminal model must be used.

By far the most widely applied four-point method is the Wenner Method. This has been described in a previous article and will only be touched on here. The applicable tester has a Kelvin bridge configuration (Figure 1). Two outside current terminals apply the test current through the soil. Two inside voltage terminals measure the voltage drop between them, and the current and voltage parameters are used to calculate the resistance between the voltage probes, which is then shown on the display. The four probes are equidistantly spaced. The Wenner formula, ρ = 2πaR, where a is the distance between the voltage probes, is then used to calculate the resistivity, typically in units of Ω-cm, although other units of length can be used if desired. This is the average soil resistivity to a depth of a. The full Wenner formula is more complex, but simplifies to the aforementioned if a probe depth of no more than 1/20th of a is used. By systematically varying a, what is called vertical prospecting can be achieved. That is, the changes in resistivities at different depths can be plotted (Figure 2), aiding in the recognition of significant changes like bedrock. Though popularly used, the Wenner Method has two shortcomings.
1. Relatively large spacing between the two inner (potential) electrodes can cause a decrease in the magnitude of the potential. This might seem counterintuitive, but remember that the test current against which the voltage drop is being measured spreads out in all directions, not in a straight line as in a wire. Modern testers are increasing in sensitivity, which is helping to mitigate this disadvantage.
2. The Wenner Method requires the movement of all four probes in order to measure to varying depths. The walking back and forth can become prohibitive with large probe spacing.

With the Schlumberger Method, the inner (potential) probes are placed closer together (Figure 3). Then only the outer probes are moved in order to calculate resistivity to varying depths.
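Before turning to the Schlumberger formula, the Wenner arithmetic is worth sketching numerically. The following is a minimal example (Python; the survey readings are hypothetical) applying ρ = 2πaR across increasing spacings, as in vertical prospecting:

```python
import math

def wenner_resistivity(a_cm, R_ohms):
    """Apparent soil resistivity in ohm-cm from a Wenner four-terminal test.

    a_cm:   equal spacing between adjacent probes, in centimeters
    R_ohms: resistance shown on the tester (V/I between the potential probes)
    Uses the simplified formula rho = 2*pi*a*R, valid when probe depth
    is no more than about a/20.
    """
    return 2 * math.pi * a_cm * R_ohms

# Hypothetical vertical-prospecting survey: spacing (cm) -> reading (ohms).
# Each result is the average resistivity to a depth of roughly a.
for a, R in [(100, 29.2), (200, 15.0), (400, 7.6), (800, 2.1)]:
    print(f"a = {a:4d} cm: rho = {wenner_resistivity(a, R):8.0f} ohm-cm")
```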
If the depth of the probes (b) is kept small in comparison to the spacings (c and d), and c is greater than 2d, then the resistivity can be calculated:
ρ = πc(c + d)R/d
This yields the apparent resistivity to an approximate depth of [2c + d]/2, which is the distance from the center of the test to the outer current probes. Confidence in the results of both methods can be gained by repeating the tests with the probes situated at 90 degrees to the prior set. Readings should be essentially the same. This will help keep underground interferences from water pipes, boulders, power lines, etc. from unduly influencing the measurements.

The variation of depth method can be used to calculate resistivities through the standard driven-rod formula:
ρa = 2πlR / (ln(4l/r) – 1)
For each length (l) to which the tested rod is driven, the measured resistance value R determines the apparent resistivity value ρa. Here, r is merely the radius of the tested rod and is kept small with respect to l. Plotting R against l yields a visual aid for determining earth resistivity versus depth. Suppose this technique was used to plot the graphs shown in Figure 4. Figure 4a shows two distinct layers: a shallow one of around 300 Ω-m and a deeper layer at 100 Ω-m. An informative two-layer soil model is obtained. Figure 4b shows a relatively conductive shallow layer of 100 Ω-m, but no data for the deeper layer can be determined by this method. Good conductivity at a deeper layer would be preferable for effective and reliable lightning and fault clearance, since surface conductivity can be volatile. And as already mentioned, variation of depth yields data for a relatively small area around the test rod. Gathering data for large grids may be better implemented by a four-point method.

Similarly, the results of four-point methods can be plotted as measured apparent resistivity against electrode spacing. Soil structure can be estimated from the resulting curves, and some empirical rules have been established by field workers to help in identifying layers.
• A break or change in curvature indicates another layer.
• The depth of a lower layer is taken to be two-thirds the electrode separation at which the inflection occurs.
• Five axioms may be followed:
1. Computed apparent resistivities are always positive.
2. As actual resistivities increase or decrease with depth, the apparent resistivities also increase or decrease with probe spacing.
3. The maximum change in apparent resistivity occurs at a probe spacing larger than the depth at which the corresponding change in actual resistivity occurs. Therefore, changes in apparent resistivity are always plotted to the right of the probe spacing corresponding to the change in actual resistivity.
4. The amplitude of the apparent resistivity curve is always less than or equal to the amplitude of the actual resistivity versus depth curve.
5. In a multilayer model, a change in the actual resistivity of a thick layer results in a similar change in the apparent resistivity curve.

Resistance and resistivity measurements associated with grounding are particularly difficult and challenging because the earth is like no other electrical test item. A fundamental knowledge will cover most situations, but there's always room to grow.

IEEE Std 81-2012, IEEE Guide for Measuring Earth Resistivity, Ground Impedance, and Earth Surface Potentials of a Grounding System.

Jeffrey R. Jowett is a Senior Applications Engineer for Megger in Valley Forge, Pennsylvania, serving the manufacturing lines of Biddle, Megger, and Multi-Amp for electrical test and measurement instrumentation.
He holds a BS in biology and chemistry from Ursinus College. He was employed for 22 years with James G. Biddle Co., which became Biddle Instruments and is now Megger.
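As a numerical footnote to the article above, the two remaining formulas can be checked in a few lines of code. This is a minimal sketch (Python); the readings are hypothetical, and the driven-rod expression is the standard Dwight formula assumed in the variation of depth discussion:

```python
import math

def schlumberger_resistivity(c_m, d_m, R_ohms):
    """Apparent resistivity (ohm-m) for the Schlumberger array.

    Implements rho = pi*c*(c + d)*R/d from the article, where c is the
    distance from each outer (current) probe to the nearest potential probe
    and d is the spacing of the two inner (potential) probes; assumes
    shallow probes and c > 2d. Depth sampled is roughly (2c + d)/2.
    """
    return math.pi * c_m * (c_m + d_m) * R_ohms / d_m

def driven_rod_resistivity(l_m, r_m, R_ohms):
    """Apparent resistivity (ohm-m) for the variation of depth method.

    Assumes the standard driven-rod (Dwight) formula:
    rho_a = 2*pi*l*R / (ln(4*l/r) - 1), with rod length l much greater
    than rod radius r.
    """
    return 2 * math.pi * l_m * R_ohms / (math.log(4 * l_m / r_m) - 1)

# Hypothetical readings (for illustration only)
print(schlumberger_resistivity(c_m=10.0, d_m=2.0, R_ohms=1.2))   # ~226 ohm-m
print(driven_rod_resistivity(l_m=3.0, r_m=0.008, R_ohms=35.0))   # ~105 ohm-m
```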
Life in the Ghetto
Related Images: See the photographs related to this lesson.
Using the Analyzing Visual Images strategy and the Critical Analysis Process for exploring an artwork, print off the Related Images with the captions on their reverse sides and arrange them into the specified groups. Place each group of images on tables or display them on a wall for students to see. Ensure there is an obvious separation between each set of images. If your students have not used the Analyzing Visual Images strategy before, model it for the class using another image from the collection. After modelling the strategy, divide students into evenly sized groups and assign each group a set of images. Each student should select an image from the group and apply the Analyzing Visual Images strategy. More than one student may select the same image. After completing this process, students should read the captions from the backs of the photographs and share their observations and analyses with the group. When all of the students have shared their ideas, ask them to discuss the following questions: What do the images in this collection have in common? What differences do you see within this collection of images? What title would you give to this collection of images? After completing this discussion, the groups should rotate to the next set of images and repeat the process. Continue this process until each group has worked with all four image sets. Once your students have seen all four collections of images, they should return to their seats and participate in a Think, Pair, Share discussion using a large piece of paper with two columns labelled "Collection Similarities" and "Collection Differences." Students should start writing individually in their notebooks, pair up to fill in the large piece of paper, and then share their ideas with the whole group. The last piece of this lesson is an Exit Card. On their exit card, ask your students to do two things. First, they should answer the question: How do the photographs of Henryk Ross represent the complexity of life in the Lodz Ghetto? Second, ask students to pose a question of their own about the images. Students should hand in these cards as they exit the room.
Related resources: Analyzing Visual Images and Stereotyping; Nazi footage of the Lodz Ghetto in the winter of 1940 (video); testimony of Leo Schneiderman on life in the Lodz Ghetto; testimony of Blanka Rothschild on life in the Lodz Ghetto.
Every year the choral department combines history, reading, writing, music composition, technology, and recording as part of our annual rap project. Students are placed in groups of four, must write their own lyrics, compose their background music in GarageBand, and then record their voices rapping their lyrics over the music they composed. The mp3 files they produce are priceless! We are moving this year's project to May; what a great end-of-the-year project! If you are another school interested in joining us on the project, use the outline below, or Contact Us for more information on how we can collaborate!

History of Hip Hop/Rap and Subject Matter
It is important that students understand the history of the Hip Hop movement, why it began, and how it evolved into the rap music that students are familiar with today on the radio. A quick Google search will reveal information and documentaries about the movement. PBS did a documentary about the Hip Hop movement, and a transcript of the documentary can be found here: historydetectiveshiphop. It is important that students understand that hip hop began as a quest for societal change and equality, and that the lyrics they write should have a deep meaning for them as well. My 6th grade classes use the rap to introduce themselves to the class and to let us learn more about them. The chorus and bridge focus on something they have in common with their other group members. My 7th/8th grade classes, who have worked on this previously, are allowed to branch out and pick a topic that they are passionate about.

Writing the Lyrics
It is helpful to do a warm-up activity to discuss couplet rhyme scheme before tackling the entire rap. Last year we used the New York Times 2015 Rap Contest as our warm-up activity. They have several lesson plans that accompany their contest that you can use in the classroom. We instructed the students to focus on a couplet rhyme scheme, since that is what we use in our own rap project. After reviewing their individual work and making sure they understood how to rhyme every two lines, we started our project. Use this template to have your students work in groups of four and write their rap: rap template 2016. After deciding on their topic, they can work together to write their intro, chorus, bridge, and outro. It makes the whole process easier if they divide up the verses (one person takes the first half of verse one, the next person takes the second half of verse one, and so on). Note that the second page of the document gives examples of each section using "Ice Ice Baby" and "Thrift Shop." Most of my kids know both of these songs, and it was helpful for them to understand the different parts of the song. Make sure to use the clean versions when providing listening examples.

Composing and Recording the Music
After approving each group's lyrics, it is time to start composing music in GarageBand. Use this document and the following YouTube videos as a resource: HipHopStructureinGarageBand. After composing the music, record the students' voices on top of their music and share as an mp3 file. I will post samples of our students' lyrics and music as soon as we finish the project! Contact us if you need any additional clarification on the project.
Despite its linguistic roots in ancient Greek, the concept of empathy is of recent intellectual heritage. Yet its history has been varied and colorful, a fact that is also mirrored in the multiplicity of definitions associated with the empathy concept in a number of different scientific and non-scientific discourses. In its philosophical heyday at the turn of the 19th to the 20th century, empathy had been hailed as the primary means for gaining knowledge of other minds and as the method uniquely suited for the human sciences, only to be almost entirely neglected philosophically for the rest of the century. Only recently have philosophers become interested in empathy again, in light of the debate about our folk psychological mindreading capacities. In the second half of the last century, the task of addressing empathy was mainly left to psychologists, who thematized it as a psychological phenomenon and process to be studied by the methods of the empirical sciences. In particular, it has been studied by social psychologists as a phenomenon assumed to be causally involved in creating prosocial attitudes and behavior. Nevertheless, within psychology it is at times difficult to find agreement on how exactly one should understand empathy, a fact of which psychologists themselves have become increasingly aware. The purpose of this entry is to clarify the empathy concept by surveying its history in various philosophical and psychological discussions and by indicating why empathy was, and should be, regarded as being of such central importance in understanding human agency in ordinary contexts, in the human sciences, and for the constitution of ourselves as social and moral agents.
- 1. Historical Introduction
- 2. Empathy and the Philosophical Problem of Other Minds
- 3. Empathy as the Unique Method of the Human Sciences
- 4. Empathy as a Topic of Scientific Exploration in Psychology
- 5. Empathy and Moral Psychology
- 6. Conclusion
- Academic Tools
- Other Internet Resources
- Related Entries

The psychologist Edward Titchener (1867–1927) introduced the term "empathy" into the English language in 1909 as the translation of the German term "Einfühlung" (or "feeling into"), a term that by the end of the 19th century was understood in German philosophical circles as an important category in philosophical aesthetics. Even in Germany, its use as a technical term of philosophical analysis did not have a long tradition. Various philosophers throughout the 19th century and the second half of the 18th century certainly speak in a more informal manner about our ability to "feel into" works of art and into nature. Particularly important here is the fact that romantic thinkers, such as Herder and Novalis, viewed our ability to feel into nature as a vital corrective against the modern scientific attitude of merely dissecting nature into its elements instead of grasping its underlying spiritual reality through a process of poetic identification. But in using mainly the verbal form in referring to our ability to feel into various things, they do not treat such an ability as a topic that is worthy of sustained philosophical reflection and analysis. Robert Vischer was the first to introduce the term "Einfühlung" in a more technical sense—and in using the substantive form he indicates that it is a worthy object of philosophical analysis—in his "On the Optical Sense of Form: A Contribution to Aesthetics" (1873). It was, however, Theodor Lipps (1851–1914) who scrutinized empathy in the most thorough manner.
Most importantly, Lipps did not only argue for empathy as a concept central to the philosophical and psychological analysis of our aesthetic experiences; his work transformed empathy from a concept of philosophical aesthetics into a central category of the philosophy of the social and human sciences. For him, empathy not only plays a role in our aesthetic appreciation of objects; it also has to be understood as the primary basis for recognizing each other as minded creatures. Not surprisingly, it was Lipps's conception of empathy that Titchener had in mind in his translation of "Einfühlung" as "empathy." In order to appreciate the philosophical motivation for focusing on empathy, one has to keep in mind the intellectual context within which the account of aesthetic perception was developed at the end of the 19th century. According to the dominant (even though not universally accepted) positivistic and empiricist conception, sense data constitute the fundamental basis for our investigation of the world. Yet from a phenomenological perspective, our perceptual encounter with aesthetic objects and our appreciation of them as being beautiful—our admiration of a beautiful sunset, for example—seems to be as direct as our perception of an object as being red or square. By appealing to the psychological mechanisms of empathy, philosophers intended to provide an explanatory account of the phenomenological immediacy of our aesthetic appreciation of objects. Lipps conceives of empathy as a psychological resonance phenomenon that is triggered in our perceptual encounter with external objects. More specifically, these resonance phenomena trigger inner "processes" that give rise to experiences similar to the ones I have when I engage in various activities involving the movement of my body. Since my attention is perceptually focused on the external object, I experience them—or I automatically project my experiences—as being in the object. If those experiences are in some way apprehended in a positive manner and as being in some sense life-affirming, I perceive the object as beautiful; otherwise, as ugly. In the first case, Lipps speaks of positive empathy; in the latter, of negative empathy. Lipps also characterizes our experience of beauty as "objectified self-enjoyment," since we are impressed by the "vitality" and "life potentiality" that lies in the perceived object (Lipps 1906, 1903a,b; for the contemporary discussion of empathy's role in aesthetics see particularly Breithaupt 2009; Coplan and Goldie 2011 (Part II); Curtis and Koch 2009; and Keen 2007). In his Aesthetik, Lipps closely links our aesthetic perception and our perception of another embodied person as a minded creature. The nature of aesthetic empathy is always the "experience of another human" (1905, 49). We appreciate another object as beautiful because empathy allows us to see it in analogy to another human body. Similarly, we recognize another organism as a minded creature because of empathy. Empathy in this context is more specifically understood as a phenomenon of "inner imitation," where my mind mirrors the mental activities or experiences of another person based on the observation of his bodily activities or facial expressions. Empathy is ultimately based on an innate disposition for motor mimicry, a fact that is well established in the psychological literature and was already noticed by Adam Smith (1853).
Even though such a disposition is not always externally manifested, Lipps suggests that it is always present as an inner tendency, giving rise to kinaesthetic sensations in the observer similar to those felt by the observed target. In seeing the angry face of another person, we instinctively have a tendency to imitate it, and thereby to "imitate" her anger in this manner. Since we are not aware of such tendencies, we see the anger in her face (Lipps 1907). Despite the fact that Lipps's primary examples of empathy focus on the recognition of emotions expressed in bodily gestures or facial expressions, his conception of empathy should not be understood as being limited to such cases. As his remarks about intellectual empathy suggest (1903b/05), he regards our recognition of all mental activities—insofar as they are activities requiring human effort—as being based on empathy or on inner imitation (see also the introductory chapter in Stueber 2006). The above explication of empathy constitutes Lipps's core concept of empathy. In this respect one could rightfully call Lipps one of the first proponents of simulation theory, proposing a position that is very similar to the version of simulation theory currently advocated by Goldman (2006). Unfortunately, in Lipps one also finds a much broader sense of empathy that is not compatible with the notion of empathy as a form of vicarious imitation. Lipps talks about a "universal apperceptive empathy" and a general "empathy of nature." He even utilizes empathy in order to explain certain perceptual illusions. In these contexts, the term "empathy" refers to any mental activity on the part of the observer that is triggered in the perceptual encounter with an external stimulus and that has to be understood as being constitutive for our comprehension of an object qua object. Here one should think of mental activities that are, for example, required to see a line as a line, or mental activities that are necessary to grasp events within nature as being events in a causal nexus (Lipps 1912/13). Rightfully, this liberal employment of the term found no takers, since in its wider usage the concept of empathy loses all of its distinctiveness. Everything and nothing seems to have to do with empathy. Lipps's core concept of empathy and his claim that empathy should be understood as the primary epistemic means for our perception of other persons as minded creatures were highly influential and were the focus of a considerable debate among philosophers at the beginning of the 20th century (Prandtl 1910, Stein 1917, Scheler 1973). Even philosophers who did not agree with Lipps's specific explication found the concept of empathy appealing, because his argument for empathy was closely tied to a thorough critique of what was widely seen at that time as the only alternative for conceiving of knowledge of other minds, that is, Mill's inference from analogy. This inference is best understood as describing the steps that enable us to attribute mental states to other persons based on the observation of their physical behavior and our direct experience of mental states from the first person perspective. Traditionally, the inference from analogy presupposes a Cartesian conception of the mind according to which access to our own mind is direct and infallible, whereas knowledge of other minds is indirect, inferential, and fallible. More formally, one can characterize the inference from analogy as consisting of the following premises or steps:
i.) Another person X manifests behavior of type B.
ii.) In my own case, behavior of type B is caused by a mental state of type M.
iii.) Since my and X's outward behavior of type B is similar, it has to have similar inner mental causes. (It is thus assumed that I and other persons are psychologically similar in the relevant sense.)
Therefore: The other person's behavior (X's behavior) is caused by a mental state of type M.
Like Wittgenstein, but predating him considerably, Lipps argues in his 1907 article "Das Wissen von fremden Ichen" that the inference from analogy falls fundamentally short of solving the philosophical problem of other minds. Lipps does not argue against the inference from analogy because of its evidentially slim basis, but because it does not allow us to understand its basic presupposition that another person has a mind that is psychologically similar to our own mind. The inference from analogy thus cannot be understood as providing us with evidence for the claim that the other person has mental states like we do, because within its Cartesian framework we are unable to conceive of other minds in the first place. For Lipps, analogical reasoning requires the contradictory undertaking of inferring another person's anger and sadness on the basis of my sadness and anger, while simultaneously thinking of that sadness and anger as something "absolutely different" from my anger and sadness. More generally, analogical inference is a contradictory undertaking because it entails "entertaining a completely new thought about an I, that however is not me, but something absolutely different" (Lipps 1907, 708, my translation). Yet while Lipps diagnoses the problem of the inference from analogy within the context of a Cartesian conception of the mind quite succinctly, he fails to explain how empathy is able to provide us with an epistemically sanctioned understanding of other minds, or why our "feeling into" the other person's mind is more than a mere projection. More importantly, Lipps does not sufficiently explain why empathy does not encounter problems similar to the ones diagnosed for the inference from analogy, and how empathy allows us to conceive of other persons as having a mind similar to our own if we are directly acquainted only with our own mental states. The fundamental problem for Lipps's defense of empathy as the primary method of knowing other minds consists in the fact that he still conceives of empathy within the context of a Cartesian conception of the mind, tying our understanding of mental affairs and mental concepts essentially to the first person perspective (see Stueber 2006). Wittgenstein's critique of the inference from analogy is in the end more penetrating, because he recognizes that its problem depends on a Cartesian account of mental concepts. If my grasp of a mental concept is exclusively constituted by my experiencing something in a certain way, then it is impossible for me to conceive of how that very same concept can be applied to somebody else, given that I cannot experience somebody else's mental states. I therefore cannot conceive of how another person can be in the same mental state as I am, because that would require that I can conceive of my mental state as something which I do not experience. But according to the Cartesian conception this seems to be a conceptually impossible task. Moreover, if one holds on to a Cartesian conception of the mind, it is not clear how appealing to empathy, as conceived of by Lipps, should help us in conceiving of mental states as belonging to another mind.
Within the phenomenological tradition, the above shortcomings of Lipps's position on empathy were quite apparent (see for example Stein 1917, 24 and Scheler 1973, 236). Yet despite the fact that they did not accept Lipps's explication of empathy as being based on mechanisms of inner resonance and projection, authors within the phenomenological tradition of philosophy were persuaded by Lipps's critique of the inference from analogy. For that very reason, Husserl and Stein, for example, continued using the concept of empathy and regarded empathy as an irreducible "type of experiential act sui generis" (Stein 1917, 10), which allows us to view another person as being analogous to ourselves without this "analogizing apprehension" constituting an inference from analogy (Husserl 1963, 141). Scheler probably went the furthest in rejecting the Cartesian framework in thinking about the apprehension of other minds, while remaining committed to something like the concept of empathy. (In order to contrast his position with Lipps's, Scheler preferred to use the term "nachfühlen" rather than "einfühlen.") For Scheler, the fundamental mistake of the debate about the apprehension of other minds consists in the fact that it does not take seriously certain phenomenological facts. Prima facie, we do not encounter merely the bodily movements of another person. Rather, we directly recognize specific mental states because they are characteristically expressed in states of the human body: in facial expressions, in gestures, in the tone of voice, and so on (see Scheler 1973, particularly 232–258; for a succinct explication of the debate about empathy in the phenomenological tradition consult Zahavi 2010). Nevertheless, philosophers in the phenomenological tradition never provided a philosophically comprehensive account of mental concepts that would allow us to see them as part of an intersubjectively accessible practice in which we interpret, predict, and explain the behavior of other agents. Certainly, a few of our mental concepts, particularly concepts of emotions, could easily be understood as being definable in light of the characteristic bodily expressions associated with specific mental states. But not all mental concepts can be defined in this manner, particularly the central folk psychological concepts of belief and desire. Besides an unfamiliarity with the phenomenological literature, the lack of a comprehensive account of mental concepts should be viewed as the main systematic reason why the idea that empathy is the primary means of understanding other minds was not taken seriously in the analytic tradition of philosophy until very recently, and why the theory theory position has been so dominant in philosophical circles after the decline of behaviorism. (For further reasons to reject empathy as a primary means of understanding other minds see also section 3 of this entry.) Theory theorists conceive of our understanding of mental concepts as being constituted by an implicit grasp of their role in a folk psychological theory and its law-like psychological generalizations. They conceive of the attribution of mental states to other people as a theoretical inference. We infer the existence of mental states from behavioral evidence together with knowledge of theoretical principles that link the existence of mental states to such evidence in a complex fashion.
In suggesting that attributing a mental state to another person is a theoretical inference based on the use of a theory and available evidence, theory theorists also propose an alternative to the traditional inference from analogy; an alternative that philosophers like Lipps or Scheler never even considered in their defense of empathy. Moreover, theory theorists are not without conceptual resources to account for the phenomenological fact that we seem to directly grasp another person's mental states by looking at his facial expressions. For them, such phenomenological directness in the apprehension of particular mental states can be explicated in terms of our familiarity with a folk psychological theory. (For a critical discussion see Stueber 2006.)

2.1 Mirror Neurons, Simulation, and the Discussion of Empathy in the Contemporary Theory of Mind Debate
The idea that empathy—particularly empathy understood as inner imitation—is the primary epistemic means for understanding other minds was revived in the 1980's by simulation theorists in the context of the interdisciplinary debate about folk psychology; an empirically informed debate about how best to describe the underlying causal mechanisms of our folk psychological abilities to interpret, explain, and predict other agents (see Davies and Stone 1995). In contrast to theory theory, simulation theorists conceive of our ordinary mindreading abilities as an ego-centric method and as a "knowledge poor" strategy, where I do not utilize a folk psychological theory but use myself as a model for the other person's mental life. This is not the place to discuss the contemporary debate extensively, but it has to be emphasized that contemporary simulation theorists vigorously discuss how to account for our grasp of mental concepts and whether simulation theory is committed to Cartesianism. Whereas Goldman (2002, 2006) links his version of simulation theory to a neo-Cartesian account of mental concepts, other simulation theorists develop versions of simulation theory that are not committed to a Cartesian conception of the mind (Gordon 1995a, b, and 2000; Heal 2003; and Stueber 2006, 2012). Moreover, neuroscientific findings according to which so-called mirror neurons play an important role in recognizing another person's emotional states and in understanding the goal-directedness of his behavior have been understood as providing empirical evidence for Lipps's idea of empathy as inner imitation. With the term "mirror neuron," scientists refer to the fact that there is significant overlap between the neural areas of excitation that underlie our observation of another person's action and the areas that are stimulated when we execute the very same action. A similar overlap between neural areas of excitation has also been established for our recognition of another person's emotion based on his facial expression and our experiencing of that emotion. (For a survey on mirror neurons see Gallese 2003a and b; Goldman 2006, chap. 6; Keysers 2011; Rizzolatti and Craighero 2004; and particularly Rizzolatti and Sinigaglia 2008.) Since the face-to-face encounter between persons is the primary situation within which human beings recognize themselves as minded creatures and attribute mental states to others, the system of mirror neurons has been interpreted as playing a causally central role in establishing intersubjective relations between minded creatures.
For that very reason, the neuroscientist Gallese thinks of mirror neurons as constituting what he calls the "shared manifold of intersubjectivity" (Gallese 2001, 44). Stueber (2006, chap. 4)—inspired by Lipps's conception of empathy as inner imitation—refers to mirror neurons as mechanisms of basic empathy; that is, as mechanisms that allow us to directly apprehend another person's emotions in light of his facial expressions and that enable us to understand his bodily movements as goal-directed actions. The evidence from mirror neurons—and the fact that in perceiving other people we use very different neurobiological mechanisms than in the perception of physical objects—does suggest that in our primary perceptual encounter with the world we do not merely encounter physical objects. Rather, even on this basic level, we already distinguish between mere physical objects and objects that are more like us (see also Meltzoff and Brooks 2001). The mechanisms of basic empathy have to be seen as Nature's way of dissolving one of the principal assumptions of the traditional philosophical discussion about other minds, shared by opposing positions such as Cartesianism and Behaviorism: that we perceive other people primarily as physical objects and do not already distinguish on the perceptual level between physical objects like trees and minded creatures like ourselves. Mechanisms of basic empathy might therefore be interpreted as providing us with a perceptual basis for developing an intersubjectively accessible folk psychological framework that is applicable to the subject and the observed other (Stueber 2006, 142–45). It needs to be acknowledged, however, that this interpretation of mirror neurons crucially depends on the assumption that the primary function of mirror neurons consists in providing us with a cognitive grasp of another person's actions and emotions; an assumption that has recently met with some criticism from researchers inside and outside of the neuroscientific community. (For the debate about mirror neurons see particularly Allen 2010, Borg 2007, Csibra 2009, Debes 2010, Goldman 2009, Hickok 2008, Iacoboni 2011, Jacob 2008, and Stueber 2012a.) Yet it should be noted that everyday mindreading is not restricted to the realm of basic empathy. Ordinarily we not only recognize that other persons are afraid or that they are reaching for a particular object; we understand their behavior in more complex social contexts in terms of their reasons for acting, using the full range of psychological concepts, including the concepts of belief and desire. Evidence from neuroscience shows that these mentalizing tasks involve very different neuronal areas, such as the medial prefrontal cortex, the temporoparietal cortex, and the cingulate cortex (for a survey see Kain and Perner 2003; and Frith and Frith 2003). Low level mindreading in the realm of basic empathy has therefore to be distinguished from higher levels of mindreading (Goldman 2006). It is clear that low level forms of understanding other persons have to be conceived of as being relatively knowledge poor, as they do not involve a psychological theory or complex psychological concepts.
How exactly one should conceive of high level mindreading abilities, and whether they involve primarily knowledge poor simulation strategies or knowledge rich inferences, is controversially debated within the contemporary discussion of our folk psychological mindreading abilities (see Davies and Stone 1995, Gopnik and Meltzoff 1997, Gordon 1995, Currie and Ravenscroft 2002, Heal 2003, Nichols and Stich 2003, Goldman 2006, and Stueber 2006). Simulation theorists, however, insist that even more complex forms of understanding other agents involve resonance phenomena that engage our cognitively intricate capacities of imaginatively adopting the perspective of another person and reenacting or recreating their thought processes (for various forms of perspective-taking see Coplan 2011 and Goldie 2000). Accordingly, simulation theorists distinguish between different types of empathy, such as between basic and reenactive empathy (Stueber 2006) or between mirroring and reconstructive empathy (Goldman 2011). Interestingly, the debate about how to conceive of these more complex forms of mindreading resonates with the traditional debate about whether empathy is the unique method of the human sciences and whether or not one has to strictly distinguish between the methods of the human and the natural sciences. Equally noteworthy is the fact that in the contemporary theory of mind debate, voices have grown louder that, in light of insights from the phenomenological and hermeneutic traditions in philosophy, assert that the contemporary theory of mind debate fundamentally misconceives the nature of social cognition. They claim that on the most basic level empathy should not be conceived of as a resonance phenomenon but as a type of direct perception (see particularly Zahavi 2010; Zahavi and Overgaard 2012; but Jacob 2011 for a response). More complex forms of social cognition are also not to be understood as being based on either theory or empathy/simulation; rather, they are best conceived of as the ability to directly fit observed units of actions into larger narrative or cultural frameworks (see Gallagher 2012, Gallagher and Hutto 2008, Hutto 2008, and Seemann 2011; but Stueber 2011 and 2012a for a defense of empathy; for skepticism about empathic perspective-taking understood as a complete identification with the perspective of the other person see also Goldie 2011). Regardless of how one views this specific debate, it should be clear that ideas about mindreading developed originally by proponents of empathy at the beginning of the 20th century can no longer be easily dismissed and have to be taken seriously.

At the beginning of the 20th century, empathy understood as a non-inferential and non-theoretical method of grasping the content of other minds became closely associated with the concept of understanding (Verstehen); a concept that was championed by the hermeneutic tradition of philosophy concerned with explicating the methods used in grasping the meaning and significance of texts, works of art, and actions (for a survey of this tradition see Grondin 1994). Hermeneutic thinkers insisted that the method used in understanding the significance of a text or a historical event has to be fundamentally distinguished from the method used in explaining an event within the context of the natural sciences.
This methodological dualism is famously expressed by Droysen in saying that "historical research does not want to explain; that is, derive in a form of an inferential argument, rather it wants to understand" (Droysen 1977, 403), and similarly in Dilthey's dictum that "we explain nature, but understand the life of the soul" (Dilthey 1961, vol. 5, 144). Yet Droysen and authors before him never conceived of understanding solely as an act of mental imitation or solely as an act of imaginatively "transporting" oneself into the point of view of another person. Such "psychological interpretation," as Schleiermacher (1998) called it, was conceived of as constituting only one aspect of the interpretive method used by historians. Other tasks mentioned in this context involved critically evaluating the reliability of historical sources, getting to know the linguistic conventions of a language, and integrating the various elements derived from historical sources into a consistent narrative of a particular epoch. The differences between these various aspects of the interpretive procedure were, however, downplayed in the early Dilthey. For him, grasping the significance of any cultural fact had to be understood as a mental act of "transposition." Understanding the meaning of a text, an action, or a work of art requires us to relate it to the primary realm of significance; that is, our own mental life accessible through introspection (see for example Dilthey 1961, vol. 5, 263–265). Even though Dilthey himself never used the empathy terminology, his position certainly facilitated thinking about understanding as a form of empathy. No wonder, then, that at this time the concepts of empathy and understanding were used almost interchangeably in order to delineate a supposed methodological distinction between the natural and the human sciences (see Stueber 2006 for a more extensive discussion). Ironically, the identification of empathy and understanding and the associated claim that empathy is the sole and unique method of the human sciences also facilitated the decline of the empathy concept and its almost utter disregard by philosophers of the human and social sciences later on, in both the analytic and continental/hermeneutic traditions of philosophy. Within both traditions, proponents of empathy were—for very different reasons—generally seen as advocating an epistemically naïve and insufficiently broad conception of the methodological proceedings in the human sciences. As a result, most philosophers of the human and social sciences maintained their distance from the idea that empathy is central for our understanding of other minds and mental phenomena. Notable exceptions in this respect are R.G. Collingwood and his followers, who suggested that reenacting another person's thoughts is necessary for understanding them as rational agents (Collingwood 1946, Dray 1957 and 1995). Notice, however, that in contrast to the contemporary debate about folk psychology, the debate about empathy in the philosophy of social science is not concerned with investigating underlying causal mechanisms. Rather, it addresses normative questions of how to justify a particular explanation or interpretation. Philosophers arguing for a hermeneutic conception of the human and social sciences insist on a strict methodological division between the human and the natural sciences. Yet they nowadays favor the concept of understanding (Verstehen) and reject the earlier identification of understanding and empathy for two specific reasons.
First, empathy is no longer seen as the unique method of the human sciences, because the facts of significance that a historian or an interpreter of literary and non-literary texts is interested in do not solely depend on facts within the individual mind. Within the philosophy of history, for example, it has become the established consensus that a historian is not and should not be bound by the agent's perspective in telling the story about a particular historical event or a particular period that he is interested in. Historians necessarily surpass the conceptual categories of the agent, since the significance of historical events is constituted not only by an agent's intentions but by their long-range and at times unintended consequences (Danto 1965). Similarly, philosophers such as Hans-Georg Gadamer have argued that the significance of a text is not tied to the author's intentions in writing the text. In reading a text by Shakespeare or Plato, we are not primarily interested in finding out what Plato or Shakespeare said but in what these texts themselves say. Moreover, like the significance of any historical event, the significance of a literary text is dependent on its effect on subsequent generations, and its meaning supervenes on its own interpretive history (Gadamer 1989; for a critical discussion see Skinner (in Tully 1988); "Introduction" in Kögler and Stueber 2000; and Stueber 2002). The above considerations, however, do not justify the claim that empathy has no role to play within the context of the human sciences. They justify merely the claim that empathy cannot be their only method, at least as long as one admits that recognizing the thoughts of individual agents has to play some role in the interpretive project of the human sciences. Accordingly, a second reason against empathy is also emphasized. Conceiving of the understanding of other agents as being based on empathy is seen as an epistemically extremely naïve conception of the interpretation of individual agents, since it seems to conceive of understanding as a mysterious meeting of two individual minds outside of any cultural context. Philosophers influenced by considerations of Heidegger and also the later Wittgenstein have started to think of individual agents as socially and culturally embedded creatures, and have started to conceive of the mind of individual agents as being socially constituted. Understanding other agents thus presupposes an understanding of the cultural context within which an agent functions. Moreover, in the interpretive situation of the human sciences, the cultural backgrounds of the interpreter and of the person who is to be interpreted can be very different. In that case, I cannot easily put myself in the shoes of the other person and imitate his thoughts in my mind. If understanding medieval knights, to use an example of Winch (1958), requires me to think exactly as the medieval knight did, then it is not clear how such a task can be accomplished from an interpretive perspective constituted by very different cultural presuppositions. Making sense of other minds has, therefore, to be seen as a culturally mediated activity; a fact that empathy theorists, according to this line of critique, do not sufficiently take into account when they conceive of understanding other agents as a direct meeting of minds that is independent of and unaided by information about how these agents are embedded in a broader social environment.
(See Stueber 2006, chap. 6; Zahavi 2001, 2005; for the later Dilthey see Makkreel 2000. For a critical discussion of whether the concept of understanding without recourse to empathy is useful for marking an epistemic distinction between the human and natural sciences, consult also Stueber 2012b.) Philosophers who reject the methodological dualism between the human and the natural sciences as argued for in the hermeneutic context are commonly referred to as naturalists in the philosophy of social science. They deny that the distinction between understanding and explanation points to an important methodological difference. Even in the human or social sciences, the main point of the scientific endeavor is to provide epistemically justified explanations (and predictions) of observed or recorded events (see also Henderson 1993). At most, empathy is granted a heuristic role in the context of discovery; it cannot, however, play any role within the context of justification. As particularly Hempel (1965) has argued, to explain an event involves—at least implicitly—an appeal to law-like regularities providing us with reasons for expecting that an event of a certain kind will occur under specific circumstances. Empathy might allow me to recognize that I would have acted in the same manner as somebody else. Yet it does not epistemically sanction the claim that anybody of a particular type, or anybody who is in that type of situation, will act in this manner. Hempel's argument against empathy has certainly not gone unchallenged. Within the philosophy of history, Dray (1957), following Collingwood, has argued that empathy plays an epistemically irreducible role, since we explain actions in terms of an agent's reasons. For him, such reason explanations do not appeal to empirical generalizations but to normative principles of action outlining how a person should act in a particular situation. More recently, similar arguments have been articulated by Jaegwon Kim (1984, 1998). Yet as Stueber (2006, chap. 5) argues, such a response to Hempel would require us to implausibly conceive of reason explanations as being very different from ordinary causal explanations. It would imply that our notions of explanation and causation are ambiguous concepts: reasons that cause agents to act in the physical world would be conceived of as causes in a very different sense than ordinary physical causes. Moreover, as Hempel himself suggests, appealing to normative principles explains at most why a person should have acted in a certain manner; it does not explain why he ultimately acted in that way. Consequently, Hempel's objections against empathy retain their force as long as one maintains that reason explanations are a form of ordinary causal explanation, and as long as one conceives of the epistemic justification of such explanations as implicitly appealing to some empirical generalizations. Despite these concessions to Hempel, Stueber suggests that empathy (specifically reenactive empathy) has to be acknowledged as playing a central role even in the context of justification. For him, folk psychological explanations have to be understood as being tied to the domain of rational agency. In contrast to explanations in terms of mere inner causes, folk psychological explanations retain their explanatory force only as long as agents' beliefs and desires can also be understood as reasons for their actions.
The epistemic justification of such folk psychological explanations implicitly relies on generalizations involving folk psychological notions such as belief and desire. Yet the existence of such generalizations alone does not establish specific beliefs and desires as reasons for a person's actions. Elaborating on considerations by Heal (2003) and Collingwood (1946), Stueber suggests that recognizing beliefs and desires as reasons requires the interpreter to be sensitive to an agent's other relevant beliefs and desires. Individual thoughts function as reasons for rational agency only relative to a specific framework of an agent's thoughts that are relevant for consideration in a specific situation. Most plausibly—given our persistent inability to solve the frame problem—recognizing which of another agent's thoughts are relevant in specific contexts requires the practical ability of reenacting another person's thoughts in one's own mind. Empathy's central epistemic role has to be admitted, since only in this manner can beliefs and desires be understood as an agent's reasons (see Stueber 2002, 2003, and 2006, chaps. 4 and 5). For similar reasons, Stueber (2006, chap. 6) argues that, while the above objections to empathy from hermeneutic philosophers limit the scope of empathy within the social sciences, they do not imply that empathy has no role to play in our understanding of individual agency. Accordingly, the fact that we at times experience imaginative resistance in our attempt to understand others should be understood as indirectly confirming the thesis that empathy is the implicit default method for understanding individual agency, one that at times needs to be supplemented by various theoretical and narrative strategies (for a discussion see Henderson and Horgan 2000, Henderson 2011, and Stueber 2008, 2011a). Within the context of anthropology, Hollan and Throop argue that empathy is best understood as a dynamic, culturally situated, temporally extended, and dialogical process actively involving not only the interpreter but also his or her interpretee (Hollan 2012; Hollan and Throop 2008, 2011; Throop 2010).

The discussion of empathy within psychology has been largely unaffected by the critical discussion and negative view of empathy that was prevalent in mainstream philosophical circles until the 1980's. Rather, psychologists were more influenced by the positive evaluation of empathy by philosophers from the beginning of the century. Empathy related phenomena were in general understood as playing an important role in interpersonal understanding and as causal factors motivating humans to act in a prosocial manner. Indeed, by the time philosophers were ready to retire the empathy concept, psychologists began to investigate empathy in an experimentally rigorous manner. Throughout the early 20th century, but particularly since the late 1940's, empathy has been an intensively studied topic of psychological research. As psychologists themselves have become increasingly aware, the empirical investigation of empathy has been hindered (particularly in the beginning) by conceptual confusions and a multiplicity of definitions of the empathy concept (see for example Davis 1994; Eisenberg and Strayer 1987; and Batson 2009). Within social psychology, this state of affairs is due to the fact that the empathy concept merged with and completely replaced the multi-dimensional concept of sympathy used by earlier psychologists and philosophers (Wispe 1986, 1987, and particularly 1991).
Whereas the emphasis in the philosophical discussion—outside the context of aesthetics—lay on empathy's cognitive role in providing us with knowledge of other minds, the concept of sympathy was primarily situated within the context of moral psychology and moral philosophy. In the works of David Hume and Adam Smith, sympathy referred to a family of psychological mechanisms that would allow us to explain how an individual could be concerned about, and motivated to act on behalf of, another human being. Given its focus on human social motivation, it is no wonder that within this context one stressed not only cognitive abilities to understand other persons but also our emotional reactivity in encountering another person, particularly when perceiving another person's suffering or distress. More broadly, one can distinguish two psychological research traditions studying empathy-related phenomena: the study of what is currently called empathic accuracy and the study of empathy as an emotional phenomenon in the encounter of others.

The first area of study defines empathy primarily as a cognitive phenomenon and conceives of empathy in general terms as “the intellectual or imaginative apprehension of another's condition or state of mind,” to use Hogan's (1969) terminology. Within this area of research, one is primarily interested in determining the reliability and accuracy of our ability to perceive and recognize other persons' enduring personality traits, attitudes and values, and occurrent mental states. One also investigates the various factors that influence empathic accuracy. One has, for example, been interested in determining whether empathic ability depends on gender, age, family background, intelligence, emotional stability, the nature of interpersonal relations, or on specific motivations of the observer. (For a survey see Ickes 1993 and 2003; and Taft 1955.) A more detailed account of the research on empathic accuracy and some of its earlier methodological difficulties can be found in the supplementary document to this entry.

Philosophically more influential has been the study of empathy defined primarily as an emotional or affective phenomenon, on which psychologists began to focus in the middle of the 1950's. In this context, psychologists have also addressed issues of moral motivation that have traditionally been topics of intense discussion among moral philosophers. They were particularly interested in investigating (i) the development of various means for measuring empathy, as a dispositional trait of adults and of children and as a situational response in specific situations, (ii) the factors on which empathic responses and dispositions depend, and (iii) the relation between empathy and prosocial behavior and moral development.

Before discussing the psychological research on emotional empathy and its relevance for moral philosophy and moral psychology in the next section, it is vital to introduce important conceptual distinctions that one should keep in mind in evaluating the various empirical studies. Anyone reading the emotional empathy literature has to be struck by the fact that empathy tended to be incredibly broadly defined in the beginning of this specific research tradition. Stotland, one of the earliest researchers to understand empathy exclusively as an emotional phenomenon, defined it as “an observer's reacting emotionally because he perceives that another is experiencing or is about to experience an emotion” (1969, 272).
According to Stotland's definition, very diverse emotional responses, such as feeling envy, feeling annoyed, feeling distressed, being relieved, feeling pity, or feeling what Germans call Schadenfreude (feeling joy about the misfortune of another), must all be counted as empathic reactions. Since the 1980's, however, psychologists have fine-tuned their understanding of empathy conceptually and distinguished between different aspects of the emotional reaction to another person, thereby implicitly acknowledging the conceptual distinctions articulated by Max Scheler (1973) almost a century earlier. In this context, it is particularly useful to distinguish between the following reactive emotions, which are differentiated with respect to whether such reactions are self- or other-oriented and whether they presuppose awareness of the distinction between self and other. (See also the survey in the introduction to Eisenberg/Strayer 1987 and the introduction to Stueber 2006.)

- Emotional contagion: Emotional contagion occurs when people start feeling similar emotions caused merely by the association with other people. You start feeling joyful because other people around you are joyful, or you start feeling panicky because you are in a crowd of people feeling panic. Emotional contagion, however, does not require that one be aware of the fact that one experiences the emotions because other people experience them; rather, one experiences them primarily as one's own emotions (Scheler 1973, 22). A newborn infant's reactive cry in response to the distress cry of another, which Hoffman takes as a “rudimentary precursor of empathic distress” (Hoffman 2000, 65), can probably be understood as a phenomenon of emotional contagion, since the infant is not able to properly distinguish between self and other.

- Affective empathy: More narrowly and properly understood, empathy in the affective sense is the vicarious sharing of an affect. Authors differ, however, in how strictly they interpret the phrase “vicariously sharing an affect.” For some, it requires that the empathizer and the person empathized with be in very similar affective states (Coplan 2011; de Vignemont and Singer 2006; Jacob 2011). For Hoffman, on the other hand, it is an emotional response requiring only “the involvement of psychological processes that make a person have feelings that are more congruent with another's situation than with his own situation” (Hoffman 2000, 30). According to this definition, empathy does not necessarily require that subject and target feel similar emotions (even though this is most often the case). Rather, the definition also includes cases such as feeling sad when seeing a child who plays joyfully but who does not know that he or she has been diagnosed with a serious illness (assuming that this is how the other person himself or herself would feel if he or she fully understood the situation). In contrast to mere emotional contagion, genuine empathy presupposes the ability to differentiate between oneself and the other. It requires that one be minimally aware of the fact that one is having an emotional experience due to the perception of the other's emotion, or more generally due to attending to his situation. In seeing the sad face of another and feeling sad oneself, such a feeling of sadness should count as genuinely empathic only if one recognizes that, in feeling sad, one's attention is still focused on the other, and that the sadness is not an appropriate reaction to aspects of one's own life.
Moreover, empathy outside the realm of a direct perceptual encounter involves some appreciation of the other person's emotion as an appropriate response to his or her situation. To be happy or unhappy because one's child is happy or sad should not necessarily count as an empathic emotion. It cannot count as a vicarious emotional response if it is due to the perception of the outside world from the perspective of the observer and her desire that her children should be happy. My happiness about my child being happy would therefore not be an emotional state that is more congruent with his situation; rather, it would be an emotional response appropriate to my own perspective on the world. In order for my happiness or unhappiness to be genuinely empathic, it has to be happiness or unhappiness about what makes the other person happy. (See Sober and Wilson 1998, 231–237; for a useful discussion see also H. Maibom 2007.) It is exactly for this reason that perspective taking has traditionally been conceived of as a mechanism of empathy. Yet it has to be admitted that, empirically, both forms of emotional response are very often intertwined.

- Sympathy: In contrast to affective empathy, sympathy is not an emotion that is congruent with the other's emotion or situation, such as feeling the sadness of the other person's grieving for the death of his father. Rather, sympathy is seen as an emotion sui generis that has the other's negative emotion or situation as its object, from the perspective of somebody who cares for the other person's well-being (Darwall 1998). In this sense, sympathy consists of “feeling sorrow or concern for the distressed or needy other,” a feeling for the other out of a “heightened awareness of the suffering of another person as something that needs to be alleviated” (Eisenberg 2000a, 678; Wispe 1986, 318; and Wispe 1991). Whereas it is quite plausible to assume that empathy—that is, empathy with the negative emotions of another, or what Hoffman (2000) calls “veridical empathic distress”—can under certain conditions (and when certain developmental markers are achieved) give rise to sympathy, it should be stressed that the relation between affective empathy and sympathy is a contingent one, the understanding of which requires further empirical research. First, sympathy does not necessarily require any kind of congruent emotions on the part of the observer; a detached recognition or representation that the other is in need or suffering might be sufficient. (See Scheler 1973 and Nichols 2004.) Second, empathy or empathic distress might not lead to sympathy at all. People in the helping professions, who are so accustomed to the misery of others, suffer at times from compassion fatigue. It is also possible to experience empathic overarousal, where one is so emotionally overwhelmed by one's empathic feelings that one is unable to be concerned with the suffering of the other (Hoffman 2000, chap. 8). In the latter case, one's empathic feelings are transformed into, or give rise to, mere personal distress, a reactive emotional phenomenon that needs to be distinguished from emotional contagion, empathy, and sympathy.

- Personal distress: Personal distress in the context of empathy research is understood as a reactive emotion in response to the perception or recognition of another's negative emotion or situation. Yet, while personal distress is other-caused like sympathy, it is, in contrast to sympathy, primarily self-oriented.
In this case, another person's distress does not make me feel bad for him or her; it just makes me feel bad, or “alarmed, grieved, upset, worried, disturbed, perturbed, distressed, and troubled,” to use the list of adjectives that, according to Batson's research, indicate personal distress (Batson et al. 1987 and Batson 1991). And, in contrast to empathic emotions as defined above, my personal distress is not more congruent with the emotion or situation of the other; rather, it wholly defines my own outlook onto the world.

While it is conceptually necessary to differentiate between these various emotional responses, it has to be admitted that it is empirically not very easy to discriminate between them, since they tend to occur together. This is probably one reason why early researchers tended not to distinguish between the above aspects in their study of empathy-related phenomena. Yet for the purpose of evaluating the impact and contribution of empathy to an agent's motivation (and for evaluating empathy's centrality for moral psychology), it is important to distinguish between these various aspects of emotional responding to another person. As Batson's work—first summarized in his 1991 and more comprehensively in his 2011—suggests, personal distress induces us to help another only for egoistic reasons: one wants to get rid of the unpleasant feeling of seeing the other in need, and one helps because one conceives of helping as a means of achieving this egoistic end. Feelings of being sympathetic, moved, compassionate, tender, warm, and soft-hearted towards the other's plight (Batson et al. 1987, 26)—that is, feelings that are associated with sympathy according to the above classification, but which Batson calls feelings of empathy (see his 1991, 86–87)—on the other hand motivate for altruistic reasons. In such altruistic motivations, the welfare of the other is the ultimate goal of my helping behavior (for this terminology see Sober and Wilson 1998); my helping behavior is not regarded as a further means to another goal that I desire. Nevertheless, given the ambiguity of the empathy concept within psychology—particularly in the earlier literature—in evaluating and comparing different empirical empathy studies it is always crucial to keep in mind how empathy has been defined and measured within the context of these studies. A more extensive discussion of the methods used by psychologists to measure empathy can be found in the supplementary document to this entry.

Moral philosophers have always been concerned with moral psychology and with articulating an agent's motivational structure, since the philosophical articulation of principles for the normative evaluation of human behavior has to be psychologically plausible. Normative rules are commonly thought of as expressing an obligation for human agents and as exerting a motivational pull on the agent's will. For that very reason, descriptive knowledge of the psychological or biological constitution of human beings can be understood as providing us with plausible constraints for evaluating the validity of various normative standards. Moreover, two additional assumptions have traditionally been important for moral philosophers. First, moral norms have to be distinguished from mere social conventions in that they are somehow regarded as universally valid, independent of the commands of social authority and of a particular culture.
Second, moral motivation is in some sense selfless; it is not the mere satisfaction of selfish desires (an intuition on which, despite their differences, both Kant and Schopenhauer agree). Giving to charity for selfish reasons seems to diminish the moral worth of that action. For Kant, both intuitions imply that we have to think of morality and moral norms as being derived from pure reason, with its abstract notions of duty and conformity to the law. Philosophers who have been skeptical about the claim that pure reason with its abstract notion of duty can be motivationally effective, on the other hand, have tended to emphasize our natural ability to sympathize with the suffering of other people and have claimed that sympathy has to be seen as the primary non-selfish (moral) motivation in human beings. (See Schopenhauer's critique of Kant (Schopenhauer 1995) in this respect.) Moreover, some philosophers like Smith (1853) and Schopenhauer (1995) also suggest that the normative force of various moral standards is derived from reflections on the results of sympathizing with others in various situations.

Yet the claim that sympathy leads to actions that are in some sense selflessly motivated, and that the capacity for sympathy is empirically necessary for moral agency, has never been sufficiently substantiated by those very same philosophers. It is easily imaginable that, even if the suffering of another person makes me feel sad, I am interested in helping the other person not for selfless reasons—because I am interested in his well-being—but because I want, for selfish reasons, to get rid of a bad feeling. It is also imaginable that moral agency is possible without sympathy, even if as a matter of fact most people do have such feelings from time to time. It is exactly for this reason that psychological research on emotional empathy has become so important for contemporary philosophers. It promises that the question regarding the validity of some of the above assumptions about the structure of human motivation can be answered in an empirically informed and rigorous manner. In this context, the work of the psychologists Batson and Hoffman is of particular interest. (For a survey of other relevant issues from social psychology, specifically social neuroscience, consult also Decety and Lamm 2006; Decety and Ickes 2009; and Decety 2012. For a discussion of the importance of empathy for medical practice see Halpern 2001.)

In a series of ingeniously designed experiments, Batson has accumulated evidence for what he calls the empathy-altruism thesis. The task of those experiments consists in showing that empathy/sympathy does indeed lead to genuinely altruistic motivation, rather than to helping behavior driven by predominantly egoistic motivations. According to the egoistic interpretation of empathy-related phenomena, empathizing with another person in need is associated with a negative feeling, or can lead to a heightened awareness of the negative consequences of not helping, such as feelings of guilt, shame, or social sanctions. Alternatively, it can lead to an enhanced recognition of the positive consequences of helping behavior, such as social rewards or good feelings. Empathy, according to this interpretation, induces us to help through the mediation of purely egoistic motivations. We help others only because we recognize helping behavior as a means to egoistic ends.
It allows us to reduce our negative feelings (the aversive-arousal reduction hypothesis), to avoid “punishment,” or to gain specific internal or external “rewards” (the empathy-specific punishment and empathy-specific reward hypotheses). Notice, however, that in arguing for the empathy-altruism thesis, Batson is not claiming that empathy always induces helping behavior. Rather, he argues against the predominance of an egoistic interpretation of an agent's motivational structure. He argues for the existence of genuinely altruistic motivations and, more specifically, for the claim that empathy causes such genuinely altruistic motivation. These genuinely altruistic motives (together with other, egoistic motives) are taken into account by the individual agent in deliberating about whether or not to help. Even for Batson, whether the agent will act on his or her altruistic motivations depends ultimately on how strong they are and on what costs the agent would incur in helping another person.

The basic setup of Batson's experiments consists in the manipulation of the situation of the experimental subjects (depending on the egoistic alternative to be argued against) and the manipulation of the empathy/sympathy felt for an observed target in need. The decisive evidence for the empathy/sympathy-altruism thesis is always the recorded behavior of a subject who is in a high-empathy condition and in a situation where his helping behavior cannot plausibly be seen as a means to the satisfaction of a personal goal. Since this is not the place to describe the details of Batson's experiments extensively, a brief description of the experimental setup—focusing on Batson's argument against the aversive-arousal interpretation of empathy—and a brief evaluation of the success of his general argumentative strategy have to suffice (for more details see Batson 1991 and 2011). In all of his experiments Batson assumes—based on Stotland (1969) and others—that empathy/sympathy can be manipulated either by manipulating the perceived similarity between subjects and targets or by manipulating the perspective-taking attitude of the subjects. Empathy, according to these assumptions, can be increased by enhancing the perceived similarity between subject and target, or by asking the subject to imagine how the observed person would feel in his or her situation rather than merely asking the subject to attend carefully to the information provided. [Note also that instructing subjects to imagine how they themselves would feel in the other's situation, rather than instructing them to imagine how the other feels, is associated with an increase in personal distress and not only in sympathetic feelings (Batson et al. 1997b and Lamm, Batson, and Decety 2007).] In trying to argue against the aversive-arousal reduction interpretation, Batson also manipulates the ease with which a subject can avoid helping another person (in this case, taking the other's place upon seeing him receive electric shocks). He reasons that if empathy leads to genuinely altruistic motivations, subjects in the high-empathy/easy-escape condition should still be willing to help. If they were helping only in order to reduce their own negative feelings, they would be expected to leave in this situation, since leaving is the less costly means for reaching an egoistic goal.
As Batson was happy to report, the results confirmed his empathy/sympathy-altruism hypothesis, not only in the above experiments but also in experiments testing other alternative interpretations of empathy, such as the empathy-specific punishment and the empathy-specific reward hypotheses.

Researchers generally agree in finding Batson's experimental research program and the accumulated evidence for the empathy-altruism thesis impressive. Yet they disagree about how persuasive one should ultimately regard his position. In particular, it has been pointed out that his experiments have limited value, since they target only very specific egoistic accounts of why empathy might lead to helping behavior. Batson is not able to dismiss conclusively every alternative egoistic interpretation. In addition, it has been claimed that egoism has the resources to account for the results of his experiments. For example, one might challenge the validity of Batson's interpretation by speculating whether empathy/sympathy leads to a heightened awareness of the fact that one will be troubled by bad memories of seeing another person in need if one does nothing to help him or her. In this case, even an egoistically motivated person would help in the high-empathy/easy-escape condition. (For this reply and various other egoistic interpretations of Batson's experiments see Sober and Wilson 1998, 264–271.) Cialdini and his collaborators have suggested an even more elaborate non-altruistic interpretation of helping behavior in high-empathy/easy-escape conditions. According to their suggestion, conditions of high empathy are also conditions of increased “interpersonal unity, wherein the conception of self and other are not distinct but are merged to some degree” (Cialdini et al. 1997, 490). It is this increased feeling of oneness, rather than empathy, that is causally responsible for motivating helping behavior. (For a discussion see Batson et al. 1997a, Neuberg et al. 1997, and Batson 1997 and 2011.)

One therefore has to be cautious in claiming that Batson has conclusively proven the empathy/sympathy-altruism hypothesis, if that means that every egoistic alternative for accounting for helping behavior has been logically excluded. But it has to be acknowledged that Batson has radically changed the argumentative dialectic of the egoism-altruism debate by forcing the egoistic account of human agency to come up with ever more elaborate alternative interpretations in order to account for helping behavior within its framework. Egoism was supposed to provide a rather unified and relatively simple account of the motivational structure of human agency. In challenging the predominance and simplicity of this framework in an empirically acute fashion, Batson has at least established altruism—the claim that besides egoistic motivations we are also motivated by genuinely altruistic reasons—as an empirically plausible hypothesis. He has shown it to be a hypothesis that one is almost persuaded to believe is true, as he himself has recently characterized his own epistemic attitude (Batson 1997, 522). More positively expressed, Batson's research has at least demonstrated that empathy/sympathy is a causal factor in bringing about helping behavior.
Regardless of the question of the exact nature of the underlying motivation for helping or prosocial behavior, psychologists generally assume that a positive correlation between empathy—measured in a variety of ways—and prosocial behavior has been established in adults and children, and this despite the fact that the above aspects of emotional responding to another person have not always been sufficiently distinguished. (For a survey see Eisenberg and Miller 1987 and Eisenberg/Fabes 1998; for a general survey of the various factors contributing to prosocial behavior see Bierhoff 2002.)

Regardless of how exactly one views the strength of Batson's position, his research alone does not validate the thesis, articulated by various traditional moral philosophers, that sympathy or empathy is the basis of morality or that it constitutes the only source of moral motivation. First, nothing in his research has shown that empathy/sympathy is empirically necessary for moral agency. Second, some of Batson's own research casts doubt on the claim that sympathy/empathy is the foundation of morality, as empathy-induced altruism can lead to behavior that conflicts with our principles of justice and fairness. One tends, for example, to assign a better job or a higher priority for receiving medical treatment to persons with whom one has actually sympathized, in violation of those moral principles (see Batson et al. 1995). For that very reason, Batson himself distinguishes between altruistic motivation, concerned with the well-being of another person, and moral motivation, guided by principles of justice and fairness (Batson 2011). Feelings of sympathy/empathy might thus lead to selfless and other-oriented behavior; such behavior, however, is not necessarily derived from the right sort of selflessness and impartiality characteristic of the moral attitude. Since Batson understands empathy primarily as an emotional phenomenon, it should also be kept in mind that the research discussed above is not at all relevant for deciding the question of whether or not sophisticated mindreading abilities are required for full-blown moral agency. (See Nichols 2001 and Batson et al. 2003 in this respect.)

Within the psychological literature, one of the most comprehensive accounts of empathy and its relation to the moral development of a person is provided by the work of Martin Hoffman (for a summary see his 2000). Hoffman views empathy as a biologically based disposition for altruistic behavior (Hoffman 1981). He conceives of empathy as being due to various modes of arousal allowing us to respond empathically in light of a variety of distress cues from another person. Hoffman mentions mimicry, classical conditioning, and direct association—where one empathizes because the other's situation reminds one of one's own painful experience—as “fast acting and automatic” mechanisms producing an empathic response. As more cognitively demanding modes, Hoffman lists mediated association—where the cues for an empathic response are provided in a linguistic medium—and role taking. Hoffman also distinguishes five developmental stages of empathic responses, ranging from the reactive newborn cry through egocentric empathic distress, quasi-egocentric empathic distress, and veridical empathic distress, to empathy for another beyond the immediate situation. He conceives of full-blown empathy as lying on a developmental continuum that ranges from emotional contagion (as in the case of the reactive newborn cry) to empathy proper, reached at the fourth stage.
At the developmentally later stages, the child is able to respond emotionally to the distress of another in a more sophisticated manner due to an increase in cognitive capacities, particularly the increased ability to distinguish between self and other and to become aware of the fact that others have mental states that are independent of one's own. Only at the fourth stage of empathic development (after the middle of the second year) do children acquire such abilities. They no longer try to comfort themselves when emotionally responding to another child's distress (for example, by seeking comfort from their own mother), nor do they use helping strategies that are more appropriate for comforting themselves than the other person (such as offering their own teddy bear in trying to comfort the other child). Only at the fourth stage is empathy also transformed into, or associated with, sympathy, leading to appropriate prosocial behavior. Hoffman's developmental view is further supported by Preston and de Waal's account of empathy as a phenomenon to be observed across species, at various levels of complexity related to different degrees of cognitive development. (Preston and de Waal 2002a,b; for a discussion of the philosophical relevance of de Waal's view see also de Waal 2006.)

Significantly, Hoffman combines his developmental explication of empathy with a sophisticated analysis of its importance for moral agency. He is also acutely aware of the limitations in our natural capacity to empathize or sympathize with others, particularly what he refers to as “here and now” biases; that is, the fact that we tend to empathize more with persons who are in some sense perceived to be closer to us. (For a neuroscientific investigation of how racial bias modulates empathic responses see Xu, Zuo, Wang, and Han 2009.) Accordingly, Hoffman does not regard the moral realm as being exclusively circumscribed by our ability to empathize with other people. Besides empathic abilities, moral agency also requires knowledge of abstract moral principles, such as the principles of caring and justice. Hoffman seems to conceive of those principles as being derived from cognitive sources that are independent of our empathic abilities. For him, stable and effective moral agency requires empathy in order for moral principles to have a motivational basis in an agent's psychology. Knowledge of abstract moral principles is, however, needed in order to overcome the limits and biases of an emotional and empathic response to seeing others in distress. (For a measured evaluation of empathy in the legal context see also Deigh 2011 and Hoffman 2011.)

Hoffman's remarks about the centrality of empathy for moral agency and his account of the relation between empathy and universal moral rules are certainly suggestive. However, they are in need of further empirical confirmation and philosophical clarification, so that one can more fully understand how empathy/sympathy, as a non-moral emotional phenomenon, provides a motivational basis for moral principles. It is also worth emphasizing in this context that moral sentimentalists of the eighteenth century, such as David Hume and Adam Smith, were quite aware of empathy's natural limitations. They already recognized the need for corrective mechanisms, such as “some steady and general points of view” or the perspective of the “impartial spectator,” in order to argue for empathy or sympathy's foundational role in moral judgments.
(For a good analysis of the philosophical discussion about empathy/sympathy in the eighteenth century see Frazer 2010.)

How exactly we should think of the importance of our capacity for empathy/sympathy for the constitution of moral agency and the foundation of moral judgment has also been a matter of controversy among moral philosophers in recent years. Philosophers with Kantian leanings do at times admit that empathy, understood as the ability to take the perspective of another person, is epistemically relevant for moral deliberations, even if they do not agree with moral sentimentalists that empathy is solely constitutive of moral agency (Deigh 1996; Darwall 2006; Sherman 1998). Within the context of an ethics of care, Michael Slote (2007, 2010) has most dramatically made the case for moral sentimentalism. He argues that empathy, particularly the vicarious sharing of the sympathetic feelings of an observed agent toward the target of his or her actions, should be conceived of as the foundational principle of moral judgments. Interestingly, in contrast to Hume and Smith, he does not regard empathy's natural limitations as shortcomings in this respect but conceives of them as tracking morally relevant aspects of a situation. (For a review see also Oxley 2011.)

While some evidence for empathy as a “building block” of morality has come from an evolutionary perspective and from ethology (de Waal 2006, 2009), to a large extent the contemporary philosophical debate about the moral significance of empathy—and about whether we should conceive of morality in a sentimentalist or a rationalist manner—has been driven by the results of empirical investigations into the causes of psychopathy and autism. Both pathologies are seen as involving a deficit in some dimensions of empathy. (For a survey of additional issues in contemporary moral psychology see Sinnott-Armstrong 2008.) Most forcefully, Blair defends the claim that a psychopath's inability to behave morally is related to a deficit in empathy, or a reduced ability to respond emotionally to the observation of distress in others (Blair, Mitchell, and Blair 2005). Autistic persons, on the other hand, behave morally because they are still able to respond appropriately to the distress of others, despite their generally reduced mindreading abilities (Blair 1996). Yet Blair's interpretation of these empirical findings has been controversial. Jesse Prinz, who is otherwise a committed moral sentimentalist, argues that a psychopath's moral deficits are better explained by an inability to feel strong emotions. For Prinz (2011a,b), empathy is neither necessary nor sufficient for morality; rather, emotions like anger, resentment, or guilt play a foundational role in this context (see also Nichols 2004 and Maibom 2009 for further discussion of this topic). Other philosophers link the moral shortcomings of psychopaths to a deficit in their rational capacities. They argue, moreover, that evidence from autistic individuals, whose imaginative role-playing and thus empathic capacities are diminished, does not support the claim that empathy is necessary for moral agency. It rather suggests that empathy plays a contingent role in the normal development of a moral agent, making it easier to live up to moral standards. (For a position understanding psychopathy as a rational deficit see Maibom 2005; for the claim, based on an interpretation of research on autism, that empathy is not essential for moral agency see Kennett 2002.)
This entry has delineated some of the main domains of traditional and more recent philosophical discussion within which empathy has played an important role. It has also analyzed various areas of psychological empathy research, particularly where they intersect with philosophical interests. In the end, it is important to emphasize that empathy is the topic of an ongoing interdisciplinary research project that has transcended the disciplinary and subdisciplinary boundaries that have characterized empathy research so far. Specifically, the addition of a neuroscientific perspective has been crucial in recent years. Such interdisciplinarity has contributed to overcoming the conceptual confusions that have hindered and unnecessarily compartmentalized the scientific study of empathy. For that very reason, researchers in psychology nowadays tend no longer to conceive of empathy exclusively in either affective or cognitive terms but as encompassing both. Such a unified conception of empathy is further supported by the above-mentioned neuroscientific research on mirror neurons. If mirror neurons are indeed the primary underlying causal mechanisms for cognitively recognizing certain emotional states in others by looking at their facial expressions, then it is quite understandable how such an observation could also lead to the feeling of an emotion that is more congruent with the situation of the other; that is, to empathy in the affective sense. Such affective responses are due to the fact that the perception of another person activates similar neurons in the subject and the target. Moreover, the above research would also suggest that empathy is indeed a phenomenon to be found across various species—given that neuronal mirror systems are found across species—and a phenomenon occurring at various levels of complexity, ranging from the capacity for emotional contagion to proper affective empathy and sympathy in light of increased cognitive capacities and abilities to distinguish between self and other (de Waal 2006, 2009; Preston and de Waal 2002a,b). In thinking of empathy in such a unified manner, current empathy research in psychology and neuroscience connects again with the philosophical tradition from the beginning of the 20th century. It is to be hoped that further interdisciplinary research—freed from the limitations of the framework of psychological research of the middle of the 20th century—will enable us to acquire a better understanding of empathy's importance for understanding other agents. It should do so by enabling us to conceive of empathy in a conceptually concise and differentiated manner and by providing us with an even more detailed picture of its underlying mechanisms.

- Abel, Theodore, 1948. “The Operation Called Verstehen,” The American Journal of Sociology, 54: 211–218; reprinted in Understanding and Social Inquiry, F. Dallmayr and T. McCarthy (eds.), Notre Dame: Notre Dame University Press, 1977.
- Allen, C., 2010. “Mirror, Mirror in the Brain, What's the Monkey Stand to Gain?” Noûs, 44: 372–391.
- Baron-Cohen, S., 2003. The Essential Difference: The Truth about the Male and Female Brain, New York: Basic Books.
- Baron-Cohen, S. and S. Wheelwright, 2004. “The Empathy Quotient: An Investigation of Adults with Asperger Syndrome or High Functioning Autism, and Normal Sex Differences,” Journal of Autism and Developmental Disorders, 34: 163–175.
- Batson, C.D., 1991. The Altruism Question: Toward a Social Psychological Answer, Hillsdale, NJ: Lawrence Erlbaum.
- –––, 1997. “Self-Other Merging and the Empathy-Altruism Hypothesis: Reply to Neuberg et al.,” Journal of Personality and Social Psychology, 73: 517–522.
- –––, 2009. “These Things Called Empathy: Eight Related but Distinct Phenomena,” in The Social Neuroscience of Empathy, J. Decety and W. Ickes (eds.), Cambridge, Mass.: MIT Press, 3–15.
- –––, 2011. Altruism in Humans, Oxford: Oxford University Press.
- Batson, C.D., J. Fultz, and P. Schoenrade, 1987. “Distress and Empathy: Two Qualitatively Distinct Vicarious Emotions with Different Motivational Consequences,” Journal of Personality, 55: 19–39.
- Batson, C.D., R.R. Klein, L. Highberger, and L.L. Shaw, 1995. “Immorality From Empathy-Induced Altruism: When Compassion and Justice Conflict,” Journal of Personality and Social Psychology, 68: 1042–1054.
- Batson, C.D., K. Sager, E. Garst, M. Kang, K. Rubchinsky, and K. Dawson, 1997a. “Is Empathy-Induced Helping Due to Self-Other Merging?,” Journal of Personality and Social Psychology, 73: 495–509.
- Batson, C.D., S. Early, and G. Salvarani, 1997b. “Perspective Taking: Imagining How Another Feels Versus Imagining How You Would Feel,” Personality and Social Psychology Bulletin, 23: 751–758.
- Batson, C.D., D. Lishner, A. Carpenter, L. Dulin, S. Harjusola-Webb, E.L. Stocks, S. Gale, O. Hassan, and B. Sampat, 2003. “‘…As You Would Have Them Do Unto You’: Does Imagining Yourself in the Other's Place Stimulate Moral Action?,” Personality and Social Psychology Bulletin, 29: 1190–1201.
- Bierhoff, H.-W., 2002. Prosocial Behavior, East Sussex: Psychology Press.
- Borg, E., 2007. “If Mirror Neurons Are the Answer, What Was the Question?” Journal of Consciousness Studies, 14: 5–19.
- Breithaupt, F., 2009. Kulturen der Empathie, Frankfurt a.M.: Suhrkamp.
- Blair, R., 1996. “Brief Report: Morality in the Autistic Child,” Journal of Autism and Developmental Disorders, 26: 571–579.
- Blair, R., D. Mitchell, and K. Blair, 2005. The Psychopath, Oxford: Blackwell Publishing.
- Chlopan, B., M. McCain, J. Carbonell, and R. Hagen, 1985. “Empathy: Review of Available Measures,” Journal of Personality and Social Psychology, 48: 635–653.
- Churchland, P., 1970. “The Logical Character of Action-Explanations,” The Philosophical Review, 79: 214–236.
- Cialdini, R.B., S.L. Brown, B.P. Lewis, C. Luce, and S.L. Neuberg, 1997. “Reinterpreting the Empathy-Altruism Relationship: When One Into One Equals Oneness,” Journal of Personality and Social Psychology, 73: 481–494.
- Collingwood, R.G., 1946. The Idea of History, Oxford: Clarendon Press.
- Coplan, A., 2011. “Understanding Empathy: Its Features and Effects,” in Empathy: Philosophical and Psychological Perspectives, A. Coplan and P. Goldie (eds.), Oxford: Oxford University Press, 3–18.
- Coplan, A. and P. Goldie (eds.), 2011. Empathy: Philosophical and Psychological Perspectives, Oxford: Oxford University Press.
- Cronbach, L., 1955. “Processes Affecting Scores on ‘Understanding of Others’ and ‘Assumed Similarity’,” Psychological Bulletin, 52: 177–193.
- Csibra, G., 2007. “Action Mirroring and Action Interpretation: An Alternative Account,” in Sensorimotor Foundations of Higher Cognition (Attention and Performance XXII), P. Haggard, Y. Rossetti, and M. Kawato (eds.), Oxford: Oxford University Press, 435–459.
- Currie, G., and I. Ravenscroft, 2002. Recreative Minds, Oxford: Clarendon Press.
- Curtis, R. and G. Koch, 2009. Einfühlung: Zur Geschichte und Gegenwart eines ästhetischen Konzepts, München: Wilhelm Fink Verlag.
- Danto, A., 1965. Analytical Philosophy of History, Cambridge: Cambridge University Press.
- Darwall, S., 1998. “Empathy, Sympathy, and Care,” Philosophical Studies, 89: 261–282.
- –––, 2006. The Second-Person Standpoint: Morality, Respect, and Accountability, Cambridge, Mass.: Harvard University Press.
- Davies, M., and T. Stone (eds.), 1995. Folk Psychology, Oxford: Blackwell Publishers.
- Davis, M., 1980. “A Multidimensional Approach to Individual Differences in Empathy,” JSAS Catalog of Selected Documents in Psychology, 10: 85.
- –––, 1983. “Measuring Individual Differences in Empathy: Evidence for a Multidimensional Approach,” Journal of Personality and Social Psychology, 44: 113–126.
- –––, 1994. Empathy: A Social Psychological Approach, Boulder: Westview Press.
- Davis, M. and L. Kraus, 1997. “Personality and Empathic Accuracy,” in Empathic Accuracy, W. Ickes (ed.), New York/London: Guilford Press, 144–168.
- Debes, R., 2010. “Which Empathy? Limitations in the Mirrored ‘Understanding’ of Emotions,” Synthese, 175: 219–239.
- Decety, J. (ed.), 2012. Empathy: From Bench to Bedside, Cambridge, Mass.: MIT Press.
- Decety, J. and C. Lamm, 2006. “Human Empathy through the Lens of Social Neuroscience,” The Scientific World Journal, 6: 1146–1163.
- Decety, J. and W. Ickes (eds.), 2009. The Social Neuroscience of Empathy, Cambridge, Mass.: MIT Press.
- Deigh, J., 1996. “Empathy and Universalizability,” in Mind and Morals, L. May, M. Friedman, and A. Clark (eds.), Cambridge, Mass.: MIT Press, 199–219.
- –––, 2011. “Empathy, Justice, and Jurisprudence,” The Southern Journal of Philosophy, 49 (Spindel Supplement): 73–90.
- De Waal, F., 2006. Primates and Philosophers: How Morality Evolved, Princeton, NJ: Princeton University Press.
- –––, 2009. The Age of Empathy: Nature's Lessons for a Kinder Society, New York: Random House.
- De Vignemont, F., and T. Singer, 2006. “The Empathic Brain: How, When, and Why?” Trends in Cognitive Sciences, 10: 435–441.
- Dilthey, W., 1961–. Gesammelte Schriften, 15 vols., Leipzig: Teubner Verlagsgesellschaft.
- Dray, W., 1957. Laws and Explanation in History, Oxford: Clarendon Press.
- –––, 1995. History as Re-enactment, Oxford: Oxford University Press.
- Droysen, J.G., 1977. Historik, Stuttgart: Frommann-Holzboog.
- Dymond, R., 1949. “A Scale for the Measurement of Empathic Ability,” Journal of Consulting Psychology, 13: 127–133.
- Eisenberg, N., 2000a. “Empathy and Sympathy,” in Handbook of Emotions, M. Lewis and J.M. Haviland-Jones (eds.), New York/London: Guilford Press, 677–691.
- –––, 2000b. “Emotion, Regulation, and Moral Development,” Annual Review of Psychology, 51: 665–697.
- Eisenberg, N., and J. Strayer (eds.), 1987. Empathy and Its Development, Cambridge: Cambridge University Press.
- Eisenberg, N., and P.A. Miller, 1987. “Empathy, Sympathy, and Altruism: Empirical and Conceptual Links,” in Empathy and Its Development, N. Eisenberg and J. Strayer (eds.), Cambridge: Cambridge University Press, 292–316.
- Eisenberg, N. and R. Fabes, 1998. “Prosocial Development,” in Handbook of Child Psychology (Volume 3: Social, Emotional and Personality Development), W. Damon and N. Eisenberg (eds.), New York: Wiley, 701–778.
- Eisenberg, N., B. Murphy, and S. Shepard, 1997. “The Development of Empathic Accuracy,” in Empathic Accuracy, W. Ickes (ed.), New York/London: Guilford Press, 73–116.
- Frazer, M., 2010. The Enlightenment of Sympathy: Justice and the Moral Sentiments in the Eighteenth Century and Today, Oxford: Oxford University Press.
- Frith, U., and C.D. Frith, 2003. “Development and Neurophysiology of Mentalizing,” Philosophical Transactions of the Royal Society (Series B), 358: 459–473.
- Gadamer, H.-G., 1989. Truth and Method, New York: Crossroad Publishing.
- Gallagher, S., 2012. “Neurons, Neonates, and Narrative: From Embodied Resonance to Empathic Understanding,” in Moving Ourselves, Moving Others: Motion and Emotion in Intersubjectivity, Consciousness, and Language, A. Foolen, U. Lüdtke, T. Racine, and J. Zlatev (eds.), Amsterdam/Philadelphia: John Benjamins Publishing Company, 165–196.
- Gallagher, S., and D. Hutto, 2008. “Understanding Others Through Primary Interaction and Narrative Practice,” in The Shared Mind: Perspectives on Intersubjectivity, J. Zlatev, T. Racine, C. Sinha, and E. Itkonen (eds.), Amsterdam/Philadelphia: John Benjamins Publishing Company, 17–38.
- Gallese, V., 2001. “The ‘Shared Manifold’ Hypothesis: From Mirror Neurons to Empathy,” Journal of Consciousness Studies, 8: 33–50.
- –––, 2003a. “The Roots of Empathy: The Shared Manifold Hypothesis and the Neural Basis of Intersubjectivity,” Psychopathology, 36: 171–180.
- –––, 2003b. “The Manifold Nature of Interpersonal Relations: The Quest for a Common Mechanism,” Philosophical Transactions of the Royal Society (Series B), 358: 517–528.
- Gallese, V., C. Keysers, and G. Rizzolatti, 2004. “A Unifying View of the Basis of Social Cognition,” Trends in Cognitive Science, 8: 396–403.
- Gazzola, V., L. Aziz-Zadeh, and C. Keysers, 2006. “Empathy and the Somatotopic Auditory Mirror System in Humans,” Current Biology, 16: 1824–1829.
- Goldie, P., 2000. The Emotions, Oxford: Oxford University Press.
- –––, 2011. “Anti-Empathy,” in Empathy: Philosophical and Psychological Perspectives, A. Coplan and P. Goldie (eds.), Oxford: Oxford University Press, 302–317.
- Goldman, A., 2002. “Simulation Theory and Mental Concepts,” in Simulation and Knowledge of Action, J. Dokic and J. Proust (eds.), Amsterdam/Philadelphia: John Benjamins Publishing Company, 1–19.
- –––, 2006. Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading, Oxford: Oxford University Press.
- –––, 2009. “Mirroring, Simulating, and Mindreading,” Mind and Language, 24: 235–252.
- –––, 2011. “Two Routes to Empathy: Insights from Cognitive Neuroscience,” in Empathy: Philosophical and Psychological Perspectives, A. Coplan and P. Goldie (eds.), Oxford: Oxford University Press, 31–44.
- Gordon, R.M., 1995a. “The Simulation Theory: Objections and Misconceptions,” in Folk Psychology, M. Davies and T. Stone (eds.), Oxford: Blackwell Publishers, 100–122.
- –––, 1995b. “Simulation Without Introspection or Inference from Me to You,” in Mental Simulation, M. Davies and T. Stone (eds.), Oxford: Blackwell Publishers, 53–67.
- –––, 2000. “Sellars's Ryleans Revisited,” ProtoSociology, 14: 102–114.
- Gopnik, A. and A.N. Meltzoff, 1997. Words, Thoughts and Theories, Cambridge, MA: MIT Press.
- Greif, E. and R. Hogan, 1973. “The Theory and Measurement of Empathy,” Journal of Counseling Psychology, 20: 280–284.
- Grondin, J., 1994. Introduction to Philosophical Hermeneutics, New Haven: Yale University Press.
- Halpern, J., 2001. From Detached Concern to Empathy: Humanizing Medical Practice, New York: Oxford University Press.
- Heal, J., 2003. Mind, Reason and Imagination, Cambridge: Cambridge University Press.
- Heider, F., 1958. The Psychology of Interpersonal Relations, New York: Wiley.
- Hempel, C., 1965. Aspects of Scientific Explanation, New York: Free Press.
- Henderson, D., 1993. Interpretation and Explanation in the Human Sciences, Albany: State University of New York Press.
- –––, 2011. “Let's Be Flexible: Our Interpretive/Explanatory Toolbox, or In Praise of Using a Range of Tools,” Journal of the Philosophy of History, 5: 261–299.
- Henderson, D., and T. Horgan, 2000. “Simulation and Epistemic Competence,” in Empathy and Agency: The Problem of Understanding in the Human Sciences, H.H. Kögler and K. Stueber (eds.), Boulder: Westview Press, 119–143.
- Hickok, G., 2008. “Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans,” Journal of Cognitive Neuroscience, 21: 1229–1243.
- Hoffman, M., 1981. “Is Altruism Part of Human Nature?,” Journal of Personality and Social Psychology, 40: 121–137.
- –––, 2000. Empathy and Moral Development, Cambridge: Cambridge University Press.
- –––, 2011. “Empathy, Justice and the Law,” in Empathy: Philosophical and Psychological Perspectives, A. Coplan and P. Goldie (eds.), Oxford: Oxford University Press, 230–254.
- Hogan, R., 1969. “Development of an Empathy Scale,” Journal of Consulting and Clinical Psychology, 33: 307–316.
- Hollan, D.W., 2012. “Emerging Issues in the Cross-Cultural Study of Empathy,” Emotion Review, 4: 70–78.
- Hollan, D.W. and C.J. Throop, 2008. “Whatever Happened to Empathy? Introduction,” Ethos, 36: 385–401.
- –––, 2011. The Anthropology of Empathy: Experiencing the Lives of Others in Pacific Societies, New York: Berghahn Books.
- Holz-Ebeling, F. and M. Steinmetz, 1995. “Wie brauchbar sind die vorliegenden Fragebogen zur Messung von Empathie? Kritische Analysen unter Berücksichtigung der Iteminhalte,” Zeitschrift für Differentielle und Diagnostische Psychologie, 16: 11–32.
- Husserl, E., 1963. Cartesianische Meditationen und Pariser Vorträge (Gesammelte Werke, vol. 1), The Hague: Martinus Nijhoff. (Translated as Cartesian Meditations, The Hague: Martinus Nijhoff, 1969.)
- Hutto, D., 2008. Folk-Psychological Narratives: The Sociocultural Basis of Understanding Reasons, Cambridge, Mass.: MIT Press.
- Iacoboni, M., 2011. “Within Each Other: Neural Mechanisms for Empathy in the Primate Brain,” in Empathy: Philosophical and Psychological Perspectives, A. Coplan and P. Goldie (eds.), Oxford: Oxford University Press, 45–57.
- Ickes, W., 1993. “Empathic Accuracy,” Journal of Personality, 61: 587–610.
- ––– (ed.), 1997. Empathic Accuracy, New York/London: Guilford Press.
- –––, 2003. Everyday Mind Reading, New York: Prometheus Books.
- Jacob, P., 2008. “What Do Mirror Neurons Contribute to Human Social Cognition?” Mind and Language, 23: 190–223.
- –––, 2011. “The Direct-Perception Model of Empathy: A Critique,” Review of Philosophy and Psychology, 2: 519–540.
- Johnson, J., J. Cheek, and R. Smither, 1983. “The Structure of Empathy,” Journal of Personality and Social Psychology, 45: 1299–1312.
- Jolliffe, D. and D.P. Farrington, 2006. “Development and Validation of the Basic Empathy Scale,” Journal of Adolescence, 29: 589–611.
- Kain, W. and J. Perner, 2003. “Do Children with ADHD Not Need Their Frontal Lobes for Theory of Mind? A Review of Brain Imaging and Neuropsychological Studies,” in The Social Brain: Evolution and Pathology, M. Brüne, H. Ribbert, and W. Schiefenhövel (eds.), Chichester, UK: John Wiley, 197–230.
- Keen, S., 2007. Empathy and the Novel, Oxford: Oxford University Press.
- Kennett, J., 2002. “Autism, Empathy and Moral Agency,” The Philosophical Quarterly, 52: 340–357.
- Keysers, C., 2011. The Empathic Brain: How the Discovery of Mirror Neurons Changes Our Understanding of Human Nature, Social Brain Press.
- Kim, J., 1984. “Self-Understanding and Rationalizing Explanations,” Philosophia Naturalis, 21: 309–320.
- –––, 1998. “Reasons and the First Person,” in Human Action, Deliberation, and Causation, J. Bransen and S. Cuypers (eds.), Dordrecht: Kluwer Academic Publishers, 67–87.
- Kögler, H.-H. and K. Stueber (eds.), 2000. Empathy and Agency: The Problem of Understanding in the Human Sciences, Boulder: Westview Press.
- Lamm, C., C.D. Batson, and J. Decety, 2007. “The Neural Substrate of Human Empathy: Effects of Perspective-Taking and Cognitive Appraisal,” Journal of Cognitive Neuroscience, 19: 42–58.
- Lipps, T., 1903a. “Einfühlung, Innere Nachahmung und Organempfindung,” Archiv für gesamte Psychologie, 1: 465–519. (Translated as “Empathy, Inner Imitation and Sense-Feelings,” in A Modern Book of Esthetics, New York: Holt, Rinehart and Winston, 1979, 374–382.)
- –––, 1903b. Aesthetik (vol. 1), Hamburg: Voss Verlag.
- –––, 1905. Aesthetik (vol. 2), Hamburg: Voss Verlag.
- –––, 1906. “Einfühlung und Ästhetischer Genuß,” Die Zukunft, 16: 100–114.
- –––, 1907. “Das Wissen von Fremden Ichen,” Psychologische Untersuchungen, 1: 694–722.
- –––, 1912/13. “Zur Einfühlung,” Psychologische Untersuchungen, 2: 111–491.
- Maibom, H., 2005. “Moral Unreason: The Case of Psychopathy,” Mind and Language, 20: 237–257.
- –––, 2007. “The Presence of Others,” Philosophical Studies, 132: 161–190.
- –––, 2009. “Feeling for Others: Empathy, Sympathy, and Morality,” Inquiry, 52: 483–499.
- Makkreel, R., 2000. “From Simulation to Structural Transposition: A Diltheyan Critique of Empathy and Defense of Verstehen,” in Empathy and Agency: The Problem of Understanding in the Human Sciences, H.H. Kögler and K. Stueber (eds.), Boulder: Westview Press, 181–193.
- May, L., M. Friedman, and A. Clark (eds.), 1996. Mind and Morals, Cambridge, Mass.: MIT Press.
- Mead, G.H., 1934. Mind, Self, and Society, Chicago: University of Chicago Press.
- Mehrabian, A. and N. Epstein, 1972. “A Measure of Emotional Empathy,” Journal of Personality, 40: 525–543.
- Mehrabian, A., A.L. Young, and S. Sato, 1988. “Emotional Empathy and Associated Individual Differences,” Current Psychology: Research and Review, 7: 221–240.
- Meltzoff, A., and R. Brooks, 2001. “‘Like Me’ as a Building Block for Understanding Other Minds: Bodily Acts, Attention, and Intention,” in Intentions and Intentionality, B. Malle, L. Moses, and D. Baldwin (eds.), Cambridge, MA: MIT Press, 171–191.
- Nichols, S., 2001. “Mindreading and the Cognitive Architecture Underlying Altruistic Motivation,” Mind & Language, 16: 425–455.
- –––, 2004. Sentimental Rules: On the Natural Foundation of Moral Judgment, Oxford: Oxford University Press.
- Nichols, S., and S. Stich, 2003. Mindreading, Oxford: Clarendon Press.
- Neuberg, S.L., R. Cialdini, S.L. Brown, C. Luce, and B. Sagarin, 1997. “Does Empathy Lead to Anything More Than Superficial Helping?,” Journal of Personality and Social Psychology, 73: 510–516.
- Oxley, J.C., 2011. The Moral Dimensions of Empathy: Limits and Applications in Ethical Theory and Practice, Basingstoke: Palgrave Macmillan.
- Prandl, A., 1910. Die Einfühlung, Leipzig: Verlag von Johann Ambrosius Barth.
- Preston, S., and F. de Waal, 2002a. “Empathy: Its Ultimate and Proximate Bases,” Behavioral and Brain Sciences, 25: 1–72.
- –––, 2002b. “Communications of Emotions and the Possibility of Empathy in Animals,” in Altruism and Altruistic Love: Science, Philosophy, and Religion in Dialogue, S. Post, L. Underwood, J. Schloss, and W. Hurlbut (eds.), Oxford: Oxford University Press, 284–308.
- Prinz, J., 2011a. “Is Empathy Necessary for Morality?,” in Empathy: Philosophical and Psychological Perspectives, A. Coplan and P. Goldie (eds.), Oxford: Oxford University Press, 211–229.
- –––, 2011b. “Against Empathy,” The Southern Journal of Philosophy, 49 (Spindel Supplement): 214–233.
- Ravenscroft, I., 1998. “What Is It Like to Be Someone Else? Simulation and Empathy,” Ratio, 11: 170–185.
- Rizzolatti, G. and L. Craighero, 2004. “The Mirror Neuron System,” Annual Review of Neuroscience, 27: 169–192.
- Rizzolatti, G. and C. Sinigaglia, 2008. Mirrors in the Brain: How Our Minds Share Actions and Emotions, Oxford: Oxford University Press.
- Rogers, C., 1959. “A Theory of Therapy, Personality, and Interpersonal Relationships, as Developed in the Client-Centered Framework,” in Psychology: A Study of a Science (Volume 3), S. Koch (ed.), New York: McGraw-Hill, 184–256.
- –––, 1975. “Empathic: An Unappreciated Way of Being,” The Counseling Psychologist, 5: 2–10. (Reprinted in C. Rogers, A Way of Being, Boston: Houghton Mifflin, 1980, 137–164.)
- Sherman, N., 1998. “Empathy and Imagination,” Midwest Studies in Philosophy, 22: 82–119.
- Seemann, A. (ed.), 2011. Joint Attention: New Developments in Psychology, Philosophy of Mind, and Social Neuroscience, Cambridge, Mass.: MIT Press.
- Scheler, M., 1973. Wesen und Formen der Sympathie, Bern/München: Francke Verlag. (English translation: The Nature of Sympathy, London: Routledge & Kegan Paul, 1954.)
- Schleiermacher, F., 1998. Hermeneutics and Criticism, A. Bowie (ed.), Cambridge: Cambridge University Press.
- Schopenhauer, A., 1995. On the Basis of Morality, Providence: Berghahn Books.
- Singer, T., and C. Lamm, 2009. “The Social Neuroscience of Empathy,” The Year in Cognitive Neuroscience: Annals of the New York Academy of Sciences, 1156: 81–96.
- Slote, M., 2007. The Ethics of Care and Empathy, London: Routledge.
- –––, 2010. Moral Sentimentalism, Oxford: Oxford University Press.
- Smith, A., 1853. The Theory of Moral Sentiments, New York: August M. Kelley Publishers, 1966.
- Sober, E. and D.S. Wilson, 1998. Unto Others: The Evolution and Psychology of Unselfish Behavior, Cambridge, Mass.: Harvard University Press.
- Stein, E., 1917. Zum Problem der Einfühlung, München: Kaffke Verlag, 1980. (English translation: On the Problem of Empathy, Washington: ICS Publishers, 1989.)
- Stotland, E., 1969. “Exploratory Investigations of Empathy,” in Advances in Experimental Social Psychology (Volume 4), L. Berkowitz (ed.), New York/London: Academic Press, 271–314.
- Stueber, K., 2002. “The Psychological Basis of Historical Explanation: Reenactment, Simulation and the Fusion of Horizons,” History and Theory, 41: 24–42.
- –––, 2006. Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences, Cambridge, Mass.: MIT Press.
- –––, 2008. “Reasons, Generalizations, Empathy, and Narratives: The Epistemic Structure of Action Explanation,” History and Theory, 47: 31–43.
- –––, 2009. “The Ethical Dimension of Folk-Psychology,” Inquiry, 52: 532–547.
- –––, 2011a. “Imagination, Empathy, and Moral Deliberation: The Case of Imaginative Resistance,” The Southern Journal of Philosophy, 49 (Spindel Supplement): 156–180.
- –––, 2011b. “Social Cognition and the Allure of the Second-Person Perspective: In Defense of Empathy and Simulation,” in Joint Attention: New Developments in Psychology, Philosophy of Mind, and Social Neuroscience, A. Seemann (ed.), Cambridge, Mass.: MIT Press, 265–292.
- –––, 2012a. “Varieties of Empathy, Neuroscience and the Narrativist Challenge to the Theory of Mind Debate,” Emotion Review, 4: 55–63.
- –––, 2012b. “Understanding versus Explanation: How to Think about the Distinction between the Human and the Natural Sciences,” Inquiry, 55: 17–32.
- Taft, R., 1955. “The Ability to Judge People,” Psychological Bulletin, 52: 1–23.
- Titchener, E.B., 1909. Lectures on the Experimental Psychology of Thought-Processes, New York: Macmillan.
- Tully, J. (ed.), 1988. Meaning and Context: Quentin Skinner and his Critics, Princeton: Princeton University Press.
- Throop, C.J., 2010. “Latitudes of Loss: On the Vicissitudes of Empathy,” American Ethnologist, 37: 771–782.
- Vischer, R., 1873. “On the Optical Sense of Form: A Contribution to Aesthetics,” in Empathy, Form, and Space, H.F. Mallgrave and E. Ikonomou (eds. and trans.), Santa Monica, CA: The Getty Center for the History of Art and the Humanities, 1994, 89–123.
- Winch, P., 1958. The Idea of a Social Science and its Relation to Philosophy, London: Routledge and Kegan Paul.
- Wispe, L., 1986. “The Distinction between Sympathy and Empathy: To Call Forth a Concept a Word is Needed,” Journal of Personality and Social Psychology, 50: 314–321.
- –––, 1987. “History of the Concept of Empathy,” in Empathy and Its Development, N. Eisenberg and J. Strayer (eds.), Cambridge: Cambridge University Press, 17–37.
- –––, 1991. The Psychology of Sympathy, New York/London: Plenum Press.
- Xu, X., X. Zuo, X. Wang, and S. Han, 2009. “Do You Feel My Pain? Racial Group Membership Modulates Empathic Neural Responses,” Journal of Neuroscience, 29: 8525–8529.
- Zahavi, D., 2001. “Beyond Empathy: Phenomenological Approaches to Intersubjectivity,” Journal of Consciousness Studies, 8: 151–167.
- –––, 2005. Subjectivity and Selfhood: Investigating the First-Person Perspective, Cambridge, Mass.: MIT Press.
- –––, 2010. “Empathy, Embodiment and Interpersonal Understanding: From Lipps to Schutz,” Inquiry, 53: 285–306.
- Zahavi, D., and S. Overgaard, 2012. “Empathy without Isomorphism: A Phenomenological Account,” in Empathy: From Bench to Bedside, J. Decety (ed.), Cambridge, Mass.: MIT Press, 3–20.
- Zhou, Q., C. Valiente, and N. Eisenberg, 2003. “Empathy and Its Measurement,” in Positive Psychological Assessment: A Handbook of Models and Measures, S.J. Lopez and C.R. Snyder (eds.), Washington, DC: American Psychological Association, 269–284.
Lyme disease symptoms could be mistaken for COVID-19, with serious consequences Summer is field season for ecologists like me, a time when my colleagues, students and I go out into fields and woods in search of ticks to study the patterns and processes that allow disease-causing microbes – primarily bacteria and viruses – to spread among wildlife and humans. That field work means we’re also at risk of getting the very diseases we study. I always remind my crew members to pay close attention to their health. If they get a fever or any other signs of sickness, they should seek medical treatment immediately and tell their doctor that they may have been exposed to ticks. When summer flu-like illnesses develop in anyone who spends time outdoors in areas where ticks are common, tick-transmitted diseases like Lyme disease should be considered a likely culprit. This summer, however, the global emergence of the novel coronavirus and COVID-19 is presenting a whole new set of challenges for diagnosing Lyme disease and other tick-borne illnesses. Lyme disease shares a number of symptoms with COVID-19, including fever, achiness and chills. Anyone who mistakes Lyme disease for COVID-19 could unknowingly delay necessary medical treatment, and that can lead to severe, potentially debilitating symptoms. Delaying medical treatment can be dangerous As we move from spring into summer, and into the peak period of tick activity in much of the Northern Hemisphere, time spent outdoors will increase, as will risk of tick-transmitted disease. In some cases, there are key symptoms of a tick-transmitted disease that can help with diagnosis. For example, early Lyme disease, which is caused by the bite of an infected black-legged tick, sometimes called the deer tick, is commonly associated with an expanding “bull’s-eye rash.” Seventy percent to 80% of patients have this symptom. However, other symptoms of Lyme disease – fever, head and body aches and fatigue – are less distinctive and can be easily confused with other illnesses, including COVID-19. This can make it more difficult to diagnose a patient who did not notice a rash or was unaware that they ever had a tick bite. As a result, Lyme disease cases can be misdiagnosed. Nationally, Lyme disease may be undercounted to the point that only one in 10 cases is reported to the CDC. If Lyme disease is identified and treated quickly, two to four weeks of antibiotics can usually knock out Borrelia burgdorferi, the species of spirochete bacteria that causes it. But delays in the treatment of Lyme disease can lead to more severe and persistent symptoms. If Lyme disease goes untreated, neurological and cognitive problems and potentially fatal heart problems can develop, and painful arthritis that is much more difficult to treat can set in. Lyme disease isn’t the only tick problem Lyme disease is most common in the Northeast and North Central U.S., but that does not mean that people in areas without Lyme disease are free from worry about tick-transmitted disease. Ticks throughout North America can spread a wide range of diseases, many of which also present with flu-like symptoms, leading to the potential for misdiagnosis, especially when these diseases are not especially common in the general population. Spotted fevers are another group of tick-transmitted diseases. The most severe of these is Rocky Mountain spotted fever, which can be fatal. Spotted fevers, as the name suggests, are typically associated with a rash. 
But the rash may not show until after fever and other flu-like symptoms, creating the same risk of being mistaken for COVID-19. Like Lyme disease, spotted fevers can be treated with antibiotics, and early treatment can head off more severe infections, so quick, accurate diagnosis is critical. Is COVID-19 increasing chances of tick bites? Recent reports from across the nation and around the globe suggest that wildlife have become bolder this spring, wandering into suburbs and cities where human and vehicle traffic are reduced because of COVID-19. Whether this phenomenon is being driven by changes in animal behavior or is simply an artifact of humans spending more time in their homes and becoming more aware of their surroundings is not clear, but changes in wildlife behavior and habitat use could affect tick-transmitted disease. For example, white-tailed deer are important hosts to multiple human-biting tick species in eastern North America, including black-legged ticks, and more deer around our homes and in our neighborhoods could lead to more ticks that have a chance to bite humans. Ticks do not move very far by themselves – perhaps about a foot per day for some species – but can be dispersed dozens of miles or more while hitching a ride on a highly mobile host like a deer, coyote or bird. Thus, the wildlife we observe exploring our neighborhoods while we are encouraged to stay at home may be leaving behind ticks that are carrying pathogens, or that could acquire infection from the more common wildlife already near our homes. Awareness is a key component of preventing and treating tick-borne disease. People should be aware of the activities that could expose them to ticks, and physicians should consider the possibility of tick-borne disease, especially given the potential overlap in symptoms with COVID-19. As with COVID-19, mitigation efforts can substantially reduce the risk of tick-borne diseases. Wear long sleeves and long pants and use an EPA-registered repellent when you are in tick habitat, and check yourself thoroughly for ticks when you get home. It is important to be aware of ticks when spending time outside, but fear of ticks should not stop people from enjoying nature. (Editor’s note: Cases of Lyme disease have been identified in all 50 states and many other countries.)
Each year, approximately 30,000 cases of Lyme disease are reported to CDC by state health departments and the District of Columbia. However, this number does not reflect every case of Lyme disease that is diagnosed in the United States every year. Surveillance systems provide vital information but they do not capture every illness. Because only a fraction of illnesses are reported, researchers need to estimate the total burden of illness to set public health goals, allocate resources, and measure the economic impact of disease. CDC uses the best data available and makes reasonable adjustments—based on related data, previous study results, and common assumptions—to account for missing pieces of information. To improve public health, CDC wants to know how many people are actually diagnosed with Lyme disease each year and for this reason has conducted two studies: - Project 1 (Lyme Disease Testing by Large Commercial Laboratories in the United States) estimated the number of people who tested positive for Lyme disease based on data obtained from a survey of clinical laboratories. Researchers estimated that 288,000 (range 240,000–444,000) infections occur among patients for whom a laboratory specimen was submitted in 2008. - Project 2 (Incidence of Clinician-Diagnosed Lyme Disease, United States, 2005-2010) estimated the number of people diagnosed with Lyme disease based on medical claims information from a large insurance database. In this study, researchers estimated that 329,000 (range 296,000–376,000) cases of Lyme disease occur annually in the United States. Results of these studies suggest that the number of people diagnosed with Lyme disease each year in the United States is around 300,000. Notably, these estimates do not affect our understanding of the geographic distribution of Lyme disease. Lyme disease cases are concentrated in the Northeast and upper Midwest, with 14 states accounting for over 96% of cases reported to CDC. The results obtained using the new estimation methods mirror the geographic distribution of cases that is shown by national surveillance.
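The relationship between reported and estimated cases is simple to express. The sketch below uses the figures quoted above; it is a back-of-the-envelope illustration, not the CDC's estimation methodology.

```python
# Rough check of the relationship between reported and estimated Lyme
# disease cases, using the round figures quoted above. This is NOT the
# CDC's method, which adjusts for many sources of missing data.
reported_cases = 30_000      # approximate annual cases reported to CDC
estimated_cases = 300_000    # figure suggested by the two studies

print(f"Implied reporting fraction: {reported_cases / estimated_cases:.0%}")   # ~10%
print(f"Implied undercount factor: {estimated_cases / reported_cases:.0f}x")   # ~10x
```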
What would it be like to inherit a compass from your parents that always led you home, no matter where you were in the world? Apparently, sea turtles do inherit such a compass, and so do other marine animals such as salmon and whales. They use the earth’s magnetic field to guide them to feeding grounds or mating grounds. What’s even more amazing is that they seem to be born with an internal map to go along with their compass. Salmon, for instance, automatically know how to get to the same feeding grounds used by their biological parents even if they were born and raised in a hatchery. Scientists are finding out that these internal navigation tools are far more sensitive than they thought. They’ve known for a long time that many animals use the earth’s magnetic field to determine their latitude, or north-south position. It’s a fairly straightforward thing to do because it relies on the north-south direction of the earth’s magnetic field, something that never changes. The surprising thing that researchers have recently learned is that sea turtles also use magnetic fields to determine their longitude (east-west position). That’s a trickier job because it’s not about simply knowing the direction of the Earth’s magnetic field; it’s about recognizing very small changes in the strength of it, too. The turtles combine information about magnetic strength and direction to determine where they are in a vast ocean with no other clues or landmarks to guide them. These changes in magnetic strength are so small that a regular compass can’t detect them, which means sea turtles are very sensitive to shifts in magnetic fields. That’s important because they travel such long distances that even a small mistake can send them way off course. Understanding how sea turtles navigate during their long journeys might help us protect them. It may also lead to improvements in navigation technology, and it could provide clues to why the turtles and other animals such as whales and dolphins sometimes beach themselves. Read about these studies here: Sea Turtles Migrate with Help of a Magnetic Map
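The idea that two magnetic quantities can act like map coordinates is easy to illustrate. The sketch below uses invented locations and field values, and a deliberately naive matching rule; real geomagnetic navigation is far more subtle.

```python
import math

# Hypothetical map of location -> (field strength in microtesla, inclination in degrees).
# All values are invented for illustration only.
magnetic_map = {
    "feeding ground A": (46.0, 58.0),
    "feeding ground B": (48.5, 62.0),
    "natal beach":      (43.2, 49.5),
}

def locate(sensed_strength, sensed_inclination):
    """Return the stored location whose magnetic signature best matches what is sensed.
    A serious model would rescale the two quantities before mixing their units."""
    def mismatch(signature):
        strength, inclination = signature
        return math.hypot(strength - sensed_strength, inclination - sensed_inclination)
    return min(magnetic_map, key=lambda name: mismatch(magnetic_map[name]))

print(locate(43.0, 50.0))  # -> "natal beach"
```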
The Battle of Ridgeway (also known as the Battle of Lime Ridge or Limestone Ridge) was fought on the morning of 2 June 1866, near the village of Ridgeway and the town of Fort Erie in Canada West (present-day Ontario). Approximately 850 Canadian soldiers clashed with 750 to 800 Fenians, Irish American insurgents who had crossed the Niagara River from Buffalo, New York. It was the first industrial-era battle to be fought by Canadians, the first to be fought exclusively by Canadian troops and led entirely by Canadian officers. The Battle of Ridgeway was the last battle fought in Ontario against a foreign invasion force. The battlefield was designated a National Historic Site of Canada in 1921. (See also Fenian Raids.) Fenians were members of a mid-19th century movement to secure Ireland’s independence from Britain. While they functioned in secret as an outlawed organization in the British Empire, where they were known as the Irish Republican Brotherhood, they operated freely and openly in the United States as the Fenian Brotherhood. Eventually, both wings became known as the Fenians. Fenian raids were armed incursions into Canadian territory between 1866 and 1871. They were intended to seize and hold Canadian territory hostage in return for Irish independence. It was thought that this would create a crisis in Britain — perhaps even a war between Britain and the US — and weaken British resolve in Ireland, once a planned rebellion broke out there. Though US authorities tried to prevent the Fenians from mobilizing on the US-Canada border, the Fenians raided Campobello Island, New Brunswick, in April 1866. In late May, they began to amass enough guns and ammunition to arm about 20,000 insurgents. Most of the Fenians were battle-hardened American Civil War veterans: experienced officers, infantrymen, sappers, gunners and other military tradesmen. On 1 June 1866, an advance party of 1,000 heavily armed Fenians crossed the Niagara River from Buffalo, New York, and invaded Canada. The Fenians were led by John O’Neill, a former U.S. Cavalry officer who had served in Ohio and West Virginia during the Civil War. The Fenians quickly captured the undefended town of Fort Erie, Canada West, and its railway and telegraph terminals. They arrested the town council and the customs and border officials at the international ferry docks and forced the town’s bakery and hotels to provide them with breakfast. After cutting outgoing telegraph lines to Canada, the insurgents seized horses along with tools to build trenches and field fortifications. By the end of that first day, the Fenians controlled the Niagara frontier from Black Creek in the north to Fort Erie in the south. They were within marching distance of the Welland Canal, the only navigable naval passage between Lake Ontario and Lake Erie. During the Fenian Raids, some 22,000 Canadian militia volunteers were mobilized to respond to the Fenian incursion, along with several British infantry units stationed in Canada. As the Fenians took positions around Fort Erie, two Canadian militia units were deployed to Port Colborne near the village of Ridgeway: the 2nd Battalion, Queen’s Own Rifles (QOR) from Toronto and the 13th Battalion of Hamilton “Rileys” (The Royal Hamilton Light Infantry [RHLI]). As senior officer in the field, 13th Battalion commanding officer Lieutenant Colonel Alfred Booker, a prominent Hamilton auctioneer and volunteer officer, took command of the brigade of battalions. 
On the night of 1–2 June, Booker was ordered to take a train to Ridgeway and march to the nearby town of Stevensville. There, he was to join an arriving column of British troops and Canadian militia for a joint counterattack against the Fenians, who were believed to be positioned near Fort Erie. Booker was explicitly ordered to avoid the Fenians on his march to join the arriving column. The Battle of Ridgeway The Canadians and the British did not know that the Fenians had marched to a strategic ridge just north of Ridgeway during the night of 1–2 June. The ridge ran along the Canadians’ route to Stevensville. Although Lieutenant Colonel Alfred Booker had been warned that the Fenians had laid an ambush on the ridge, he proceeded to march toward the Fenian positions and engaged them despite his orders to avoid contact. In the first hour of the battle, the Canadians appeared to prevail, driving Fenian skirmishers from their positions. Then something went wrong: to this day, it is not clear exactly what. Contemporary sources reported that Canadian militiamen mistook Fenian scouts on horseback for cavalry (mounted soldiers). Booker's order to form a square, designed to defend against a cavalry charge, exposed the Canadians to intense Fenian rifle fire. Although Booker quickly canceled the order, he was unable to reform the inexperienced Canadian ranks now under intense and accurate Fenian fire. Other sources indicate that troops mistook a company of 13th Battalion infantry for British troops relieving them and began to withdraw, triggering a panic among other troops who mistook the withdrawal for a retreat. Observing the chaos breaking out in the Canadian ranks, John O'Neill quickly ordered a bayonet charge that completely routed the inexperienced Canadians. The Fenians took and briefly held the town of Ridgeway. Then, expecting to be overwhelmed by British reinforcements, they quickly turned back to Fort Erie where they fought a second battle against a small but determined detachment of Canadians holding the town. On the night of 2–3 June, O’Neill realized that U.S. Navy gunboats were going to intercept any Fenian reinforcements crossing the Niagara River. The Fenians attempted to cross back into the United States, but were arrested and held in midstream by the U.S. Navy. They were eventually released on the condition that they would return to their home states. Aftermath and Significance The Canadian losses were 9 killed in action — known today as “The Ridgeway Nine” — and 33 wounded, some severely enough to require amputation of their limbs. Four more Canadian militia volunteers eventually died in the months following the battle, either of wounds sustained or disease contracted at Ridgeway. While the Canadians were well deployed and arrived in the vicinity of the Fenians within several hours of their incursion, they were poorly trained and unprepared for combat. Troops had scarce ammunition, no food or field kitchens, no proper maps, no provisions for medical care, no canteens for water, no tools for the proper care of their rifles and only half of the troops had previously practised firing their rifles with live ammunition. They were no match for the Fenians, who were well-armed and supplied Civil War veterans. The inefficiency of the militia department under Canada West's attorney general and minister of militia, John A. Macdonald, was whitewashed by two military courts of inquiry. 
They concluded that the blame lay with inexperienced frontline troops, who panicked and broke, and not with the officers who led them or the government who undersupplied and undertrained them. The QOR were disparagingly nicknamed “Quickest Outta Ridgeway,” while the 13th Battalion were dubbed “The Scarlet Runners.” The history of the Battle of Ridgeway was muted in Canadian military heritage and history, and the Canadian government was reluctant to acknowledge the veterans of the battle for nearly 25 years. In 1890, the Veterans of ’66 Association held a protest at the Canadian Volunteers Monument at Queen's Park, Toronto, by laying flowers at the foot of the monument on 2 June, the 24th anniversary of the Battle of Ridgeway. It took a 10-year campaign of protests and lobbying before the Canadian government sanctioned a Fenian Raid medal and land grants to surviving veterans in 1899–1900. The protest became an annual memorial event known as Decoration Day, when graves and monuments of Canadian soldiers were decorated in flowers. For the next 30 years, Decoration Day would be Canada’s popular national military memorial day, the first “remembrance” day, commemorated on the weekend nearest to 2 June and acknowledging the Canadian fallen in the Battle of Ridgeway, the North-West Resistance (1885), the South African War (1899–1902) and the First World War (1914–18). In 1931, 11 November was established as Canada's official national memorial day, named Remembrance Day. After the Armistice Day Act was passed, the casualties of Ridgeway and the North-West Resistance were no longer included in national memorialization, limiting Remembrance Day to Canadian casualties overseas, starting with the South African War. Petitions to the federal government in 2013 — from the City of Toronto and from the Town of Fort Erie — to restore the Ridgeway Nine to Canadian military memorial heritage by including them in national Books of Remembrance in Ottawa were not heeded.
‘Built on dry land’ Scientists first made the discovery by accident in 2003 using sonar to survey the bottom of the lake but published their findings only recently. The structure is composed of basalt rocks, arranged in the shape of a cone. It measures 230 feet (70 meters) across at the base, is 32 feet (10 meters) tall, and weighs an estimated 60,000 tons. It is twice the size of the ancient stone circle at Stonehenge in England. Its size and location, says Shmuel Marco, a geophysicist from Tel Aviv University who worked on the project and who also took video of the structure during a scuba dive to examine it, indicated it could have been constructed underwater as a type of fish nursery. However, archaeologists think it more likely that it was built on dry land and later submerged by the lake. ‘Even more enigmatic’ The exact age of the structure has been difficult to pinpoint, but calculations based on the six to ten feet (two to three meters) of sand that have accumulated over the bottom of the base — sand accumulates an average of one to four millimeters per year — as well as comparisons to other structures in the region, put the estimate anywhere between 2,000 and 12,000 years old. The possible purpose of the structure is even more enigmatic. Dani Nadel, an archeologist from the University of Haifa, who partnered on the site and who has led several prehistoric excavations in the region, notes it shares similarities with communal burial sites, though he’s quick to discourage anyone from drawing a definitive conclusion. What do you think could have been the purpose of this mysterious ancient structure? Feel free to share your own speculations with us! Source: Daisy Carrington, CNN
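The sediment-based part of the dating is simple arithmetic, sketched below with the figures quoted in the article. Note that this naive calculation alone gives a narrower range than the published 2,000 to 12,000 year estimate, which also leans on comparisons with other structures in the region.

```python
# Naive age bounds from sediment depth divided by accumulation rate,
# using the figures quoted in the article.
depth_mm = (2_000, 3_000)      # two to three meters of sand, in millimeters
rate_mm_per_year = (1, 4)      # one to four millimeters per year

youngest = depth_mm[0] / rate_mm_per_year[1]   # least sand, fastest accumulation
oldest = depth_mm[1] / rate_mm_per_year[0]     # most sand, slowest accumulation
print(f"Naive range: {youngest:.0f} to {oldest:.0f} years")   # 500 to 3000 years
```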
A lab demonstration of a multiband photovoltaic device has been shown. As you might know, conventional photovoltaic panels are only 15-20% efficient. That is because each cell converts only a limited slice of the light spectrum, and rays falling outside that slice are not converted to electricity. But this might change soon. The demonstration was done using RSLE’s IBand technology and is the first known intermediate band solar cell reduced to practice in a laboratory demonstration. Thin film solar cells have two main advantages: they are cheaper to manufacture than traditional silicon solar panels, and in addition they are flexible and easily adaptable to almost any surface. Unfortunately, until now their efficiency has topped out at about 9%. With RSLE's IBand technology, however, it becomes possible to stack several thin film solar cells, each of which captures a different part of the solar spectrum. The experimental samples were produced using commercially available fabrication technology, so production could begin relatively soon. The company said that this technology shows great promise for thin film solar efficiencies above 35% by potentially capturing the full spectrum of the sun's light. The intermediate band solar cell developed by RSLE is a thin film technology based on the discovery of highly mismatched alloys.
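The efficiency argument for stacking spectrally tuned cells can be illustrated numerically. The per-band figures below are invented for the sake of the example; they are not RSLE's published numbers.

```python
# Toy illustration: a single-junction cell wastes the parts of the spectrum
# it cannot convert, while stacked cells tuned to different bands each
# harvest their own slice. All numbers here are hypothetical.
single_junction_efficiency = 0.09   # early thin-film figure cited above

# Hypothetical conversion contributions of a three-cell stack:
band_contributions = {"infrared": 0.10, "visible": 0.18, "ultraviolet": 0.08}
stack_efficiency = sum(band_contributions.values())

print(f"Single junction: {single_junction_efficiency:.0%}")   # 9%
print(f"Hypothetical stack: {stack_efficiency:.0%}")          # 36%, near the >35% claim
```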
Econ 103 UCLA, Fall 2010. Answers to Problem Set 1 by Dmitry Plotnikov. Part 1: True or False and explain briefly why.
1. The expected value of a discrete random variable is the outcome that is most likely to occur. FALSE. The expected value of a random variable can lie (and usually does) between the possible outcomes of a discrete random variable. This happens because, by definition, it is a weighted average of these outcomes.
2. If two random variables X and Y are independently distributed, then E(Y) = E(Y|X). TRUE. If X and Y are independently distributed, the distribution of Y does not depend on X, thus E(Y|X) cannot depend on X and has to equal E(Y).
3. A probability density function tells the probability that a random variable is less than or equal to a certain value. FALSE. It is the cumulative distribution function that tells the probability that a random variable is less than or equal to a certain value.
4. Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y). TRUE. Follows from the definition of variance.
5. Var(X − Y) = Var(X) − Var(Y) − 2Cov(X, Y). FALSE. Var(X − Y) = Var(X) + Var(Y) − 2Cov(X, Y), because Var(−Y) = Var(Y).
6. If σ_XY = 0, then X and Y are independent. FALSE. If two random variables are uncorrelated, it does not mean they are independent. However, the opposite is true: independent variables are uncorrelated.
7. Let Y be a random variable. Then the standard deviation of Y equals E(Y − μ_Y). FALSE. Using properties of the expectation operator, E(Y − μ_Y) = E(Y) − μ_Y = μ_Y − μ_Y = 0. Also, by definition, the standard deviation of Y is σ_Y = √(E[(Y − μ_Y)²]).
8. Assume that X, Y and Z follow the distribution N(μ, σ²). Then W = X + Y − Z is normally distributed. TRUE. Any linear combination of normal random variables is normally distributed.
9. Assume that Y ~ F(1, ∞). Then Y ~ χ²(1). TRUE. Property of the F-distribution: setting m = 1 in F(m, ∞) = χ²(m)/m (see Lecture 1), the result follows.
10. Observations in a random sample are independent of each other. TRUE. Definition of a random sample.
11. If θ̂ is an unbiased estimator of θ, then θ̂ = θ. FALSE. If θ̂ is an unbiased estimator of θ, then E[θ̂] = θ, but in general θ̂ will not exactly equal θ.
12. If the p-value equals 0.96, then we cannot reject the null hypothesis. TRUE. We cannot reject the null if the p-value is greater than the significance level (which usually equals 0.01, 0.05 or 0.10).
13. The standard error of Ȳ equals the standard deviation of Y. That is, SE(Ȳ) = σ_Y. FALSE. It was calculated in class that Var(Ȳ) = σ²_Y / n. Thus SE(Ȳ) = σ_Y / √n.
14. Assume that H₀: μ_Y = μ_{Y,0} and H₁: μ_Y > μ_{Y,0}, and Ȳ is normally distributed. To compute the critical value for this 1-sided test, we divide by two the positive critical value of the 2-sided test....
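Several of these identities can be checked numerically. The short simulation below, which is not part of the original answer key, verifies items 4, 5 and 13 by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(1.0, 2.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)   # correlated with x by construction

# Items 4 and 5: Var(X +/- Y) = Var(X) + Var(Y) +/- 2 Cov(X, Y)
cov = np.cov(x, y)[0, 1]
print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov)   # nearly equal
print(np.var(x - y), np.var(x) + np.var(y) - 2 * cov)   # nearly equal

# Item 13: SE(Ybar) = sigma_Y / sqrt(n). Compare the empirical spread of
# many sample means (samples of size 50, sigma_Y = 2) with the formula.
sample_means = rng.normal(0.0, 2.0, (10_000, 50)).mean(axis=1)
print(sample_means.std(), 2.0 / np.sqrt(50))             # nearly equal
```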
At the end of the 19th century, most Jews were concentrated in the Russian empire (modern Poland, Ukraine, Belorussia). Until 1917, Jews in the Russian empire were discriminated against (the Pale of Settlement, restrictions on education, discrimination in the army, etc.). There were pogroms; people were killed and their property destroyed. With the start of WW I, conditions for many Jews became intolerable, so many of them wanted to emigrate. The US was a principal destination for several reasons. The main reasons were: a) religious tolerance; b) an immigration policy which made large-scale immigration possible (after all, this is a "country of immigrants"); c) the reputation of the US as "a land of opportunities." In any case, immigrants found it easier in the US than in other countries to integrate completely into society. Palestine was also a destination for Jewish immigrants, but on a much smaller scale; a Jewish state in Palestine was only a dream, and neither the Turks before WW I nor the British after it welcomed immigration. And the local (Arab) population never welcomed immigrants... The second largest Jewish population was in Austro-Hungary. Conditions in Austro-Hungary were much better than in the Russian empire, but only until 1914. After WW I (and the Hungarian revolution), the empire collapsed into several national states, each with strong nationalist feeling and nationalist policies. Many Austro-Hungarian Jews also wanted to emigrate, and the US was again a prime destination. Several other peoples who felt oppression or lack of opportunity at home also had very large immigration to the US at the same time (Irish, Italians). Finally, I add that a very large Jewish population still existed in Poland/Ukraine/Belorussia between WW I and WW II, perhaps more than in the US. And you know what happened to these people... Most of those who survived eventually moved to Israel or the USA. Some numbers. Jewish population of the Russian empire (including Poland) according to the 1897 census: 5,189,400. This was slightly more than half of the total number of Jews in the world. The Jewish population of the Soviet Union before WW II was approximately 2,500,000 (not including the part of Poland annexed in 1939). An estimate of the number of Jews who emigrated to the US in 1880-1928: 1.7 million. Number of Jews in Russia now: 194,000. An estimate of the number of those who emigrated to Israel (after the creation of Israel): 1 million. Etc. EDIT. Exact statistics of immigration to the US are not available because the US authorities do not ask you about your religion, which confirms point a) above. To address the interesting question in the comment of Andrew Grimm: 34 million people in the US claim Irish descent (12% of the population of the US). The population of Ireland is 6.4 million. Source: Wikipedia.
By the end of this section, you will be able to: - Describe the pathway involved with neural sensation, integration and motor response. Having looked at the components of nervous tissue, and the basic anatomy of the nervous system, next comes an understanding of how nervous tissue is capable of communicating within the nervous system. Before getting to the nuts and bolts of how this works, an illustration of how the components come together will be helpful. An example is summarized in Figure 12.3.1. Imagine you are about to take a shower in the morning before going to school. You have turned on the faucet to start the water as you prepare to get in the shower. You put your hand out into the spray of water to test the temperature. What happens next depends on how your nervous system interacts with the stimulus of the water temperature and what you do in response to that stimulus. Found in the skin is a type of sensory receptor that is sensitive to temperature, called a thermoreceptor. When you place your hand under the shower (Figure 12.3.2), the cell membrane of the thermoreceptors changes its electrical state (voltage). The amount of change is dependent on the strength of the stimulus (in this example, how hot the water is). This is called a graded potential. If the stimulus is strong, the voltage of the cell membrane will change enough to generate an electrical signal that will travel down the axon. You have learned about this type of signaling before, with respect to the interaction of nerves and muscles at the neuromuscular junction. The voltage at which such a signal is generated is called the threshold, and the resulting electrical signal is called an action potential. In this example, the action potential travels—a process known as propagation—along the axon from the initial segment found near the receptor to the axon terminals and into the synaptic end bulbs in the central nervous system. When this signal reaches the end bulbs, it causes the release of a signaling molecule called a neurotransmitter. In the central nervous system (in this case, the spinal cord), the neurotransmitter diffuses across the short distance of the synapse and binds to a receptor protein of the target neuron. When the neurotransmitter binds to the receptor, the cell membrane of the target neuron changes its electrical state and a new graded potential begins. If that graded potential is strong enough to reach threshold, the second neuron generates an action potential at its initial segment. The target of this neuron is another neuron in the thalamus of the brain, the part of the CNS that acts as a relay for sensory information. At this synapse, neurotransmitter is released and binds to its receptor. The thalamus then sends the sensory information to the cerebral cortex, the outermost layer of gray matter in the brain, where conscious perception of that water temperature begins. Within the cerebral cortex, information is processed among many neurons, integrating the stimulus of the water temperature with other sensory stimuli, as well as with your emotional state and memories. Finally, a plan is developed about what to do, whether that is to turn the temperature up, turn the whole shower off and go back to bed, or step into the shower. To do any of these things, the cerebral cortex has to send a command out to your body to move muscles (Figure 12.3.3). A region of the cortex is specialized for sending signals down to the spinal cord for movement. 
The upper motor neuron starts in this region, called the precentral gyrus of the frontal cortex, and has an axon that extends all the way down the spinal cord. The upper motor neuron synapses in the spinal cord with a lower motor neuron, which directly stimulates muscle fibers to contract. In the manner described in the chapter on muscle tissue, an action potential travels along the motor neuron axon into the periphery. The lower motor neuron axon terminates on muscle fibers at the neuromuscular junction. Acetylcholine is the neurotransmitter released at this specialized synapse, and binding to receptors on the muscle cell membrane causes the muscle action potential to begin. When the lower motor neuron excites the muscle fiber, the muscle contracts. All of this occurs in a fraction of a second, but this story is the basis of how the nervous system functions. Career Connections – Neurophysiologist There are many pathways to becoming a neurophysiologist. One path is to become a research scientist at an academic institution. A Bachelor’s degree in science will get you started, and for neurophysiology that might be in biology, psychology, computer science, engineering, or neuroscience. But the real specialization comes in graduate school. There are many different programs out there to study the nervous system, not just neuroscience itself. Most graduate programs are doctoral, and are usually considered five-year programs, with the first two years dedicated to course work and finding a research mentor, and the last three years dedicated to finding a research topic and pursuing that with a near single-mindedness. The research will usually result in a few publications in scientific journals, which will make up the bulk of a doctoral dissertation. After graduating with a Ph.D., researchers will go on to find specialized work called a postdoctoral fellowship within established labs. In this position, a researcher starts to establish their own research career with the hopes of finding an academic position at a research university. Other options are available if you are interested in how the nervous system works. Especially for neurophysiology, a medical degree might be more suitable so you can learn about the clinical applications of neurophysiology. An academic career is not a necessity. Biotechnology firms are eager to find motivated scientists ready to tackle the tough questions about how the nervous system works so that therapeutic chemicals can be tested on some of the most challenging disorders such as Alzheimer’s disease or Parkinson’s disease, or spinal cord injury. Others with a medical degree and a specialization in neuroscience go on to work directly with patients, diagnosing and treating mental disorders. You can do this as a psychiatrist, a neuropsychologist, a neuroscience nurse, or a neurodiagnostic technician, among other possible career paths. Sensation starts with the activation of a sensory receptor, such as the thermoreceptor in the skin sensing the temperature of the water. The sensory receptor in the skin initiates an electrical signal that travels along a sensory axon within a nerve into the spinal cord, where it synapses with a neuron in the gray matter of the spinal cord. At the synapse the temperature information represented in that electrical signal is passed to the next neuron by a chemical signal (the neurotransmitter) that diffuses across the small gap of the synapse and initiates a new electrical signal. 
That signal travels through the sensory pathway to the brain, synapsing in the thalamus, and finally the cerebral cortex where conscious perception of the water temperature occurs. Following integration of that information with other cognitive processes and sensory information, the brain sends a command back down to the spinal cord to initiate a motor response by controlling a skeletal muscle. The motor pathway is composed of two cells, the upper motor neuron and the lower motor neuron. The upper motor neuron has its cell body in the cerebral cortex and synapses with the lower motor neuron in the gray matter of the spinal cord. The axon of the lower motor neuron extends into the periphery where it synapses with a skeletal muscle fiber at a neuromuscular junction. Critical Thinking Questions 1. Sensory fibers, or pathways, are referred to as “afferent.” Motor fibers, or pathways, are referred to as “efferent.” What can you infer about the meaning of these two terms (afferent and efferent) in a structural or anatomical context? 2. If a person has a peripheral motor disorder and cannot move their arm voluntarily, which motor neuron—upper or lower—is probably affected? Explain why. - action potential - change in voltage of a cell membrane in response to a stimulus that results in transmission of an electrical signal; unique to neurons and muscle fibers - cerebral cortex - outermost layer of gray matter in the brain, where conscious perception takes place - graded potential - change in the membrane potential that varies in size, depending on the size of the stimulus that elicits it - lower motor neuron - second neuron in the motor command pathway that is directly connected to the skeletal muscle - neurotransmitter - chemical signal that is released from the synaptic end bulb of a neuron to cause a change in the target cell - precentral gyrus of the frontal cortex - region of the cerebral cortex responsible for generating motor commands, where the upper motor neuron cell body is located - propagation - movement of an action potential along the length of an axon - thalamus - region of the central nervous system that acts as a relay for sensory pathways - thermoreceptor - type of sensory receptor capable of transducing temperature stimuli into neural action potentials - threshold - membrane voltage at which an action potential is initiated - upper motor neuron - first neuron in the motor command pathway with its cell body in the cerebral cortex that synapses on the lower motor neuron in the spinal cord Answers for Critical Thinking Questions - Afferent means “toward,” as in sensory information traveling from the periphery into the CNS. Efferent means “away from,” as in motor commands that travel from the brain down the spinal cord and out into the periphery. - The lower motor neuron would be affected, because its axon extends into the periphery to reach the skeletal muscle; a peripheral disorder would damage this neuron, whereas the upper motor neuron lies entirely within the CNS.
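For readers who like a computational analogy, the all-or-nothing character of the threshold can be sketched in a few lines of code. This is an illustrative toy, not a physiological model; the voltage values are typical textbook figures.

```python
RESTING_POTENTIAL_MV = -70.0   # typical resting membrane voltage
THRESHOLD_MV = -55.0           # typical threshold voltage

def respond(stimulus_depolarization_mv):
    """Toy model: the graded potential scales with the stimulus,
    but firing an action potential is all-or-nothing."""
    graded_potential = RESTING_POTENTIAL_MV + stimulus_depolarization_mv
    if graded_potential >= THRESHOLD_MV:
        return "action potential propagates down the axon"
    return f"sub-threshold graded potential ({graded_potential:.0f} mV); no action potential"

print(respond(5.0))    # weak stimulus: no signal is transmitted
print(respond(20.0))   # strong stimulus: threshold is crossed
```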
Proteins are found in virtually every living system. They are the enzymes that are the driving force for our biological processes. They are the main components found in hair, skin, tissue and bone, and they provide the active basis for our immune response. Proteins are macromolecules (or polymers) composed of amino acids linked together by covalent peptide bonds. There are 20 different common amino acids, and they can be found in many different combinations (known as sequences) in protein molecules. Both the types of amino acids present as well as the amino acid sequence determine the final properties of the protein, and, as you can imagine, the possibilities are endless. The recipe for each highly-specialized macromolecule is contained within the DNA for all living organisms. Proteins have four levels of structure that aid them in performance of the function for which they are designed. - The primary structure is the amino acid sequence. This is the fundamental “building block” of the protein. - The secondary structure arises as the molecule grows in size and begins to twist or fold into an alpha-helix, beta sheet, or other less defined “turn” structures. - The tertiary structure is the structure that develops when side chains on a protein molecule are attracted to one another and assemble together to give the molecule a distinctive shape. - The quaternary structure is the final structure composed of multiple assembled protein molecules that form a complex. The primary structure is formed via covalent bonding, while the other three structures are due to hydrogen bonding and hydrophobic interactions. Proteins and their study are a lifetime pursuit, but hopefully this brief background will establish the idea that proteins are quite remarkable in their design and function. Proteins in Hair Care Products Proteins have been used in cosmetics applications probably since the development of vanity in the human race. The beneficial properties of these natural substances were readily recognized and utilized, as illustrated in the tales of Cleopatra and her infamous milk baths. Proteins adsorb readily onto the surface of skin and hair, forming moisture-retentive films. The films act to smooth and flatten the hair cuticle, which makes the hair shiny and more-easily detangled. These films can also provide some protection from the environment and pollutants. Proteins are generally hygroscopic, meaning they attract water molecules from the air, so they also act as humectants. Proteins added to bleaching or perming solutions have been found to significantly reduce damage to the cuticle, and their addition to dyeing solutions has been found to improve dye uptake into the hair while minimizing damage as well. Most proteins used in personal care products have been hydrolyzed, a chemical method of breaking down the large structure of the protein into a smaller fragment of the primary structure (either a polypeptide or in some cases the amino acids themselves). These smaller polypeptides are more water soluble and thus more easily mixed into a formulation, and they also more readily absorb into the cortex of the hair. Hydrolyzed proteins penetrate the cuticle and absorb into the cortex of the hair. Research has shown that as much as 30-50% of the protein found in shampoos is absorbed and retained by the hair. The percentage is even higher in conditioning products due to the absence of cleansing surfactants. This protein absorption has been found to increase the strength and elasticity of hair fibers. 
Also, the more damaged the hair, the greater the extent of absorption and retention. The high level of protein-retention by the hair may lead to buildup problems for some people, which can manifest as dry or brittle hair. This effect is more pronounced when a person has healthy hair that has had little exposure to thermal or chemical treatments. The best way to minimize or avoid this problem is to use protein-containing products sparingly if you notice build-up problems. Some amino acids found in many proteins are positively-charged, which causes them to be attracted to negative substrates such as hair and skin. Proteins and polypeptides can also be chemically modified (quaternized) much like other polymers to have a greater number of positive charges on them to make them more substantive to hair. A few examples of these types of molecules are soydimonium hydroxypropyl hydrolyzed wheat protein, lauryldimonium hydroxypropyl hydrolyzed wheat protein and cocodimonium hydroxypropyl hydrolyzed wheat protein. These modified polypeptides are excellent conditioning agents and static reducers. Some developments have been made by DuPont in devising genetic engineering techniques to produce a spider silk protein in its intact form (non-hydrolyzed) that is water soluble. They have made claims in their patents that this whole protein forms far-superior films on the hair and provides many excellent benefits. As technology in this area of biomaterials and genetic engineering develops further, we can hope to see more contributions of this sort to the field. In summary, proteins are extraordinarily complex natural materials that can be of great benefit to the hair when applied in shampoos, chemical treatments, conditioners and styling products. On the exterior they provide moisture-retention, humectant properties, smoothing and detangling, and shine. As they penetrate the interior of the hair, they add strength and elasticity and act to “patch” weak spots. They are retained by the hair in high percentages, so some users may find it beneficial to rotate protein-containing products with ones without proteins. Many consumers have also found that using a very moisturizing conditioner paired with a protein product in their routine gives added benefit, probably due to the protein acting to seal in the extra moisture. As always, everyone’s hair is different, as is their perception of what makes their hair feel and look nice, so it is always best to find what works best for you through experimentation.

Some common proteins found in hair care products:

| Protein | Major Amino Acids (generally many more amino acids are present) |
|---|---|
| Keratin | Proline, lysine, cysteine (a sulfur-containing amino acid) |
| Silk | Glycine and alanine |
| Soy | Glutamic acid, aspartic acid |
| Rice | Glutamic acid, aspartic acid, arginine |
| Milk | Glutamic acid, proline (contains all eight of the “essential” amino acids) |
| Wheat | Arginine, leucine, methionine |
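The attraction of quaternized or naturally charged peptides to hair can be illustrated with a toy calculation: count the positively charged residues (lysine, arginine, histidine) in a peptide sequence. The sequence below is invented, not an actual wheat-protein fragment.

```python
# Toy illustration: residues with positively charged side chains near
# physiological pH are K (lysine), R (arginine) and H (histidine); these
# charges attract the peptide to negatively charged sites on the hair surface.
POSITIVE_RESIDUES = set("KRH")

def count_positive(peptide: str) -> int:
    """Count positively charged residues in a one-letter-code sequence."""
    return sum(residue in POSITIVE_RESIDUES for residue in peptide.upper())

hypothetical_fragment = "GQKVLARH"   # made-up peptide for illustration
print(count_positive(hypothetical_fragment))   # 3 positively charged residues
```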
In this lesson, students will gain an understanding of the Roaring Twenties era in the inter-war years. They will develop an awareness of the key people, events, and concepts associated with this event. Students will have the opportunity to achieve this through choosing their own method of learning, from reading, research, and video watching options, as well as the chance to engage in extension activities. This lesson includes a self-marking quiz for students to demonstrate their learning. Step 1: Download a copy of the reading questions worksheet: Step 2: Answer the set questions by reading the webpage below: Download a copy of the research worksheet and use the internet to complete all the tables. Step 1: Download a copy of the viewing questions worksheet: Step 2: Answer the set questions by watching the documentary below: The Century - America’s Time: 1920-1929: Boom to Bust. (1999). ABC News.
According to the US Energy Information Administration, in 1980 the world had enough proved gas reserves to last 48 years at the 1980 rate of production. Cumulative world gas production from 1980 through 2011 was greater than the proved gas reserves in 1980. In 2011, world proved gas reserves were enough to last 58 years at 2011 production levels, even though the 2011 production rate was more than double the 1980 rate. Role of new technology As new technologies for natural gas production are discovered, the world's ultimate reserves can grow. Although some predictions of ultimate reserve recovery include provisions for new technology, not every magnitude of breakthrough can be accurately accounted for. More than half the increase in US natural gas production from 2006 to 2008 came from Texas, where production rose 15% between the first quarter of 2007 and the first quarter of 2008. This was mostly due to improved technology, which allowed the production of deepwater offshore and "unconventional" resources. Important new developments were horizontal drilling and fracking in a geologic formation known as the Barnett Shale, underlying the city of Fort Worth, which is a highly impermeable formation and difficult to produce by conventional means. The Barnett Shale now produces 6% of US natural gas. Other shale gas formations in the lower 48 states are widely distributed, and are known to contain large resources of natural gas. - "In 2010, the United States used 24.1 Tcf of natural gas." http://www.naturalgas.org/business/supply.asp further cites estimates of reserves (from multiple independent analysts) ranging from 2,632 trillion cubic feet (Tcf) of technically recoverable natural gas resources in the United States to as low as 1,451 Tcf. - "92 years worth of natural gas is technically recoverable using ... today’s technology" http://energytomorrow.org/blog/a-paucity-of-scarcity/ - Peak gas - Reserves-to-production ratio - U.S. methane emissions from livestock - Animal agriculture produces more than 100 million tons of methane a year. - Methanogenesis is the formation of methane by microbes known as methanogens. - US Energy Information Administration, Accelerated depletion - US Energy Information Administration, International energy statistics, accessed 16 Sept. 2013. - "Is U.S. natural gas production increasing?". US Energy Information Administration. 2008. Archived from the original on 2008-06-12. Retrieved 2008-09-08.
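The "years of supply" figures above are reserves-to-production (R/P) ratios. The sketch below shows the arithmetic; the production quantities are arbitrary units, since the text supplies only the ratios and the statement that production more than doubled.

```python
def years_of_supply(proved_reserves, annual_production):
    """Reserves-to-production (R/P) ratio: years remaining at the current rate."""
    return proved_reserves / annual_production

# Work in arbitrary units where 1980 production = 1 per year.
production_1980, rp_1980 = 1.0, 48   # ratio from the text
production_2011, rp_2011 = 2.0, 58   # production roughly doubled, per the text

reserves_1980 = rp_1980 * production_1980   # 48 units
reserves_2011 = rp_2011 * production_2011   # 116 units
# Despite doubled production, the R/P ratio rose, implying reserves grew:
print(f"Implied reserve growth: {reserves_2011 / reserves_1980:.1f}x")   # ~2.4x
```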
Polar easterlies The polar easterlies (also called Polar Hadley cells) are the dry, cold prevailing winds that blow from the high-pressure areas of the polar highs at the North and South Poles towards low-pressure areas within the Westerlies at high latitudes. The trade winds, which blow from the subtropical high-pressure belts towards the equator, are another example of prevailing easterly winds. The westerlies move towards the east and the easterlies move towards the west; the explanation for these directions lies in the conservation of angular momentum on the rotating Earth.
Science in Focus: Shedding Light: Lights, Camera, Action 3D Photography and Anaglyphs IV. Do it With Your Students Can you see in 3D? - Put your hand over your right eye and look at the image. - Now, put your hand over your left eye and look at the image - Look at the image with both eyes Then ask them to work with a partner as they try to answer the following questions: - Can you explain how these images were made? - What is the role of the color filters? - Why do you think there appear to be two images superimposed on each other? Looking for more? Look at the list of 3D, anaglyph, and filter resources in V. Resources.
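If you want to demonstrate to students how such images are assembled, the sketch below builds a red-cyan anaglyph from two photographs using Pillow and NumPy. The file names are placeholders; the two photographs should be the same size, taken from viewpoints a few centimeters apart, roughly the separation of human eyes.

```python
import numpy as np
from PIL import Image

# Placeholder file names: two photographs of the same scene, same dimensions,
# taken from slightly offset viewpoints (left eye and right eye).
left = np.asarray(Image.open("left_eye.jpg").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right_eye.jpg").convert("RGB"), dtype=np.uint8)

# Red channel from the left-eye image; green and blue (the cyan half)
# stay from the right-eye image. A red filter over one eye and a cyan
# filter over the other then separates the two views again.
anaglyph = right.copy()
anaglyph[..., 0] = left[..., 0]

Image.fromarray(anaglyph).save("anaglyph.jpg")
```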
- Principles of atomic (fission) weapons - Principles of thermonuclear (fusion) weapons - The effects of nuclear weapons - The first atomic bombs - The first hydrogen bombs - The spread of nuclear weapons Founding the Manhattan Project The United States’ entry into World War II in December 1941 was decisive in providing funds for a massive research and production effort for obtaining fissionable materials, and in May 1942 the momentous decision was made to proceed simultaneously on all promising production methods. Vannevar Bush decided that the army should be brought into the production plant construction activities. The U.S. Army Corps of Engineers was given the job in mid-June, and Col. James C. Marshall was selected to head the project. Soon an office in New York City was opened, and in August the project was officially given the name Manhattan Engineer District—hence Manhattan Project, the name by which this effort would be known ever afterward. Over the summer, Bush and others felt that progress was not proceeding quickly enough, and the army was pressured to find another officer that would take more decisive action. Col. Leslie R. Groves replaced Marshall on September 17 and immediately began making major decisions from his headquarters office in Washington, D.C. After his first week a workable oversight arrangement was achieved with the formation of a three-man military policy committee chaired by Bush (with chemist James B. Conant as his alternate) along with representatives from the army and the navy. Throughout the next few months, Groves (by then a brigadier general) chose the three key sites—Oak Ridge, Tenn.; Los Alamos, N.M.; and Hanford, Wash.—and selected the large corporations to build and operate the atomic factories. In December contracts were signed with the DuPont Company to design, construct, and operate the plutonium production reactors and to develop the plutonium separation facilities. Two types of factories to enrich uranium were built at Oak Ridge. On November 16 Groves and physicist J. Robert Oppenheimer visited the Los Alamos Ranch School, some 100 km (60 miles) north of Albuquerque, N.M., and on November 25 Groves approved it as the site for the main scientific laboratory, often referred to by its code name Project Y. The previous month, Groves had decided to choose Oppenheimer to be the scientific director of the laboratory where the design, development, and final manufacture of the weapon would take place. By July 1943 two essential and encouraging pieces of experimental data had been obtained—plutonium did give off neutrons in fission, more than uranium-235; and the neutrons were emitted in a short time compared to that needed to bring the weapon materials into a supercritical assembly. The theorists working on the project contributed one discouraging note, however, as their estimate of the critical mass for uranium-235 had risen more than threefold, to something between 23 and 45 kg (50 and 100 pounds). Selecting a weapon design The emphasis during the summer and fall of 1943 was on the gun method of assembly, in which the projectile, a subcritical piece of uranium-235 (or plutonium-239), would be placed in a gun barrel and fired into the target, another subcritical piece. After the mass was joined (and now supercritical), a neutron source would be used to start the chain reaction. A problem developed with applying the gun method to plutonium, however. 
In manufacturing plutonium-239 from uranium-238 in a reactor, some of the plutonium-239 absorbed a neutron and became plutonium-240. This material underwent spontaneous fission, producing neutrons. Some neutrons would always be present in a plutonium assembly and would cause it to begin multiplying as soon as it “went critical” but before it reached supercriticality; the assembly would then explode prematurely and produce comparatively little energy. The gun designers tried to overcome this problem by achieving higher projectile speeds, but they lost out in the end to a better idea—the implosion method. In late April 1943 a Project Y physicist, Seth H. Neddermeyer, proposed the first serious theoretical analysis of implosion. His arguments showed that it would be feasible to compress a solid sphere of plutonium by surrounding it with high explosives and that this method would be superior to the gun method both in its higher velocity and in its shorter path of assembly. John von Neumann, a mathematician who had experience in working on shaped-charge, armour-piercing projectiles, supported the implosion method enthusiastically and went on to be a major contributor to the design of the high-explosive “lenses” that would focus the compression inward. Physicist Edward Teller suggested that because the material was compressed, less of it would be needed. By late 1943 the implosion method was being given a higher priority, and by July 1944 it had become clear that an efficient gun-assembly device could not be built with plutonium. Los Alamos’ central research mission rapidly shifted to solve the new challenge. Refinements in design eventually resulted in a solid 6-kg (13-pound) sphere of plutonium, with a small hole in the centre for the neutron initiator, that would be compressed by imploding lenses of high explosive.
Solar power is the conversion of sunlight into electricity. The amount of solar energy reaching the earth’s surface is huge – almost 6,000 times more than the power consumed by humans throughout the world. There are two systems for converting sunlight into electricity:
- Photovoltaic System (PV), and
- Concentrated Solar Power System (CSP).

The concentrating solar power system (CSP) uses lenses or mirrors to focus sunlight into a sharp beam with the help of concentrating solar collectors. This powerful beam is then focused on a small receiver to heat a fluid to a high temperature. The hot fluid is used to generate steam that drives a steam turbine coupled to an electrical generator.

Types of Concentrating Solar Collectors

The various types of concentrating solar collectors are as under:
- Parabolic trough collector.
- Power tower receiver.
- Parabolic dish collector.
- Fresnel lens collector.

Parabolic Trough Collector

This is a line-focusing type of collector. In this type of collector, the solar radiation falling on the area of the parabolic reflector is concentrated at the focus of the parabola. When the reflector is manufactured in the form of a trough with a parabolic cross-section, the solar radiation is focused along a line. An absorber pipe is placed along this line and a working fluid (usually synthetic oil or water) flows through it. When the focused solar radiation falls on the absorber pipe, it heats the fluid to a high temperature. The heat absorbed by the working fluid is then transferred to water for producing steam. The focus of the solar radiation shifts as the sun’s elevation changes. In order to keep the solar radiation focused on the absorber pipe, either the trough or the collector pipe is rotated continuously about the axis of the absorber pipe.

Solar Power Plant Using Parabolic Trough Collectors

These power plants employ an array of parabolic trough collectors fitted with a sun-tracking device to collect the solar radiation, which is used to heat a fluid (water). The general range of working temperature is 250°C to 400°C. This heat is transferred to a storage tank and finally to feed water, and steam is generated in the steam generator. This steam is used to drive a turbine coupled to an electric generator. The mechanical energy produced by the turbine is converted into electrical power by the generator. The exhaust of the steam turbine is condensed in the condenser with the help of circulating cold water. The condensate is returned to the boiler with the help of a feed pump. Parabolic trough collectors are generally preferred over dish collectors because of their low cost and because they require sun tracking in one plane only. The system works on the Rankine cycle. The block diagram of a power plant using parabolic trough collectors is shown in the figure.

Power Tower Receiver

In this collector, the receiver is located at the top of a tower. A large number of independently moving flat mirrors (heliostats), spread over a large area of ground, focus the reflected solar radiation on the receiver. The heliostats are installed all around the central tower. Each heliostat is rotated about two axes so as to track the sun. The solar radiation reflected from the heliostats is absorbed by the receiver mounted on a tower of about 500 m height. The tower supports a bundle of vertical tubes containing the working fluid. The working fluid in the absorber receiver is converted into high-temperature steam of about 600°C – 700°C.
This steam is supplied to a conventional steam power plant coupled to an electric generator to generate electric power.

Parabolic Dish Collector

In these collectors, the receiver is placed at the focal point of the concentrator. The solar beam radiation is focused at a point where the receiver (absorber) is placed, and the solar radiation is collected in the receiver. A small volume of fluid is heated in the receiver to a high temperature. This heat is used to run a prime mover coupled to a generator. A typical parabolic dish collector has a dish of 6 m diameter. This collector requires two-axis tracking. It can yield temperatures up to 3000°C. Due to the limitations of size and the small quantity of fluid, dish-type solar collectors are suitable only for small-scale power generation (up to a few kW).

Fresnel Lens Concentrating Collector

This collector uses a Fresnel lens, which consists of fine, linear grooves on one surface of a refracting material of optical quality, the other side being flat. The angle of each groove is designed so that the optical behavior of the Fresnel lens is similar to that of a common lens. Solar radiation falling normally on the lens is refracted and focused on a line, where the absorber tube (receiver) is placed to absorb the solar radiation.

Advantages and Disadvantages of Solar Power
- Solar power is silent, limitless and free.
- It is pollution free. It releases none of the CO2, SO2 and NO2 gases which are produced in coal-fired generating stations.
- It does not contribute to global warming.
- Operating costs of solar power plants are very low.
- Solar electricity is not produced at night, hence a complementary power system is required.
- Solar power output is much reduced in cloudy conditions.
- It is very location dependent, being suitable only for sites with favorable sunshine.
- Solar power plants require a very large ground area.
- At present, solar power is very costly.
- Low thermal efficiency (see the rough estimate below).
- A thermal storage system is needed.
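To see why the thermal efficiency of such plants is inherently limited, a rough upper bound can be taken from the Carnot relation; the numbers below are illustrative assumptions, not measurements from any particular plant. For a working-fluid temperature of 400°C (about 673 K) and a condenser near ambient temperature (about 300 K):

    η_max = 1 − T_cold / T_hot = 1 − 300/673 ≈ 0.55

So even an ideal heat engine operating at parabolic-trough temperatures could convert at most about 55% of the collected heat into work. Real Rankine-cycle plants, with collector, piping and turbine losses on top of this, achieve considerably less, which is why low thermal efficiency appears among the disadvantages listed above.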
About 90 percent of the world's 30 million fishermen work in Asia (FAO 1998b), roughly 80 percent of them as small-scale or artisanal fishermen (IPFC 1994). Population growth, open access to the sea, and the belief in unlimited fishing resources in the sea have doubled the number of fisherfolk since 1970 (FAO 1998b). On the other hand, fishery resources are limited and are depleting fast in most coastal areas of Asia. The work and production of most commercial fisheries are well documented by national and international organizations. However, the importance of small-scale fishery for national food security and for specific social groups within a region is not fully understood. One reason is that many fisherfolk involved in small-scale fishery offer their products on local markets or consume their catch themselves. This makes it difficult to collect reliable fishery data, and assessments probably underestimate the total catch. Also, the differentiation between small-scale or artisanal fishery and industrial or commercial fishery differs from one country to another in Southeast Asia. Therefore, comparable data about the catch and value of small-scale fishery in the region are not generally available. Besides supplying food, small-scale fishery also provides employment for a large group of mainly poor people. Fishing is often the only opportunity for villagers in coastal rural areas to earn some income. A study of small-scale fishery in Southeast Asia should therefore cover social as well as economic aspects.

Population growth has caused a rise in the demand for fish. The increased fishing pressure, particularly in coastal waters, has already resulted in overexploited inshore fish stocks in many parts of Southeast Asia. The consequences for the fisheries as well as for the marine environment have been disastrous. Lower catches further increase the fishing effort and lead to the use of destructive fishing techniques, such as fishing with too-fine mesh sizes (mosquito nets) or with dynamite, which further accelerates the overexploitation of the aquatic resources and results in the destruction of the marine environment. Finally, in order to make a living, fishermen are forced to turn to other occupations or explore new fishing grounds. Although open access to marine resources is practiced in most areas of the region, migration into other fishing grounds has resulted in conflicts with the folk already fishing there. Migrating fishermen, who use different, mainly destructive, fishing gear, are seen as competitors for local fish stocks. Besides, the higher number of fishermen further increases the fishing pressure on fish stocks and further depletes fishing grounds. Therefore, migration into other fishing grounds is no solution to the problems of overexploited inshore resources. The alternative is for fishermen to change their occupation. However, in rural areas with a low average income and often no possibility of land ownership, opportunities for alternative income-generating activities are limited. In most cases, fisherfolk have to leave the village. This increases migration pressures on cities and leads to changes in the population structure of rural areas. The best way to ensure the livelihood of small-scale fisherfolk in rural areas is to establish sustainable fishery management plans that will support the rural poor fisherfolk. For fishery management, the implementation of the FAO Code of Conduct for Responsible Fishery (1995) will provide the necessary legal framework to achieve this goal.
However, fishery management also has to recognize the social importance of small-scale fishery. It has to address the problem that the sustainable use of marine resources may no longer generate enough income for all fisherfolk engaged in small-scale fishery. Only if the economics of small-scale fishery is fully understood and its social importance as a source of employment and income is fully recognized can proper recommendations for socially equitable and sustainable fishery management be made. This stresses the need for socio-economic studies on small-scale fishery. This study is a step in this direction. It was carried out in Southern Thailand to review the situation of small-scale fisherfolk along the west coast, with special emphasis on the bay of Phang-nga. With a full picture of the social structure of the area and a thorough description of its main fishery activities, their cost, profit and value as job-providing businesses, this study presents a fishery management plan adapted to the conditions of Thailand's Andaman Sea. The objectives of the study were to:
Medical Definition of Infection, pork tapeworm

Infection, pork tapeworm: Known medically as cysticercosis, an infection caused by Taenia solium (the pork tapeworm). Infection occurs when the tapeworm larvae enter the body and form cysticerci (SIS-tuh-sir-KEY) (cysts). When cysticerci are found in the brain, the condition is called neurocysticercosis (NEW-row SIS-tuh-sir-KO-sis).

The tapeworm that causes cysticercosis is found worldwide. Infection is found most often in rural, developing countries with poor hygiene, where pigs are allowed to roam freely and eat human feces. This allows the tapeworm's life cycle to be completed and the infection to continue. Infection can occur, though rarely, in people who have never traveled outside of the United States. Taeniasis and cysticercosis are very rare in Muslim countries, where eating pork is forbidden.

Cysticercosis is contracted by accidentally swallowing pork tapeworm eggs. Tapeworm eggs are passed in the bowel movement of a person who is infected. These tapeworm eggs are spread through food, water, or surfaces contaminated with feces. Infection can happen by consuming contaminated food or water, or by putting contaminated fingers to your mouth. A person who has a tapeworm infection can reinfect themselves (autoinfection). Once inside the stomach, the tapeworm egg hatches, penetrates the intestine, travels through the bloodstream and may develop into cysticerci in the muscles, brain, or eyes.

The signs and symptoms of the disease depend on the location and number of cysticerci in the body. Symptoms can occur months to years after infection, usually when the cysts are in the process of dying. When this happens, the brain can swell. The pressure caused by the swelling is what causes most of the symptoms of neurocysticercosis. Most people with cysticerci in muscles won't have symptoms of infection.

Diagnosis can be difficult and may require several testing methods. The health care provider will usually ask about where the patient has traveled and about their eating habits. Diagnosis of neurocysticercosis is usually made by MRI or CT brain scans. Blood tests are available to help diagnose an infection but may not always be accurate. If surgery is necessary, confirmation of the diagnosis can be made by the laboratory.

Treatment is generally with anti-parasitic drugs in combination with anti-inflammatory drugs. Surgery is sometimes necessary to treat cases in the eyes, cases that are not responsive to drug treatment, or to reduce brain edema (swelling). Not all cases of cysticercosis are treated. Often, the decision of whether or not to treat neurocysticercosis is based upon the number of lesions found in the brain and the symptoms. When only one lesion is found, treatment is often not given. If there is more than one lesion, specific anti-parasitic treatment is generally recommended. If the brain lesion is considered calcified (meaning that a hard shell has formed around the tapeworm larva), the cysticercus is considered dead and specific anti-parasitic treatment is not beneficial. As the cysticerci die, the lesion will shrink, the swelling will go down, and often symptoms (such as seizures) will go away.

To prevent cysticercosis and infection with other germs spread through feces: wash your hands with soap and water after using the toilet and before handling food, wash and peel all raw vegetables and fruits before eating, and drink only safe (boiled or bottled) water in areas where the water supply may be contaminated.

Cysticercosis is not spread from person to person. However, a person infected with the intestinal tapeworm stage of the infection (T. solium) will shed tapeworm eggs in their bowel movements. Tapeworm eggs that are accidentally swallowed by another person can cause infection.
Anyone suspected of having cysticercosis (and family members) should be tested. Because the tapeworm infection can be difficult to diagnose, several stool specimens over several days may be needed to examine the stools for evidence of a tapeworm.

Source: MedTerms™ Medical Dictionary. Last Editorial Review: 6/9/2016
There is a good chance higher animals would not exist without mitochondria, according to Molecular Expressions from The Florida State University. Without these organelles, it would be very difficult for some organisms to produce enough energy to survive.

Mitochondria convert oxygen and nutrients into a form of energy called adenosine triphosphate, or ATP. Cells need ATP to perform all of their metabolic functions. Researchers from The Florida State University say cells are able to produce approximately 15 times more energy with mitochondria than they would without them. If cells did not have any mitochondria, they would have to use anaerobic glycolysis to produce ATP, according to the fourth edition of "Molecular Biology of the Cell." This process converts glucose into a substance called pyruvate, but it is less efficient than aerobic respiration, so only a small amount of the energy in glucose is released. When pyruvate enters the mitochondria, however, the sugars are completely metabolized, and as a result much more energy is available to the cell.

Humans also need mitochondria to produce cholesterol and a component of hemoglobin, notes Genetics Home Reference. Even if cells were able to produce enough energy using anaerobic glycolysis, it is likely humans would still not be able to survive without mitochondria to help regulate these functions.
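The roughly 15-fold figure follows from standard textbook ATP counts, quoted here for orientation (the article itself does not give them): anaerobic glycolysis nets about 2 ATP per molecule of glucose, while complete aerobic oxidation in the mitochondria yields on the order of 30 ATP per glucose, and 30/2 = 15.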
California State Standards (formerly Common Core SS)

The Common Core State Standards initiative (CCSS) was a collaboration by various stakeholders to develop problem-solving individuals who use reliable information to make decisions. These individuals are ready for college and career, and to compete in the global marketplace. These standards were adopted by the state of California in the spring of 2013. The shift in curricular instruction occurs gradually over the next three years in all grades and content areas. Teamwork between parents, students and teachers is crucial in supporting the development of critical thinking and the application of knowledge to real-world scenarios.

What Parents Can Do
- Stay positive! Remember, EVERY KID CAN DO IT!
- Encourage problem-solving by offering options rather than solutions.
- Have students explain their thinking to you.
- Encourage challenging reading, especially of non-fiction texts.
- Engaging Your Student's Thinking (PDF): use this handout to engage your children in conversation and encourage learning.
- Skills for 21st Century Learning (PDF)
- Helping with Homework: A Parent Resource
Scientists from 80 institutions worldwide collaborate to provide new answers to key evolutionary questions about birds and dinosaurs. Jim Drury reports.

Many geneticists agree that birds are the living descendants of dinosaurs, sharing characteristics such as wings and feathers. Now the authors of an unprecedented scientific collaboration across 80 institutions in 20 countries say they have evidence to back this up. The Avian Phylogenome Project saw the genomes of 45 avian species sequenced, allowing the creation of arguably the most reliable avian tree of life ever drawn up. Professor of Genetics at the University of Copenhagen Tom Gilbert says the evidence backs up theories that modern birds emerged fast from a mass extinction event that wiped out the dinosaurs 66 million years ago.

SOUNDBITE (English) PROFESSOR OF GENETICS AT UNIVERSITY OF COPENHAGEN, TOM GILBERT, SAYING: "Firstly it tells us that dinosaurs were very, very successful, but actually what it then starts to tell us is we can look at what genetic features are basically common to all birds and by this you can then infer that they were probably present in dinosaurs and we can find a number of things. For example, all birds have got pretty small genomes and this again might tie in with the ability to fly."

Twenty-eight studies are being published simultaneously in journals such as Science, Genome Biology, and GigaScience, with multiple findings. A Duke University team describe how vocal learning may have evolved independently in a few bird groups, and say this could help our understanding of human speech development. Montclair State University scientists believe the mutations that led to birds losing their teeth began 116 million years ago. Other studies revealed that various bird species' sex chromosomes are at different stages of evolution, while the genomes of saltwater crocodiles and American alligators are evolving exceptionally slowly. Gilbert says the next stage is to establish a theoretical genome of dinosaurs.

SOUNDBITE (English) PROFESSOR OF GENETICS AT UNIVERSITY OF COPENHAGEN, TOM GILBERT, SAYING: "That hasn't been finished yet but once that's been done we can actually start to make predictions about features that dinosaurs had, so going from beyond the information from a single little bone, we can actually suggest that dinosaurs had this kind of metabolism or they had these kind of feathers, for example, or they had this kind of vision or this kind of smell or even this kind of brain."

The bird genomes were created using frozen tissue samples collected over the past 30 years by museums and other scientific institutions. Most of the sequencing took place at the Beijing Genomics Institute. The consortium is creating a database to be made publicly available for scientists to further our understanding of both modern and prehistoric life.
Sql Stored Procedure Tutorial with Examples

A stored procedure is a subroutine available to applications that access a relational database system. A stored procedure (sometimes called a proc, sproc, StoPro, StoredProc, StoreProc, sp or SP) is actually stored in the database data dictionary.

What is a SQL Stored Procedure?

A stored procedure is a group of Transact-SQL statements compiled into a single execution plan. So if you think about a query that you write over and over again, instead of having to write that query each time you would save it as a stored procedure and then just call the stored procedure to execute the SQL code that you saved as part of the stored procedure. A stored procedure in SQL Server is similar to a procedure in other programming languages: it's a precompiled collection of Transact-SQL statements stored under a name and processed as a unit.
- It can accept input parameters and return multiple values in the form of output parameters to the calling procedure or batch (a parameterized sketch appears at the end of this article).
- It can contain programming statements that perform operations in the database, including calling other procedures.
- It can return a status value to a calling procedure or batch to indicate success or failure (and the reason for failure).

Advantages of using stored procedures
- Modular programming
- Faster execution
- Reduction in network traffic
- Efficient reuse of code and programming abstraction
- Can be used as a security mechanism (grant users permission to execute a stored procedure independently of underlying table permissions)

Sql Stored Procedure Example

To create a SQL stored procedure, use the create command. This example assumes an Employee table with Name and City columns:

create procedure sp_ShowEmpDetails @City varchar(50)
as
    select Name from Employee where City = @City;

Exec Stored Procedure

To execute a SQL stored procedure, use the execute command:

execute sp_ShowEmpDetails 'meerut'

Drop Stored Procedure

To delete a SQL stored procedure, use the drop command:

drop procedure sp_ShowEmpDetails;

Conclusion: It was fun learning and writing an article on stored procedure examples in SQL Server 2005. I hope this article will be helpful for enthusiastic people who are eager to learn and implement some interesting stuff in new technology.
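As a follow-up to the bullet list above, here is a minimal sketch of a procedure that combines an input parameter, an output parameter, and a return status. It assumes the same hypothetical Employee table with a City column; the names and types are illustrative, not from the original article:

create procedure sp_GetEmpCountByCity
    @City varchar(50),       -- input parameter
    @EmpCount int output     -- output parameter returned to the caller
as
begin
    select @EmpCount = count(*)
    from Employee
    where City = @City;

    return 0;                -- status value: 0 conventionally means success
end

-- Calling batch: capture both the output parameter and the return status.
declare @Count int, @Status int;
exec @Status = sp_GetEmpCountByCity @City = 'meerut', @EmpCount = @Count output;
print @Count;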
Character education is a crucial component of a good education. A large, reputable research study looked for the character traits that correlate most highly with success in life. The study identified the top 24 traits. Twenty-four is too many for most people to remember, however, so here are the top 7 in decreasing order of importance:
- Zest: approaching life with excitement and energy; feeling alive and activated
- Grit: finishing what one starts; completing something despite obstacles; a combination of persistence and resilience
- Self-control: regulating what one feels and does; being self-disciplined
- Curiosity: taking an interest in experience for its own sake; finding things fascinating
- Social intelligence: being aware of the motives and feelings of other people and oneself
- Gratitude: being aware of and thankful for the good things that happen
- Hope (Optimism): expecting the best in the future and working to achieve it

I talk about these a lot in assembly, particularly zest; the phrase ‘not apathy but zest’ has become so familiar to Birkdale students that it provokes a smile every time I use it! Several parents have told me that they cannot look at the Amazon logo with its a-to-z arrow without thinking of the phrase. Birkdale places, and has always placed, great emphasis on developing ‘character’, more usually expressed as the desire to turn out well-rounded people. In the words of part of the school’s mission statement, ‘We aim to give all pupils a strong academic education, while developing them as whole individuals prepared for their wider role as responsible citizens willing to serve the community’.

Here are five points about character education:

Firstly, it is perfectly possible both to achieve excellent examination results and to produce well-rounded students; there is no dichotomy between high academic standards and character education, and in fact people with zest and grit tend to secure better examination grades for obvious reasons.

Secondly, character education cannot be taught through explicitly focused lessons. Timetabling innovations which schedule a lesson on grit for 14 year olds followed by a double lesson of Maths are doomed to failure, although the latter might well help students to learn grit as well as curiosity. What would one do in a ‘grit’ lesson?

Thirdly, developing character requires students to feel part of a community and to have a wide range of opportunities. Real communities always strengthen honesty, integrity and dignity as individuals absorb the collective values which are regularly and systematically articulated. Participating in sports teams, musical ensembles, drama productions, Duke of Edinburgh expeditions and so on all encourage grit and social intelligence. Charitable fund-raising and volunteering promote community spirit. Assemblies provide opportunities for shared experience and reflection, engendering some sense of social intelligence as well as stimulating gratitude and hope and infecting many with zest.

Fourthly, developing character requires students to have excellent relationships with skilled teachers: at Birkdale the Form Tutors actively encourage students to be involved with appropriate activities that will build particular character strengths in the individual. A shy student may be encouraged to try some debating or public speaking to develop confidence and social intelligence.
Equally, a brash peer who finds it difficult to recognise the contribution of others may be helped to participate in a choir and experience the power of collective endeavour leading to greater social intelligence. In some year groups we ask students to rate themselves in each of the 7 character traits and think about how they may address their weaknesses. Repeating the exercise a year later is a powerful way for the students to judge their own progress and take responsibility for their personal growth. In a relatively small school, the teachers get to know each student extremely well. Fifthly, Birkdale has a curriculum that deliberately includes topics and opportunities to develop curiosity. Each subject area covers the standard National Curriculum material but departments also have time to tackle extra material chosen to stimulate interest. Higher up the school the Extended Project Qualification allows Sixth Form students to achieve accreditation for completing a curiosity-driven research project. Birkdale also runs a London Research Trip; potential student researchers pitch an idea for a project to members of staff and the winners receive support from the school to travel to London and make use of libraries, museums, galleries and university staff to research their idea. Upon returning from the capital, the winning students provide a substantial talk to staff, parents and students as well as a summary of their findings to the whole school in an assembly. I am much encouraged that the days of schools being judged solely on examination results now seem very much to be numbered. Great schools have always been about preparing students for life and not just for GCSE.
We're unlocking the evolution of incarceration as our ancestors saw it. If Fox’s new TV series “Alcatraz” has piqued your curiosity about prisons in your ancestors’ day, penitentiaries’ real history may surprise you. Until the mid-18th century, dungeons, gaols and other places of incarceration weren’t built primarily for punishing violent offenders. In the United States until the 1830s and England as late as 1869, debtors were sent to prison—where, ironically, they had to pay for their keep. As little as 60 cents’ worth of debt could mean imprisonment. England’s political detainees were locked in the Tower of London or Pontefract Castle. Those we’d consider criminals, however, were jailed only until they could be deported to penal colonies such as America, Australia or France’s Devil’s Island. Or they would be held pending corporal or capital punishment. In the 1500s and 1600s, lawbreakers were shamed and made into examples in public events: whipping, branding, dunking, confinement to the stocks. Execution was the punishment for many crimes, not just murder, reducing the need for cells.
The 1997 Nobel Prize for Chemistry has been awarded to three biochemists for the study of the important biological molecule adenosine triphosphate. This makes it a fitting molecule with which to begin the 1998 collection of Molecules of the Month.

All living things, plants and animals, require a continual supply of energy in order to function. The energy is used for all the processes which keep the organism alive. Some of these processes occur continually, such as the metabolism of foods, the synthesis of large, biologically important molecules, e.g. proteins and DNA, and the transport of molecules and ions throughout the organism. Other processes occur only at certain times, such as muscle contraction and other cellular movements. Animals obtain their energy by oxidation of foods; plants do so by trapping the sunlight using chlorophyll. However, before the energy can be used, it is first transformed into a form which the organism can handle easily. This special carrier of energy is the molecule adenosine triphosphate, or ATP.

The ATP molecule is composed of three components. At the centre is a sugar molecule, ribose (the same sugar that forms the basis of RNA). Attached to one side of this is a base (a group consisting of linked rings of carbon and nitrogen atoms); in this case the base is adenine. The other side of the sugar is attached to a string of phosphate groups. These phosphates are the key to the activity of ATP.

[Figure: ATP consists of a base, in this case adenine (red), a ribose (magenta) and a phosphate chain (blue).]

ATP works by losing the endmost phosphate group when instructed to do so by an enzyme. This reaction releases a lot of energy, which the organism can then use to build proteins, contract muscles, etc. The reaction product is adenosine diphosphate (ADP), and the phosphate group either ends up as orthophosphate (HPO₄²⁻) or attached to another molecule (e.g. an alcohol). Even more energy can be extracted by removing a second phosphate group to produce adenosine monophosphate (AMP). When the organism is resting and energy is not immediately needed, the reverse reaction takes place and the phosphate group is reattached to the molecule using energy obtained from food or sunlight. Thus the ATP molecule acts as a chemical 'battery', storing energy when it is not needed, but able to release it instantly when the organism requires it.

The fact that ATP is Nature's 'universal energy store' explains why phosphates are a vital ingredient in the diets of all living things. Modern fertilizers often contain phosphorus compounds that have been extracted from animal bones. These compounds are used by plants to make ATP. We then eat the plants, metabolise their phosphorus, and produce our own ATP. When we die, our phosphorus goes back into the ecosystem to begin the cycle again...

The 1997 Nobel Prize for Chemistry was shared by Paul D. Boyer, John E. Walker and Jens C. Skou. The prize was for the determination of the detailed mechanism by which ATP shuttles energy. The enzyme which makes ATP is called ATP synthase, or ATPase, and sits on the mitochondria in animal cells or chloroplasts in plant cells. Walker first determined the amino acid sequence of this enzyme, and then elaborated its three-dimensional structure. Boyer showed that, contrary to the previously accepted belief, the energy-requiring step in making ATP is not the synthesis from ADP and phosphate, but the initial binding of the ADP and the phosphate to the enzyme.
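In rough numbers, the energy bookkeeping described above is usually summarized by the hydrolysis reaction (the figures quoted here are standard textbook values added for orientation, not from the original article):

    ATP + H2O → ADP + Pi,   ΔG°′ ≈ −30.5 kJ/mol

Under actual cellular concentrations the free energy released is larger still, often around −50 kJ/mol, which is what makes ATP such an effective 'battery': the resting-state reverse reaction (ADP + Pi → ATP) must be driven by at least this much energy from food or sunlight.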
Skou was the first to show that a related enzyme, the sodium–potassium pump (Na⁺,K⁺-ATPase), promoted ion transport through membranes, giving an explanation for nerve cell ion transport as well as fundamental properties of all living cells. He later showed that the phosphate group that is ripped from ATP binds to the enzyme directly. This enzyme is capable of transporting sodium ions when phosphorylated like this, but potassium ions when it is not. A structure file for bovine ATPase (a roughly 2 Mbyte PDB file) is also available.

References: Chemistry in Britain, November 1997. Much more information on the history of ATP and ATPase can be found at the Swedish Academy of Sciences and at Oxford University.
Typography. The art and professional practice of communication in visual language. Broadly, typography consists of the choice and application of:
- Fonts, the software used to generate typefaces.
- Typefaces, in their style, size and weight.
- Spacing of these letterforms, in words, paragraphs and lines.
- Arrangement of these blocks of information appropriately for the media on which they are to be distributed and read.

Typography can be broadly divided into:
- Display typography, which aims to communicate visually as well as literally. It is often allusive and evocative in its visual form as well as in the strict meaning of the language used. The history and associations of particular letterforms are carefully chosen to reinforce the message. Letterforms can be considered comparable to different ‘tones of voice’ or the full range of dramatic forms of speech. Examples include: advertising hoardings, posters, book jackets.
- Text typography, which is primarily for continuous reading. It aims to unobtrusively convey the intentions of the author with as little informational ‘noise’ as possible. Letterforms or typefaces for continuous reading tend to be relatively conservative and displayed in a range of layout conventions appropriate to their use, to aid readability. There is often a complex hierarchy of information design to ensure that the relative importance of elements and the optimal flow of information are implicit in the design. Examples include: book and magazine design; fiction and non-fiction texts.
- Website design, currently an interesting mixture of the two approaches, with the addition of new means of user interaction and animation.
- Typography also includes the highly specialised field of type design but excludes the autographic craft traditions of lettering and calligraphy, which are usually not considered part of this discipline unless they result in a full font or reusable typeface.

Some suggested current readings would include:
- HELLER, Steven, The Education of a Typographer, Allworth Press
- LUPTON, Ellen, Thinking with Type, Princeton
- BAINES, Phil & HASLAM, Andrew, Type & Typography, Laurence King
- JURY, David, About Face

Some seminal classics include:
- MORISON, Stanley, First Principles of Typography, Cambridge UP
- SIMON, Oliver, An Introduction to Typography, Faber
- BRINGHURST, Robert, The Elements of Typographic Style, Hartley & Marks
- TRACY, Walter, Letters of Credit, Gordon Fraser
- TSCHICHOLD, Jan, The New Typography
- TSCHICHOLD, Jan, Asymmetric Typography, Faber
- TSCHICHOLD, Jan, The Form of the Book, Lund Humphries
[Image: The healthy little brown bats roosting close to the bat with white-nose syndrome risk infection with the fungus]

The deadly fungus that causes white-nose syndrome is sweeping through North American bat populations, and little brown bats are adapting their behavior to avoid it. Although these bats typically clump together in large groups, they are now spreading out to roost separately, a change in behavior that may be helping the bat populations rebound. So what does a bat-killing fungus have to do with human prejudice? The bats’ trick of splitting up to survive contagion may also have led humans to divide into tribes and respond hostilely to members of different, potentially diseased groups. In a post on Scientific American’s Guest Blog, biologist Rob Dunn writes about the link between infectious diseases and human prejudice.

[Image: One of the prints in El Castillo Cave’s Panel of Hands was created more than 37,300 years ago.]

A new study has revealed that Spain’s El Castillo Cave contains the oldest known cave paintings in Europe, with a handprint dating back 37,300 years and a red circle that was daubed onto the wall at least 40,600 years ago. Instead of testing the paint’s age, a team of British and Spanish researchers measured the age of the stone that had formed around the drawings. In a cave, mineral-rich water drips over the walls, eventually depositing stalactites, stalagmites, and the sheet-like formations called flowstone. Some prehistoric artists had painted over flowstone made of the mineral calcite, and then water flowed over the paint and deposited even more calcite, leaving the drawings sandwiched between mineral layers. The researchers used uranium-thorium dating to accurately determine the age of the mineral layers and therefore the window when the art itself was created; unlike the similar, more conventional carbon-14 method, uranium-thorium dating gives accurate results without damaging the subject.

[Image: Remnants of a Cryptocarya woodii leaf, which researchers say was part of the oldest bedding ever found]

In a South African cave, researchers have uncovered traces of the oldest known human bedding, 77,000-year-old mats made of grasses, leaves, and other plant material. While it’s not especially surprising that early humans would have found a way to improve the cold, generally unpleasant experience of sleeping on a cave floor, archaeologists know little about our ancestors’ sleeping habits and habitats.

[Image: The ochre paint found in the abalone shells seems to have been made from a specific recipe.]

As archaeologists unearth scattered artifacts from the early years of our species, one of the questions they ask themselves is: when did early humans start thinking and behaving like modern humans? The recent discovery of a 100,000-year-old site where paint was manufactured—equipped with mixing containers and tools—suggests that even very distant ancestors had something of our ability to plan, as well as a basic sense of chemistry.

What’s the News: It turns out that the strong-jawed, big-toothed human relative colloquially known as “Nutcracker man” may never have tasted a nut. In a finding that questions traditional ideas of early hominid diet, researchers discovered that Paranthropus boisei, a hominid living in east Africa between 2.3 and 1.2 million years ago, mostly fed on grasses and sedges. “Frankly, we didn’t expect to find the primate equivalent of a cow dangling from a remote twig of our family tree,” researcher Matt Sponheimer told MSNBC.
Magnetic Field Uses Sound Waves to Ignite Sun's Ring of Fire

Sound waves escaping the Sun's interior create fountains of hot gas that shape and power the chromosphere, a thin region of the Sun's atmosphere which appears as a ruby red "ring of fire" around the moon during a total solar eclipse, according to research funded by NASA and the National Science Foundation (NSF). These results were presented May 29 at the American Astronomical Society Meeting in Honolulu, Hawaii.

The chromosphere is important because it is largely responsible for the deep ultraviolet radiation that bathes the Earth, producing our atmosphere's ozone layer, and it has the strongest solar connection to climate variability. The new result also helps explain a mystery that has existed since the middle of the last century -- why the chromosphere (and the tenuous corona above) is much hotter than the visible surface of the star. "It's like getting warmer as you move away from the fire instead of cooler, certainly not what you expect," said Scott McIntosh, a researcher at Southwest Research Institute, Boulder, Colo.

“This work finds the missing piece of the puzzle that has fascinated many generations of solar astronomers. When you fit this piece in place, our vision of the chromosphere becomes clear,” said Alexei Pevtsov, Program Scientist at NASA Headquarters, Washington.

Using spacecraft, ground-based telescopes, and computer simulations, these new results show that the Sun's magnetic field allows the release of wave energy from its interior, permitting the sound waves to travel through thin fountains upward into the solar chromosphere. These magnetic fountains form the mold for the chromosphere. "Scientists have long realized that solar magnetic fields hold the key to tapping the vast energy reservoir locked in the Sun's interior," said Paul Bellaire, program director in NSF's division of atmospheric sciences. "These researchers have found the ingenious way that the Sun uses magnetic keys to pick those locks."

Over the past twenty years, helioseismologists have studied energetic sound waves as probes of the Sun's interior structure because they are largely trapped by the Sun's visible surface -- the photosphere. The new research found that some of these waves can escape the photosphere into the chromosphere and corona. To make the new discovery, the team used observations from the SOHO and TRACE spacecraft combined with those from the Magneto-Optical filters at Two Heights (MOTH) instrument stationed in Antarctica and the Swedish 1-meter (3-foot) Solar Telescope in the Canary Islands. The observations gave detailed insight into how some of these trapped waves manage to leak out through magnetic "cracks" in the photosphere, sending mass and energy shooting upward into the atmosphere above.

"The Sun's interior vibrates with the peal of millions of bells, but the bells are all on the inside of the building. We have been able to show how the sound can escape the building and travel a long way using the magnetic field as a guide," continued McIntosh. By analyzing motions of structures in the solar atmosphere in detail, the scientists observed that near strong knots of magnetic field, sound waves from the interior of the Sun can leak out and propagate upward into its atmosphere.
"The constantly evolving magnetic field above the solar surface acts like a doorman opening and closing the door for the waves that are constantly passing by," said Bart De Pontieu, a researcher Lockheed Martin Solar and Astrophysics Lab, Palo Alto, Calif. These results were confirmed by state-of-the-art computer simulations that show how the leaking waves continually propel fountains of hot gas upward into the Sun's atmosphere, which fall back to its surface a few minutes later. The scientists were able to independently demonstrate that the magnetic field controls the release of mass and wave energy into the solar atmosphere. The combination of these results demonstrates that a lot more energy can be pumped into the chromosphere by wave motions than researchers had previously thought. This wouldn't be possible without the relentlessly changing magnetic field at the surface. The research team includes Stuart Jefferies, University of Hawaii, Maui, Hawaii; Scott McIntosh, Southwest Research Institute, Boulder, Colo.; Bart De Pontieu, Lockheed Martin, Palo Alto, Calif.; and Viggo Hansteen, University of Oslo, Norway and Lockheed Martin. + Ring of Fire media page + SOHO site + TRACE site Rani Gran/Nancy Neal Jones Goddard Space Flight Center, Greenbelt, Md. National Science Foundation, Arlington, Va.
This is part one of a series on ggplot2.

I’m starting a new series on using ggplot2 to create high-quality visuals. But in order to understand why ggplot2 behaves the way it does, we need to understand a little bit about the grammar of graphics. Leland Wilkinson published The Grammar of Graphics in 1999, with a revised edition in 2005. By 2006, Hadley Wickham had created ggplot (as mentioned in his presentation A grammar of graphics: past, present, and future) as an implementation of the grammar of graphics in R. In 2010, Wickham published A Layered Grammar of Graphics, which explains the reasoning behind ggplot2.

In this first post of the series, I want to give you an idea of why we should think about the grammar of graphics. From there, we’ll go into detail with ggplot2, starting simple and building up to more complex plots. By the end of the series, I want to build high-quality, publication-worthy visuals. With that flow in mind, let’s get started!

What Is The Grammar of Graphics?

First, my confession. I haven’t read Wilkinson’s book and probably never will. That’s not at all a knock on the book itself, but rather an indication that it is not for everybody, not even for everyone interested in data visualization. Instead, we will start with Wickham’s paper on ggplot2. This gives us the basic motivation behind the grammar of graphics by covering what a grammar does for us: “A grammar provides a strong foundation for understanding a diverse range of graphics. A grammar may also help guide us on what a well-formed or correct graphic looks like, but there will still be many grammatically correct but nonsensical graphics. This is easy to see by analogy to the English language: good grammar is just the first step in creating a good sentence” (3).

With a language, we have different language components like nouns (which can be subjects, direct objects, or indirect objects), verbs, adjectives, adverbs, etc. We put together combinations of those individual components to form complete sentences and transmit ideas. Our particular word choice and language component usage will affect the likelihood of success in idea transmission, but to an extent, we can work iteratively on a sentence, switching words or adding phrases to get the point across the way we desire. With graphics, we can do the same thing. Instead of thinking of “a graph” as something which exists in and of itself, we should think of different objects that we combine into its final product: a graph.

Implementing The Grammar

In the ggplot2 grammar, we have different layers of objects. In some particular order, we have:
- The data itself, and a mapping explaining what portions of the data we want to represent parts of our graph. This mapping is made up of things we see on the graph: the aesthetics. Aesthetic elements include the x axis, y axis, color, fill color, and so on.
- The statistical transformation we want to use. For example, there are stats for boxplots, jitter, qq plots, and summarization (page 11). Stats help you transform an input data frame into an output data frame that your plot can use, like generating a density function from an input data frame using stat_density().
- The geometric object (aka, geom) we want to draw. This could be a histogram, a bar or column chart, a line chart, a radar chart, or whatever. These relate closely to statistics.
- Scales and coordinates, which give us the axes and legends.
- Accompaniments to the visual. These include data labels and annotations.
This is how we can mark specific points on the graph or give the graph a nice title.
- The ability to break our visual into facets, that is, splitting into multiple graphs. If we have multiple graphs, we can see how different pieces of the data interact.

The key insight in ggplot2 is that these different layers are independent of one another: you can change the geometric object from a line chart to a bar chart without needing to change the title, for example. This lets you program graphs iteratively, starting with very simple graphs and adding on more polish as you go. As a follow-on to this, you can choose more than one geometric object: if you want to draw a column chart with a line chart in front of it, that’s two geometric objects and not one special line+column chart. This lets you construct graphics at the level of complexity that you need.

Even though the Wickham paper is nearing 8 years old by this point and the ggplot2 library has expanded considerably in the meantime, it remains a good introduction to the grammar of graphics and gives the motivation behind ggplot2. Over the rest of this series, we will dig into ggplot2 in some detail, generating some low-quality images at first but building up to better and better things.
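To make the layered independence concrete, here is a minimal sketch in R. It uses the built-in mtcars data set so it runs as-is; the variable choices are illustrative and not from the original post:

library(ggplot2)

# Layer 1: data plus aesthetic mappings (x, y, and color).
ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  # Layer 2: a geometric object; swap geom_point() for another geom
  # without touching any other layer.
  geom_point() +
  # A second geom stacked on the same data and aesthetics.
  geom_smooth(method = "lm", se = FALSE) +
  # Facets: one panel per transmission type (0 = automatic, 1 = manual).
  facet_wrap(~ am) +
  # Accompaniments: title plus axis and legend labels.
  labs(title = "Fuel economy vs. weight",
       x = "Weight (1000 lbs)", y = "Miles per gallon",
       color = "Cylinders")

Because each + adds an independent layer, you could delete the facet_wrap() line or change geom_point() to geom_jitter() and the rest of the plot specification would not need to change.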
Here Comes Super Bus
A four-level story-based English course for young learners

This four-level course for young learners aims to build children's confidence and develop their linguistic, social and emotional skills. By making English fun and enjoyable, Here Comes Super Bus lays down the foundations for future language learning and encourages children to use English as a means of real communication. Each unit focuses on a central story which is linked to a topic or theme of special interest to children. The activities and tasks are built around the language and content of the story, to suit the specific needs, interests and psychological characteristics of young learners. Children's interests are centred on themselves and their immediate world, and Here Comes Super Bus offers many opportunities for children to talk and exchange information about themselves, their family, their home and their pets. The syllabus for the course provides a balance between topics, activities and tasks, and develops vocabulary and language functions which are relevant to children's communication needs.
Picking your teeth with DNA!

Problem(s): Where is DNA found in living organisms? What is DNA made of? What are nucleotides? What are the four nitrogenous bases found in DNA? Which bases pair together in a DNA molecule?

Materials: 30 toothpicks, 4 different colored markers, construction paper, glue.

1. You will be given 30 toothpicks. Using the four markers, color each toothpick completely in any one of the colors you want. Make sure to color each toothpick one solid color, and try to use all the colors.
2. Take the piece of construction paper and cut it in half.
3. Draw the phosphate and sugar backbone on each piece of paper, as your teacher has written on the board. There will need to be 15 phosphate and 15 sugar units on each paper. Take your time!
4. Now, on one of the pieces of paper, attach a toothpick with glue to each sugar group.
5. Did you guess what the color on each toothpick was for yet? Each color represents a different nitrogenous base. Look at the board to find out which bases you have made.
6. To complete your DNA molecule, color the remaining toothpicks the colors of the complementary bases of your current strand and attach them to the sugar molecules of your other strand of DNA; then attach both strands together by the bases (toothpicks).

1. Where is DNA found in a cell?
2. What are chromosomes?
3. What is the function of DNA?
In calculus, an advanced branch of mathematics, the difference quotient is the formula used for finding the derivative. The derivative is the rate at which a function changes, and it is based on the difference quotient. The difference quotient was formulated by Isaac Newton.

The Difference Quotient Defined

A Simple Definition

Simply put, the difference quotient is the formula for finding the slope of a line that touches a curve (this line is called the tangent line). If we are trying to find the slope of a perfectly straight line, then we use the slope formula, which is simply the change in "y" divided by the change in "x". This is very accurate, but only for straight lines. The difference quotient, however, allows you to find the slope of any curve or line at any single point. The difference quotient, like the slope formula, is merely the change in "y" divided by the change in "x"; the only difference is that in the difference quotient, the change in the y-axis is described by f(x).

A Mathematical Definition

Before stating the difference quotient, here is the formal definition of the slope formula (where m is the slope):

THE SLOPE FORMULA
m = Δy/Δx = (y2 − y1)/(x2 − x1)

where Δy = y2 − y1 and Δx = x2 − x1. As mentioned above, this formula is accurate only for perfectly straight lines; for instance, the slope of a curve cannot be found using it. This is where the difference quotient comes in.

THE DIFFERENCE QUOTIENT
mslope = [f(x + Δx) − f(x)]/Δx

The difference quotient can be used to find the slope of a curve, as well as the slope of a straight line. After we find the difference quotient of a function and let Δx shrink to zero, we have a new function, called the derivative. To find the slope of the curve or line, we input the value of "x" and we get the slope. The process of finding the derivative via the difference quotient is called differentiation.

Applications of the Difference Quotient (and the Derivative)

The derivative has many real-life applications. One application of the derivative is listed below.

Physics

In physics, the instantaneous velocity of an object (in other words, its velocity at a particular instant) is defined as the derivative of the position function with respect to time. For example, if an object's position on a line is given by s(t) = −16t² + 16t + 32, then the object's velocity is v(t) = s′(t) = −32t + 16. The derivative is also used to find instantaneous acceleration, which is the derivative of the instantaneous velocity function, not of the position function. In the example above, the acceleration function is a(t) = v′(t) = −32 at every point.
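As a short worked example of the definition above (added here for illustration; any simple function would do), take f(x) = x² and run it through the difference quotient:

    [f(x + Δx) − f(x)]/Δx = [(x + Δx)² − x²]/Δx
                          = [x² + 2xΔx + (Δx)² − x²]/Δx
                          = (2xΔx + (Δx)²)/Δx
                          = 2x + Δx

As Δx shrinks toward 0, this expression approaches 2x, so the derivative of x² is 2x. Plugging in x = 3, for example, says the tangent line to y = x² at the point (3, 9) has slope 6.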
Identify the goals of those pressing for global change in 1919, and of those who opposed them.

The Fourteen Points speech was delivered by Wilson to Congress in January 1918 as his vision for a postwar world. The speech was notable in the way he translated progressive values from domestic to foreign policy.

Competing visions at Versailles: the European Allies, led by British Prime Minister David Lloyd George and French Premier Georges Clemenceau, were furious with Germany after WWI and wanted a treaty that punished Germany and made it pay for the damage of the war. President Wilson, instead, thought that the treaty should try to lay the groundwork to “end all wars.” He did not think that the Allies should punish Germany too harshly or make a land grab for German colonies.

Work in small groups to analyze the “Fourteen Points”: First, aim for a concrete understanding of each of the points (though you do not need to examine too closely the geographies described in articles VI–XIII). Next, create 3–6 “tags” to categorize the fourteen points. Use these tags to construct a sentence. These sentences will be shared around the table. For example, if we were discussing mythical animals instead of treaty provisions, your tags might be: "fighting ligers," "unicorn farms," and "Chinese grass-mud horse protection." Then, your sentence might be: "President Wilson promoted liger combat, unicorn farming, and the protection of Chinese grass-mud horses."

Finally, using Foner and online reference sources, explain whether or not you believe Wilson’s Fourteen Points were substantively reflected in the Treaty of Versailles. Draw on concrete evidence to support your argument.

Describe how the ending and immediate aftermath of World War I sowed the seeds of future twentieth-century conflicts. To what extent would you fault President Wilson for this outcome? How might the Treaty of Versailles have been rewritten to reduce the chance for future conflict?
David Sacks has embarked on a fun, lively, and learned excursion into the alphabet–and into cultural history–in Letter Perfect. Clearly explaining the letters as symbols of precise sounds of speech, the book begins with the earliest known alphabetic inscriptions (circa 1800 b.c.), recently discovered by archaeologists in Egypt, and traces the history of our alphabet through the ancient Phoenicians, Greeks, and Romans and up through medieval Europe to the present day. But the heart of the book is the twenty-six fact-filled “biographies” of letters A through Z, each one identifying the letter’s particular significance for modern readers, tracing its development from ancient forms, and discussing its noteworthy role in literature and other media. We learn, for example, why letter X may have a sinister and sexual aura, how B came to signify second best, why the word mother in many languages starts with M. Combining facts both odd and essential, Letter Perfect is cultural history at its most accessible and enjoyable.

The Marvelous History of Our Alphabet from A to Z
Converts a text string that represents a number to a number. Text is the text enclosed in quotation marks or a reference to a cell containing the text you want to convert. Text can be in any of the constant number, date, or time formats recognized by Microsoft Excel. If text is not in one of these formats, VALUE returns the #VALUE! error value. You do not generally need to use the VALUE function in a formula because Excel automatically converts text to numbers as necessary. This function is provided for compatibility with other spreadsheet programs.

The example may be easier to understand if you copy it to a blank worksheet. How to copy an example:
1. Create a blank workbook or worksheet.
2. Select the example in the Help topic. Note: Do not select the row or column headers.
3. Press CTRL+C.
4. In the worksheet, select cell A1, and press CTRL+V.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.

Note: To view the number as a time, select the cell, and then on the Home tab, in the Number group, click the arrow next to Number Format, and click Time.
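Although VALUE is an Excel function, its core behavior is easy to imitate in other languages, which can make the conversion rules clearer. The following Python sketch is only an analogy written for illustration, not Microsoft's implementation, and the formats it handles are a small assumed subset of what Excel recognizes:

```python
# A rough analogue of Excel's VALUE: convert text in a recognized number
# or time format to a number; otherwise fail (Excel would return #VALUE!).
from datetime import datetime

def value(text):
    cleaned = text.strip().replace("$", "").replace(",", "")
    try:
        return float(cleaned)            # plain numeric text such as "1000"
    except ValueError:
        pass
    for fmt in ("%H:%M:%S", "%H:%M"):    # time text becomes a fraction of a
        try:                             # day, mirroring Excel's serial times
            t = datetime.strptime(cleaned, fmt)
            return (t.hour * 3600 + t.minute * 60 + t.second) / 86400
        except ValueError:
            pass
    raise ValueError("#VALUE!")          # not in a recognized format

print(value("$1,000"))  # 1000.0
print(value("16:48"))   # 0.7, i.e. 16:48 is 0.7 of a 24-hour day
```

Excel itself applies the same idea across far more formats (dates, currencies, locale-specific separators), which is why text in an unrecognized format is the main source of #VALUE! errors.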
Mimas (Greek Μίμᾱς, rarely Μίμανς) is a moon of Saturn which was discovered in 1789 by William Herschel. It is named after Mimas, a son of Gaia in Greek mythology, and is also designated Saturn I. Mimas is the smallest known astronomical body of the solar system which has a near-spherical shape due to its self-gravitation.

Mimas was discovered by the astronomer William Herschel on 17 September 1789. He recorded his discovery as follows: "The great light of my forty-foot telescope was so useful that on the 17th of September, 1789, I remarked the seventh satellite, then situated at its greatest western elongation."

Mimas is named after one of the Titans in Greek mythology. The names of all seven then-known satellites of Saturn, including Mimas, were suggested by William Herschel's son John in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope. He named them after Titans specifically because Saturn (the Roman equivalent of Kronos in Greek mythology) was the leader of the Titans. According to Liddell and Scott's Greek-English Lexicon, the adjectival form of Mimas would be Mimantean (the genitive case is Latin Mimantis, Greek Μῑμάντος). In practice, anglicisms such as Mimasian and Mimian are very occasionally seen, but more commonly writers simply use the phrase 'of Mimas'.

Mimas' low density (1.17 g/cm³) indicates that it is composed mostly of water ice with only a small amount of rock. Due to the tidal forces acting on it, the moon is not perfectly spherical; its longest axis is about 10% longer than the shortest. The somewhat ovoid shape of Mimas is especially noticeable in recent images from the Cassini probe. Mimas' most distinctive feature is a colossal impact crater 130 km across, named Herschel after the moon's discoverer. Herschel's diameter is almost a third of the moon's own diameter; its walls are approximately 5 km high, parts of its floor measure 10 km deep, and its central peak rises 6 km above the crater floor. If there were a crater of an equivalent scale on Earth, it would be over 4,000 km in diameter, wider than Canada. The impact that made this crater must have nearly shattered Mimas: fractures can be seen on the opposite side of Mimas that may have been created by shock waves from the impact travelling through the moon's body. The surface is saturated with smaller impact craters, but no others are anywhere near the size of Herschel. Although Mimas is heavily cratered, the cratering is not uniform. Most of the surface is covered with craters greater than 40 km in diameter, but in the south polar region craters greater than 20 km are generally lacking. This suggests that some process removed the larger craters from these areas, or that something prevented larger impacting bodies from hitting the south polar region. Scientists officially recognise two types of geological features on Mimas: craters and chasmata (chasms). (See also: List of geological features on Mimas)

Relationship with the rings of Saturn
Mimas is responsible for clearing the material from the Cassini Division, the gap between Saturn's two widest rings, the A ring and the B ring. Particles at the inner edge of the Cassini Division are in a 2:1 resonance with Mimas: they orbit twice for each orbit of Mimas. The repeated pulls by Mimas on the Cassini Division particles, always in the same direction in space, force them into new orbits outside the gap.
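The location of that 2:1 resonance can be checked with Kepler's third law, which says that the square of the orbital period is proportional to the cube of the orbital radius. Here is a minimal Python sketch of the calculation (our illustration; the semi-major axis of Mimas, roughly 185,500 km, is an assumed figure not given in the article):

```python
# Kepler's third law: T^2 is proportional to a^3. A ring particle with
# half of Mimas' orbital period orbits at a = a_mimas * (1/2)**(2/3).
a_mimas = 185_500  # km, approximate semi-major axis of Mimas (assumed)

a_res = a_mimas * (1 / 2) ** (2 / 3)
print(f"2:1 resonance radius: {a_res:,.0f} km")
# ~116,900 km, close to commonly quoted figures for the inner edge of
# the Cassini Division (roughly 117,500 km), the region Mimas sweeps clear.
```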
Other resonances with Mimas are also responsible for other features in Saturn's rings: the boundary between the C and B rings is at the 3:1 resonance, and Pandora, the outer F ring shepherd, is at the 3:2 resonance. More recently, a 7:6 corotation eccentricity resonance has been discovered with the G ring, whose inner edge is about 15,000 km inside the orbit of Mimas.

Mimas has been imaged several times from moderate distances by the Cassini orbiter, the closest pass being at 63,000 km on 1 August 2005. Cassini's extended mission will include several non-targeted close approaches to Mimas. Improvements on the current best will occur during passes on 24 October 2008 and 14 October 2009. The closest will be on 13 February 2010, at 9,500 km.

Mimas in fiction and film
- When seen from certain angles, Mimas closely resembles the Death Star in Star Wars Episode IV: A New Hope, which is also said to be several hundred kilometers in diameter. This is purely coincidental, as the film was made three years before the first close-up photographs of Mimas were taken.
- Mimas is featured in the book Red Dwarf: Infinity Welcomes Careful Drivers as the moon Dave Lister lives on prior to his acceptance into the mining ship Red Dwarf.
- Mimas is the site of a Federation way station in the Star Trek universe.
How did the New Deal provide an answer to the challenges posed by the era of Progressive reform?

At its most basic level, the Progressive Era asked the question of how power inequities and the lack of social and economic justice could be resolved. One of the challenges of the Progressive Era was to force some type of discussion and transformation of both society and government to pay attention to those who were existing on the bottom of the capitalist structure. To this end, the New Deal addressed those challenges as it became evident that more people were placed at the bottom of the economic structure, something that they no longer seemingly controlled but rather something by which they were controlled. The New Deal's emphasis on relief, recovery, and reform all addressed the challenges posed by the era of Progressive reform. The Progressives believed that government and society needed to pay greater attention and do something about those who were being marginalized by capitalism. The New Deal did this. In its idea of getting all Americans back to work, embracing the idea of public works, ensuring that government intervene and assist those in need, as well as transforming the fundamental role of government to something more interventionist and more driven to assist, the New Deal represented an answer to the questions that the Progressives posed. In many ways, the New Deal did a great deal not only to answer the challenges but, in some respects, to kill the Progressive reform elements. The Progressives were writing at a time when the economic affairs of the nation seemed incapable of change. Government was in a non-interventionist position, and many failed to understand how much damage capitalism, when undergoing severe contraction, could do to so many. The New Deal presented a vision of government, and of society, that was fundamentally different from what the Progressives had seen. From this, it not only addressed their challenges, but might have put the era to bed once and for all.
A new paper buttresses the theory that cataclysmic fires swept the globe 66 million years ago following a giant meteorite impact on the Yucatán Peninsula. Those blazes presumably spelled doom for the dinosaurs and much other life. Only a small number of creatures that could shelter themselves underground or underwater survived. In 2004, lead author Douglas S. Robertson, a geologist at the University of Colorado, Boulder, and colleagues advanced the idea, which was first proposed by H. Jay Melosh of the University of Arizona in 1990. Robertson found support for the global fire hypothesis through evidence of the complete destruction of global terrestrial communities following the impact, and the fact that among nonaquatic animals, only the small ones survived. In addition, all nonaquatic survivors, including birds, seem to have been burrowers. A number of papers, however, have since criticized the heat-fire extinction hypothesis, pointing, for example, to the relative absence of charcoal in the geological layers associated with the extinction event. New calculations of the sedimentation rate of charcoal, however, yield amounts consistent with a global fire hypothesis, the authors of the new paper note. They also suggest that the intense fires could well have burned up much of the charcoal. Another dispute involves the preservation of non-charred organic matter, which would appear to negate the possibility of global fires. But such material was collected at sites of swamps and ponds, where it would have sat safely underwater during the blazes. The new paper also takes into account work that refined how the infernos started. Research published in 2009 suggests that the heat released into the atmosphere by debris falling back to Earth after the impact was insufficient to ignite trees directly, but that tinder—dry leaves and grass—could still have readily ignited. Robertson considers that the evidence on the whole favors the heat-fire hypothesis. “Our model not only explains the extinction but gives details of what species survived and what didn’t—no other argument has that level of detail,” he said. “We think our argument is pretty solid right now, but there’s still some disagreement on the point.” (Journal of Geophysical Research–Planets)
The Cassini spacecraft has found evidence of what may be geysers of liquid water on a moon of Saturn, project scientists said on Thursday. "If we are right," said one of the Cassini researchers, the moon "might possibly have conditions suitable for living organisms." Liquid water is generally considered one of the likely preconditions for extraterrestrial life, along with sources of heat and organic materials. But why are we so sure water is crucial for the development of life? Every living thing on Earth needs water to survive. That doesn't mean life on other planets would necessarily be based on liquid water, but it gives us one of our best clues as to what to look for. Since water works so well for us, we may as well focus our attention on planets or moons that have it, too. What makes water so useful? First of all, it serves as a substrate for all the chemical reactions you need to make a living thing. To get something as complicated as biology, you've got to have a system that allows a wide variety of molecules to interact in a wide variety of ways. Water, which is a polar molecule—i.e., it has both positively and negatively-charged ends—acts as a "universal solvent." That means it can dissolve many chemicals—including the organic compounds that are the building blocks of life on Earth—and allow them to recombine or attach to one another in various arrangements. It also helps that water remains liquid at a wide range of temperatures. That's important because solids are too rigid to allow for the necessary chemical reactions and gases aren't stable enough to maintain them. If you started to mix up ingredients for living things in a liquid that's not as stable as water, climate changes on your planet might send the whole experiment down the toilet. Water has the added advantage of being self-insulating. That's because ice is lighter than water and floats on its surface. If a lake or ocean froze over, the sheet of ice on top could allow the water beneath the surface to stay liquid—which would in turn preserve the right conditions for life. Another liquid that doesn't share water's peculiar properties might freeze from the bottom up. Some scientists have proposed other liquid substrates that might foster life. You could imagine a life-form based on a different set of chemicals interacting in a substrate of liquid ammonia, which, like water, is polar. (Ammonia isn't a liquid at the same temperatures as water, but its properties could be similar on a planet with the right atmospheric pressure.) The discovery of liquid methane (and methane rain) on another of Saturn's moons has led some biologists to imagine a methane-based biology. Similar thought experiments consider the possibility of life based on elements other than carbon, like silicon or boron. Got a question about today's news? Ask the Explainer. Explainer thanks David Grinspoon of the Denver Museum of Nature & Science and Michael Shara of the American Museum of Natural History.
Through much of the nineteenth century, Great Britain avoided the kind of social upheaval that intermittently plagued the Continent between 1815 and 1870. Supporters of Britain claimed that this success derived from a tradition of vibrant parliamentary democracy. While this claim holds some truth, the Great Reform Bill of 1832, the landmark legislation that began extending the franchise to more Englishmen, still left the vote to only twenty percent of the male population. A second reform bill passed in 1867 further expanded voting rights, but power remained in the hands of a minority--property-owning elites with a common background, a common education, and an essentially common outlook on domestic and foreign policy. The pace of reform in England outdistanced that of the rest of Europe, but for all that remained slow. Though the Liberals and Conservatives did advance different philosophies on the economy and the most basic role of government, the common brotherhood of all representatives in Parliament assured a relatively stable policy-making history. In the 1880s, problems of unemployment, urban housing, public health, wages, working conditions, and healthcare upset this traditional balance and led the way for the advent of a new and powerful political movement in Great Britain: the Labour Party. By 1900, wages were stagnating while prices continued to rise throughout the country. The urban centers of London and Manchester faced crumbling housing, and tenements arose throughout every major industrial center. Workers responded to their problems by putting their faith not in the Liberal Party, the group that had traditionally received the worker vote since industrialization, but in the oft-militant trade unions, organizations that advanced worker demands in Parliament, cared for disabled workers, and assisted in pension, retirement, and contract matters. In 1892 James Keir Hardie, an independent workingman from Scotland, became the first such man to sit in the House of Commons. He represented the Labour Party and built upon trade union support to craft a workers' party dedicated to advancing the cause of working Englishmen. For the first time in its history, the British Parliament began to represent class distinctions in English society. By 1906, twenty-nine seats in Parliament went to Labour. Pressured by the new Labour movement, Liberals and Conservatives were forced to act for fear of losing any substantial labor vote. The so-called New Liberals, led by Chancellor of the Exchequer David Lloyd George, supported legislation to strengthen the right of unions to picket peacefully. The Liberal government passed the National Insurance Act of 1911, providing payments to workers for sickness and introducing unemployment benefits. In addition, heeding Labour's call for a more democratic House, Lloyd George pushed the Parliament Bill of 1911 that reduced the House of Lords (the upper house of Parliament that had always been dominated by conservatives averse to worker legislation) to a position lower than the House of Commons. After the Parliament Bill, the Commons could raise taxes without the Lords' approval and pay for any needed worker legislation. Finally, in 1913, the powerful Labour movement, about to eclipse the Liberals as the Conservatives' opposition, pushed through the Trade Unions Act. This law granted unions legal rights to settle their grievances with management directly, without the interference of a generally conservative Parliament.
The extension of the voting franchise that began in England in 1832 with the Great Reform Bill initiated, albeit slowly, a process of liberalization unseen in the history of the British Parliament. Previously, power rested in the hands of the few aristocrats with enough property and wealth to pass a relatively high property requirement for voting and holding office. Yet while the lowering of the wealth prerequisite provided an easy target for modern liberals when arguing for the democratization of Parliament, this democratization at first did not extend to the working class. Most representatives in the Commons came through Eton to either Cambridge or Oxford where, under the tutelage of the same professors, these future leaders developed a similar outlook on the world: the superiority of the British system, the rightness of imperialism, the power of industry, the benefits of trade, and the value of general isolation from the Continent. These views, though subject to slight differences of degree between Liberals and Conservatives, remained common through most of the House. Such views did not square with the new concerns of the workers, who had neither received an elite education nor, in some cases, an education at all. However, though it took more than half a century, the British system did gradually change to meet the problems associated with the industrial age. It is also important to notice that it did not require a Labour majority in Parliament--something that would not come until the interwar years--to initiate changes. The political system was malleable enough that pressure from a small minority party in Parliament pushed the traditionally uninterested Liberal and Conservative majority to seriously modify their political goals and actions. Politicians in England were farsighted, keen on capturing the awesome potential power of the worker movement before it got out of hand--namely, before it ignited a powerful party of its own.
Have you ever stood on the shore and gazed out at the majestic ships passing by, wondering about the names of the different parts of ships? If so, you’re not alone. Ships have been fascinating vessels transporting goods and people across the oceans for centuries. Understanding the various parts of a ship will make you appreciate their complexity and the remarkable engineering that makes them possible. So, let’s explore the various parts of ships, from the engine room to the bridge and everything in between, to discover the impressive world of ships and enrich your maritime knowledge.

Different Parts of a Ship
Some basic terminology first: the rooms in a ship are called cabins, the kitchen is called a galley, and the deck indicates the floor (two decks means two floors). The left side of a ship is referred to as the port side, whereas the right side is known as the starboard side. The ship’s back end is called the aft, and the front end is called the fore or bow. The hull is the main structure of a ship. The ship’s rear, also known as the aft end, houses a rudder, whose primary function is to steer the ship when needed. Forward of the rudder there will be one or more propellers. A ship with two propellers is called a twin-screw vessel. The aft end has the draft marks painted on it, and the draft marks indicate how heavily a ship is loaded (a very heavily loaded ship will have most of its draft marks immersed in water). A ship is like a small floating town, so let us briefly discuss the various parts of a ship:

The stem is the forwardmost part of a ship (or bow of the vessel) and is usually an extension of the ship’s keel. The ship’s keel may extend up to the gunwale (the top edge of a ship’s hull) to form a curved edge called the stem. A stem inclined at an angle to the water surface is called a raked stem, and when it is perpendicular to the water surface (straight), it is called a plumb stem.

The bulbous bow is a bulb-shaped part that protrudes at the bow (front) of the ship, just below the water’s surface. This bulb cuts through the water, reduces drag, and squeezes the water around the hull to increase the ship’s speed and make the vessel more fuel efficient. It may also slightly reduce pitching by increasing the buoyancy of the forward part of the ship. A large ship with a bulbous bow is usually 10 to 15% more fuel efficient than a similar-sized ship without one.

Bow thrusters are propellers located at the front of a ship, designed to provide lateral thrust and enhance maneuverability in confined areas. They draw water into tunnels and expel it to generate sideways force. This allows ships to navigate through ports and canals, and during docking procedures, with precision. Bow thrusters offer improved safety, efficient operations, and enhanced control.

The forecastle is among the foremost parts of the upper deck of a ship, with a length greater than 7% of the total deck length. In a military vessel, it was used by soldiers for taking defensive positions. On today’s ships, the forecastle is used for anchoring and similar tasks.

An anchor is a heavy metal part whose shape is designed to grip the sea bed firmly. The anchor is fastened to the loose end of a strong chain, and the other end of the chain is secured to the ship's structure. A ship typically moves from berth to berth, but due to port congestion, a ship may have to wait until it is allotted a berth.
A ship remains in shallow water till it gets a berth, and it uses the anchor to secure itself at a location. The ship’s crew drops the anchor into the shallow water, and the claw-like structure digs into the sea bed and keeps the ship from drifting away.

A hull is the watertight body of a ship, and in most vessels it is covered with the deck. However, in some ships it may be partially or entirely open. On top of the deck, you will find a deckhouse and superstructures like a funnel, cranes, etc. The hull may be divided into many decks or compartments depending on the type of ship, and it may have transverse or longitudinal structural members to strengthen it.

Cargo holds are enclosed areas in a ship for carrying cargo. The cargo hold is right under the ship’s deck, and its capacity varies with the size of the ship. Cargo ships usually have larger cargo holds and may feature derricks or cranes. The primary purpose of the cargo hold is to protect the cargo and keep it safe till it is delivered to its destination. The cargo hold may be divided into several levels or partitions to enable the secure carrying of coal, grain, ores, etc. Hatch covers make the cargo hold a watertight space and protect the food grains or other cargo in the cargo hold during the voyage. The hatch cover’s design depends on the ship type and size, but today most ships have hydraulically driven hatch covers since they are quick, effective, and enable faster cargo handling.

The mast of a ship is a tall spar (a thick and strong pole), usually built vertically on the center line of a ship. A ship can have two masts: the foremast and the main mast. The mast of a ship serves multiple purposes, such as carrying sails (in earlier days), a derrick (a lifting device), navigation lights, radio aerials, scanners, and many others.

You can define the boat deck as a floor covering the ship’s hull structure. This is the primary working space of the ship. Ships usually have more than one level inside the hull and a multilevel superstructure above the primary deck (similar to a multi-floor building, but in ships the floors are termed decks). A ship can have different decks at different sections or parts of the ship. Depending on the location of a ship’s deck, the six major types of deck are the main deck, poop deck, upper deck, lower deck, weather deck, and foredeck; a naval ship may have other decks like a helicopter deck, hangar deck, etc. The primary purpose of the deck is structural, that is, to hold the hull structure and provide the floor as a working space. The foredeck is the forward part of a ship, the deck space between the superstructure and the forecastle structure.

Main Deck or Weather Deck
This is the uppermost continuous deck running from bow to stern. It provides a working area for the crew during normal operating conditions. On larger ships, the main deck often includes various structures such as the bridge, accommodation areas, and sometimes even recreational spaces.

Upper Deck
The upper deck is situated above the main deck and provides additional working and storage space. It is typically used for machinery, cargo handling, and sometimes even recreational activities, such as open-air spaces for passengers on cruise ships.

Lower Decks
These decks are below the main deck and accommodate various functions, including storage, machinery, crew cabins, and engine rooms. Lower decks are essential for maintaining the ship’s stability and housing critical systems and equipment.
A poop deck is the deck constructed at the aft (rear) of the superstructure of a ship. The word ‘poop’ originates from the French word la poupe (meaning stern) or the Latin word puppis. Hence the poop deck is technically a stern deck; it has no connection with the slang word poop.

Promenade Deck
This deck is usually found on passenger ships and provides open space for leisure and relaxation. It is often situated on the uppermost level, offering panoramic views of the surrounding ocean. Promenade decks may feature amenities such as lounges, restaurants, and outdoor seating areas.

Funnel and Funnel Deck
A ship’s funnel is a chimney whose function is to safely discharge the engine exhaust and smoke from the boiler. There can be more than one funnel. Additional care is taken when discharging the engine exhaust from the funnel to control atmospheric pollution. The funnel is never straight; it inclines toward the aft to safeguard the deck and the navigation bridge from the chimney gases.

The stern is the aft-most, or extreme back, part of a ship, located opposite the bow (the foremost part of the ship). The term stern typically includes the entire back of the ship. You can see the propellers and rudders at the stern end.

The monkey island is the topmost accessible point of the ship, located above the bridge. In olden times, the monkey island was used by sailors for planetary and solar observations. The monkey island is an essential part of a ship, and it accommodates the VDR (voyage data recorder) capsule, the AIS (automatic identification system) antenna, radar scanners linked to the radar mast, communication equipment, a weather vane, etc.

The bridge of a ship is designed as the heart of the vessel and should provide a clear and unhindered view of the surrounding area. You can compare it with the cockpit of an aircraft. The ship’s bridge houses the major steering equipment and other essential equipment, such as communication systems, navigation charts, engine control systems, and many others. The bridge is the commanding position of the ship, from which the ship’s complete movement is controlled. When present on the bridge, the ship’s captain has complete command over the bridge, and in their absence, the senior officer on the bridge has command.

Bridge Deck
Located at the top of the superstructure, the bridge deck houses the ship’s bridge, including the navigation equipment, control room, and captain’s quarters. From the bridge deck, the captain and officers have a clear view of the surroundings and can safely navigate the ship.

A bridge wing is an extended area on the top of the ship’s superstructure. Its primary purpose is to provide an unhindered, clear view of the ship’s fore, aft, and sides. The bridge wing contains controls and communication equipment; the extent of the controls and navigation equipment on the bridge wing depends on the captain’s needs.

The accommodation of a ship is the living quarters for the crew. The accommodation has all the basic amenities for living, including a galley (kitchen), crew cabins, common rooms, a recreation room, a gymnasium, a saloon, a medical room, food courts, a dining hall, a laundry, etc. It also has a garbage disposal unit, refrigeration, an air-conditioning system, a freshwater processing system, etc.

The engine room is the place that houses the machinery required for the propulsion of the ship, such as diesel engines and diesel generators.
Typically, a ship’s engine room is located at the bottom, towards the back (or aft) of the vessel. This strategic positioning aims to optimize storage space. It also ensures closeness to the ship’s propeller; this contributes significantly to the vessel’s safety in case of a disaster, as the vessel can continue to function as long as the engines can generate power.

A keel is one of the main parts of a ship. It runs longitudinally along the bottom centerline of the ship (from bow to stern). It is the bottommost structural member of the ship, and the ship’s hull is built around it. Due to its location and function (holding and supporting the hull structure), the keel is often referred to as the ship’s backbone, as all the major components of the ship are connected to it. A duct keel is a welded box-type structure at the ship’s center and is provided in double-hull ships. The duct keel offers space for water pipelines, ballast pipelines, and piping systems.

Rudders are flat, hollow structures placed aft of the propeller. The rudder is used to steer and direct the ship. The parts of a rudder (depending on its type) are the rudder trunk, movable flap, hinge, main rudder blade, drain plug, rudder bearing, and links. There are four types of rudders: the balanced rudder, semi-balanced rudder, unbalanced rudder, and flap rudder. The rudder is controlled using a steering gear system.

A propeller is a mechanical device that has blades fitted on its shaft. A ship’s propeller is rotated by a diesel engine (through a gear system) or by an electric motor with a gear system; in the latter case, a diesel generator supplies the motor’s electric power. The rotating propeller generates the thrust required for the propulsion of the ship: it forces the sea water backward and helps the ship move forward, with the thrust resulting from the favorable angle of the blades. The propeller is made from corrosion-resistant metals like nickel-aluminum bronze alloys, manganese bronze, or even stainless steel. A ship can have up to four propellers.

Side thrusters are similar to a propeller (though possibly smaller) and are fitted on the side of the ship’s bow or stern. Side thrusters help maneuver the ship at low speed in crowded waters (near a port) or in a narrow canal. Side thrusters are also known as tunnel thrusters, and they add to the operating cost of a ship. They are usually powered hydraulically or electrically. A bow thruster (mentioned above) is considered a side thruster.

The freeboard of a ship is the distance between the water line and the upper deck level, measured at the lowest point of sheer where water may be able to enter the ship. The purpose of maintaining freeboard between various sections of a ship is to keep it stable and prevent it from sinking. Whether the freeboard of a ship is kept low or high depends on the type and purpose of the ship. The calculation of the freeboard for a vessel may have to be approved by the relevant regulatory authority before the ship is commissioned.

Emergency Generator Room
The emergency generator room is on the main deck. It houses an emergency diesel generator with a switchboard and other accessories. This generator is operated when the main supply goes off for any reason, and it helps restore the main generators by providing the necessary power.

Ballast Tanks and Bunker Tanks
Ballast tanks are compartments for carrying seawater and help stabilize the vessel.
The amount of water in a ballast tank can be increased or reduced as necessary. Ballast tanks need extra coating protection since seawater is highly corrosive and can damage the surfaces of the tank. Bunker tanks of a ship are compartments used for the storage of fuel.

Frequently Asked Questions

What are the basic terminologies used in a ship?
Some of the basic terminologies used in a ship are:
- The rooms in a ship are called cabins.
- The kitchen is called a galley.
- The deck indicates the floor (two decks mean two floors).
- The left side of a ship is referred to as the port side, while the right side is known as the starboard side.
- The ship’s back end is called the aft, and the front end is called the fore or bow.
- The hull is the main structure of a ship.

What are the terms used for the front and back parts of a ship?
The front part of a ship is called the fore or bow, while the back part of the ship is known as the aft or stern.

What is the difference between the port side and starboard side?
The left side of a ship is referred to as the port side, whereas the right side is known as the starboard side.

What do the draft marks on a ship indicate?
The draft marks painted on the aft end indicate how heavily a ship is loaded. A very heavily loaded vessel will have most of its draft marks immersed in water.

What are rudders and propellers, and what roles do they play in a ship’s operation?
A rudder is a steering device located at the ship’s rear (aft end), used to control the ship’s direction. Propellers, located forward of the rudders, are the rotating parts that create forward or backward thrust to move the vessel through the water. A ship with two propellers is called a twin-screw vessel.

A ship is a complex and intricately designed vessel composed of interconnected parts. These parts contribute to the functioning, navigation, and overall efficiency of a ship, and they ensure the safety and well-being of its crew and cargo. From the hull to the propulsion system, from the bridge to the cargo holds, each component has a specific purpose that ultimately allows the ship to navigate treacherous waters and transport goods across vast distances. Understanding the different parts of ships not only gives us insight into the fascinating world of maritime engineering but also highlights the essential role that ships play in global trade and commerce.
As a parent, you play an important role in your child’s learning journey, including when it comes to technology. One concept that may seem complicated but is essential to understand is RAM, short for Random Access Memory. RAM is an integral part of computers and other electronic devices that your child may use, such as phones and tablets. Explaining RAM to a child may seem daunting, but with the right approach, it can be an enjoyable and educational experience for both you and your child. In this article, we will provide you with tips and resources on how to explain RAM to a child in a way that is easy to understand and fun to learn. Key takeaways:
- RAM, short for Random Access Memory, is an essential component of computers and electronic devices.
- Explaining RAM to a child can be an enjoyable and educational experience for both parent and child.
- Using analogies and visual aids can help children understand the concept of RAM.
- Introducing fun activities and experiments can make learning about RAM engaging for children.

What is RAM?
Before we dive into the inner workings of RAM, let’s start with the basics: what is RAM? RAM stands for Random Access Memory. Think of it like a human’s short-term memory. Just like how you remember information for a short period of time before forgetting it, RAM stores data temporarily while your computer is running. RAM is made up of small chips that are located on the motherboard of your computer. The amount of RAM your computer has determines how many programs and applications it can run at once, and how quickly it can switch between them. Here’s an analogy to help you understand better: imagine your computer is a kitchen, and the RAM is the counter space. The more counter space you have, the more ingredients you can have out at once and the faster you can prep your meals. Similarly, the more RAM you have, the more programs and applications you can have open and the faster your computer can run them.

How Does RAM Work?
Now that you understand what RAM is, let’s dive deeper into how it works. Imagine RAM as short-term memory for your computer. When you’re working on a task, your computer temporarily stores the necessary files and data in RAM to quickly access them when needed. Once you’re done with the task, the data is deleted from RAM to make space for new data. This process happens incredibly fast and is essential for keeping your computer running smoothly. To put it simply, RAM is made up of tiny components called memory cells, and each cell can store a bit of information. The more memory cells your RAM has, the more data it can store at once. RAM’s performance is measured in terms of its capacity and how quickly it can transfer data, known as bandwidth. The higher the capacity and bandwidth, the faster your computer can access data. When you start up your computer, the operating system is loaded into RAM, along with other essential applications that run in the background. This ensures that your computer is responsive and ready to use. As you open additional applications and files, they are also loaded into RAM, and the computer switches between them rapidly, giving the impression of seamless multitasking. One essential thing to keep in mind is that RAM is volatile memory, which means that it loses its data when the power is turned off.
This is why it’s crucial to save your work frequently and back up important files, so you don’t lose any data if your computer crashes or loses power unexpectedly. Understanding how RAM works is the key to optimizing your computer’s performance and ensuring that it runs smoothly. In the next section, we’ll discuss why RAM is so crucial for your computer and how it impacts your everyday tasks.

Why is RAM Important?
RAM may seem like a small component of a computer, but it plays a critical role in making your device run smoothly and efficiently. Think of RAM as your computer’s short-term memory. When you open an application or a file, it gets loaded into your RAM, which allows your device to access it quickly and easily. The more RAM your device has, the more programs it can run simultaneously without slowing down. If you’re into gaming, then RAM is even more important. Games require a lot of resources, and a lack of RAM can result in slow load times and choppy gameplay. The same goes for other resource-intensive applications, such as video editing software or web browsers with multiple tabs open. A good amount of RAM ensures that your device can handle these demanding tasks with ease, providing a seamless experience. It’s also important to note that different types of RAM offer varying levels of performance. For example, DDR4 RAM is faster and more efficient than DDR3 RAM, so it’s worth considering when purchasing new RAM for your device.

Different Types of RAM
RAM has evolved over time, and there are different types available in the market today. Let’s take a look at them:

|Type of RAM |Typical speed |Typical capacity |Typical use
|DDR3 |1333 – 2133 MHz |4GB – 16GB |Commonly used in older computers and laptops
|DDR4 |2133 – 4266 MHz |4GB – 64GB |Used in newer computers and laptops for faster performance

As you can see, DDR4 is faster and can hold more data than DDR3. It is always recommended to use the latest technology for better performance, but DDR3 is still commonly used in older devices that don’t support DDR4. Understanding the different types of RAM can help you make informed decisions when upgrading or purchasing a computer. Now, you are one step closer to becoming a RAM expert!

How Much RAM Do You Need?
RAM is like a short-term memory for your computer. It helps your device remember and access data quickly. The amount of RAM you need depends on what you want to do with your computer. If you want to browse the internet or use basic applications, then 4GB of RAM should be enough. However, if you want to play games or use demanding software like video editing programs, then you’ll need more RAM. The general rule of thumb is that 8GB to 16GB of RAM is adequate for most tasks. Keep in mind that having more RAM doesn’t necessarily mean that your computer will run faster. It means that you’ll be able to run more programs at the same time without lagging or crashing. So, before purchasing a new computer or upgrading your existing one, consider what you will be using it for and choose the amount of RAM that fits your needs.

Fun Ways to Explore RAM
Learning about RAM doesn’t have to be boring! Here are some fun activities and experiments you can do with your child to help them understand the concept of RAM:

Build a Simple Circuit
A great way to introduce your child to the concept of RAM is by building a simple circuit together. Start by explaining how RAM stores data, and then show them how to build a simple circuit using a breadboard, LEDs, and a microcontroller.
This will help them understand how RAM works and how it’s used in electronic devices.

Play Memory Games
Playing memory games is another fun way to help your child understand how RAM works. Start by playing a simple matching game with your child, and then explain how RAM stores data in a similar way. As they play the game, they will start to understand how computers use RAM to process data quickly and efficiently.

Create Virtual Simulations
If your child is interested in computer games or simulations, you can create virtual simulations together to help them understand how RAM works. Use simple programs like Scratch or Python to create simulations that demonstrate the role of RAM in computing. Your child will have fun creating and exploring these simulations while learning about technology (a simple example sketch of such a simulation appears at the end of this article).

RAM in Everyday Life
Have you ever wondered how your cool and fast video game runs so smoothly? Well, that’s all thanks to RAM! RAM is not just limited to gaming, but it’s also used in smartphones, tablets, and laptops, making it essential for everyday use. For example, when you open your favorite app on your phone, the RAM helps to load and run the app quickly, with no lag. Without enough RAM, your apps and device would run slower, and you might experience frozen screens or crashes. The same goes for browsing the internet. RAM helps to load and display web pages faster, making your internet browsing experience smoother and more enjoyable. Without RAM, many of the devices and programs we use every day would not function as effectively as they do now, so it’s essential to understand its importance!

The Future of RAM
RAM technology has come a long way since its inception. As technology advances, so does the need for faster and more efficient RAM. One possible future development is the use of quantum RAM or QRAM, which could revolutionize computer processing and memory capabilities. QRAM relies on the principles of quantum mechanics to store and process information, potentially offering speeds that are billions of times faster than current RAM technology. Another promising development is the use of non-volatile RAM (NVRAM), which would allow computers to retain information even when powered off. This type of RAM would eliminate the need for traditional hard drives and could significantly improve data storage and retrieval speeds. As technology continues to evolve, so will RAM. It is exciting to think about the possibilities that new RAM advancements will bring to the world of technology and beyond.

Q&A: RAM FAQs for Kids
Here are some common questions kids may have about RAM:
What does RAM stand for? RAM stands for Random Access Memory.
What does RAM do? RAM is used to store data that the computer is currently using. It allows the computer to access data quickly, which makes programs run faster and smoother.
How is RAM different from a hard drive? RAM is used for short-term storage, while a hard drive is used for long-term storage. When you turn off your computer, everything in your RAM is erased, but data stored on a hard drive remains.
How much RAM does my computer need? The amount of RAM your computer needs depends on what you use it for. If you use your computer for playing games or doing video editing, you may need more RAM than someone who just uses their computer for browsing the internet.
Can I add more RAM to my computer? In most cases, yes.
If you have an empty RAM slot in your computer, you can purchase and install more RAM to increase your computer’s performance.
What happens if I don’t have enough RAM? If you don’t have enough RAM for the programs you’re running, your computer may slow down or crash.
Understanding RAM is an important part of using technology. By learning about RAM, you’ll be able to take better care of your computer and use it more efficiently.

RAM Fun Facts
Are you ready to have some fun while learning about RAM? Check out these interesting facts:
- RAM stands for Random Access Memory.
- The first random-access memory, the Williams tube, was developed in 1947 at the University of Manchester.
- The average smartphone has about 4GB of RAM, while some high-end models can have up to 16GB.
- The world’s fastest supercomputers use petabytes of RAM.
- RAM is faster than a hard drive because it has no moving parts.
Now that you know some fun facts about RAM, impress your friends and family with your newfound knowledge!

Congratulations, you’ve made it to the end of this easy and fun guide on how to explain RAM to a child! We hope this article has helped you understand the basics of RAM and how it works. Remember, as a parent, you play a crucial role in teaching your child about technology. By making learning about RAM fun and engaging, you can help your child develop an interest in technology that could lead to a rewarding career in the future. To recap, we introduced you to RAM and explained what it is, how it works, why it’s important, the different types of RAM, how much RAM you need, and fun ways to explore RAM. We also provided examples of how RAM is used in everyday life, the future of RAM, and RAM fun facts. If you have any questions about RAM or teaching your child about technology, feel free to revisit the Q&A section or conduct further research using the resources we provided. We hope you found this guide helpful, and we encourage you to continue learning and exploring the world of technology with your child!

Can I Use the Same Approach to Explain RAM as I Did for Explaining a Gigabyte?
Explaining RAM to kids calls for a different approach than explaining a gigabyte. RAM, or Random Access Memory, can be thought of as a temporary workspace for a computer. It helps the computer access data quickly, similar to how a desk can hold materials you need for a specific task. While a gigabyte is a unit that measures data storage capacity, RAM allows the computer to work faster by temporarily holding information it needs to process.

FAQ: RAM FAQs for Kids
Q: What is RAM?
A: RAM stands for Random Access Memory. It is like the short-term memory of a computer. It helps the computer run faster and smoother by temporarily storing information that the computer needs to use quickly.
Q: How does RAM work?
A: RAM works by using tiny electronic components to store and retrieve data quickly. Think of it like a desk where you keep your toys for playing with. When you need a toy, you can quickly grab it from the desk, and when you’re done playing, you put it back.
Q: Why is RAM important?
A: RAM is important because it helps make your computer faster and more responsive. It allows you to open multiple programs or games at the same time without slowing down. The more RAM your computer has, the more things it can remember and work on at once.
Q: What are the different types of RAM?
A: There are different types of RAM, such as DDR3 and DDR4. These types have evolved over time, getting faster and more efficient.
They are like different versions of a game, with each version being an improvement over the previous one.
Q: How much RAM do you need?
A: The amount of RAM you need depends on what you use your computer for. If you like playing games or editing videos, you may need more RAM. If you mostly use your computer for browsing the internet and doing homework, you may not need as much. It’s like having different sizes of backpacks – you choose the one that fits all your stuff.
Q: What are some fun ways to explore RAM?
A: You can explore RAM by building simple circuits or playing memory games. You can also create virtual simulations to understand how RAM works. These activities can make learning about RAM more enjoyable and interactive.
Q: How is RAM used in everyday life?
A: RAM is used in devices like smartphones, tablets, and gaming consoles. It helps these devices run smoothly and quickly. When you play games on your tablet or use apps on your smartphone, RAM is working behind the scenes to make sure everything runs smoothly.
Q: What is the future of RAM?
A: RAM technology is constantly improving. In the future, we may have even faster and more efficient types of RAM. It’s exciting to think about how technology will continue to evolve.
Q: Can you share some fun facts about RAM?
A: Sure! Did you know that the first computer to run programs from RAM, the Manchester Baby (the prototype of the Manchester Mark 1), had a whopping 128 bytes of RAM? Also, the average computer has several gigabytes (GB) of RAM, which is like having thousands of books to hold all your information.
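As a concrete follow-up to the "Create Virtual Simulations" activity suggested earlier, here is a minimal Python sketch you could write with a child (our own illustration, not from any textbook). It models RAM as a small desk that holds only a few programs at a time, with the hard drive as the big, slow bookshelf:

```python
# A toy model for kids: RAM is a small desk that holds only a few
# programs at once; everything else stays on the big, slow hard drive.
RAM_SLOTS = 3
ram = []  # fast and small; forgets everything when the power goes off

def open_program(name):
    if name in ram:
        print(f"{name} is already in RAM - it opens instantly!")
    else:
        if len(ram) == RAM_SLOTS:        # the desk is full, so put the
            evicted = ram.pop(0)         # oldest program back on the shelf
            print(f"RAM is full - closing {evicted} to make room")
        ram.append(name)                 # slow load from the hard drive
        print(f"Loading {name} from the hard drive...")
    print(f"RAM now holds: {ram}")

for program in ["game", "browser", "music", "game", "homework"]:
    open_program(program)
```

Running it shows programs being loaded, reused instantly, and finally evicted when the "desk" fills up, which mirrors how a computer juggles applications when RAM runs low.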
Based on the interactional theories of Piaget*, Kohlberg, and Vygotsky*:
- Learning results from the interaction between the environment and the child’s own emerging thinking.
- Interaction with the environment is determined not only by exposure to the appropriate materials and experiences but also by the timely intervention from significant others (including peers) in the child’s life.
- The child is intrinsically driven to interact with materials placed in the environment; but it is the trained teacher who selects not only the developmentally appropriate materials, but also the time and manner in which those experiences are constructed.
Our goals are based on the following theoretical assumptions:
- Play is cherished and play spaces are rich with learning opportunities
- Each child’s rhythm is caught and given a warm response
- Children are encouraged to explore and create. In line with the multiple intelligence theories of Gardner, activities that encourage the use of multiple modalities (kinesthetic, interpersonal, intrapersonal, verbal-linguistic, logical-mathematical, and visual-spatial) are provided to support children in the ways they learn best.
- Children enjoy successes that lead to greater self-confidence and independence
- Staff members value and are responsive to each child’s special abilities, learning style, and developmental pace.
Our goals are based on Catholic-Christian tradition:
- Children will develop a sense of wonder at the world around them.
- Children will become sensitive to the spiritual.
- Children will develop an awareness of the presence of God in them, in others, and in all things.
* You can read about these psychologists in the Parent Resources section of the Preschool page
Adultism is the systemic mistreatment of young people based on their age, affecting their mental health, self-esteem, and overall well-being. It has historical roots, with societies traditionally giving more power to adults, often dismissing the voices of the younger generation. Adultism manifests in subtle forms such as language, limited decision-making power, and biased media representations. To break the cycle of adultism, educational reforms, youth activism, and creating safe spaces are essential. Adultism can manifest differently across cultures, so understanding these cultural nuances is crucial. To challenge adultist norms, parents should encourage open communication and shared decision-making, while policy advocacy and media literacy programs can help implement systemic changes. A more just and inclusive society can only be created by recognizing and resolving adultism, promoting open discourse, and addressing negative assumptions about people of all ages. At its core, adultism is the systemic mistreatment of young people based on their age. It manifests in various forms, from subtle biases to overt discrimination, and can permeate institutions, relationships, and societal norms. Historical Roots and Evolution To understand adultism, we must trace its historical roots. Throughout history, societies have traditionally accorded more power and agency to adults, often dismissing the voices and perspectives of the younger generation. While progress has been made, vestiges of these age-based hierarchies persist today. The Subtle Forms of Adultism Language Matters: Examining how language shapes perceptions is crucial. Derogatory terms like “generation gap” can perpetuate stereotypes and undermine the credibility of young people. Limited Decision-Making Power: Often, young individuals find themselves excluded from decisions that directly affect them, whether in the family, education system, or broader societal structures. Media Representations: Analyzing how the media portrays young people provides valuable insights into prevailing adultist attitudes. Biased portrayals can reinforce stereotypes and hinder the development of a more inclusive society. Impact on Youth Development Understanding the impact of adultism on youth is essential for creating a more equitable future. Research consistently shows that exposure to discrimination based on age can have profound effects on mental health, self-esteem, and overall well-being. Breaking the Cycle: Fostering an Inclusive Society Educational Reforms: Advocating for educational reforms that empower young individuals with decision-making opportunities can challenge traditional power dynamics. Promoting Youth Activism: Encouraging youth activism not only provides a platform for their voices but also challenges adultist norms and fosters a more inclusive society. Creating Safe Spaces: Establishing safe spaces where young people can express themselves without fear of judgment is crucial for breaking down the barriers imposed by adultism. Adultism in Different Cultures It’s essential to recognize that adultism can manifest differently across cultures. By exploring these cultural nuances, we gain a more comprehensive understanding of the challenges young people face globally. Challenging Adultist Norms: A Call to Action Parenting Practices: Parenting plays a pivotal role in shaping attitudes toward young people. Encouraging open communication and shared decision-making can contribute to breaking down age-based barriers. 
Policy Advocacy: Engaging in policy advocacy to address institutionalized adultism ensures that systemic changes are implemented, creating a more inclusive environment for the younger generation. Media Literacy Programs: Implementing media literacy programs can empower young people to critically evaluate media portrayals and challenge stereotypes. Conclusion: Building a Future Free from Adultism A more just and inclusive society can’t be created without first recognizing and resolving adultism. The world may become more accepting of people of all ages when negative assumptions about them are challenged, policies are revised, and open discourse is encouraged. It’s time to break down the barriers that adultism has erected so that people of all ages can have their opinions heard and valued.
Malaria is an intermittent fever caused by four different types of parasites which infect the red blood corpuscles of man and give rise to periodic paroxysms of fever, enlargement of the spleen, and anaemia. The parasite is transmitted by the Anopheles mosquito.

SYMPTOMS OF MALARIAL FEVER
A malaria infection is generally characterized by the following signs and symptoms:
- Fever and chills
- Nausea and vomiting
- Muscle pain and fatigue
Other signs and symptoms may include:
- Chest or abdominal pain
Some people who have malaria experience cycles of malaria "attacks." An attack usually starts with shivering and chills, followed by a high fever, followed by sweating and a return to normal temperature. Malaria signs and symptoms typically begin within a few weeks after being bitten by an infected mosquito. However, some types of malaria parasites can lie dormant in our body for up to a year.

CAUSES OF MALARIA
Malaria is caused by a type of microscopic parasite. The parasite is transmitted to humans most commonly through mosquito bites.
Mosquito Transmission Cycle
- Uninfected mosquito - A mosquito becomes infected by feeding on a person who has malaria.
- Transmission of parasite - If this mosquito bites anyone in the future, it can transmit malaria parasites to them.
- In the liver - Once the parasites enter our body, they travel to our liver, where some types can lie dormant for as long as a year.
- Into the bloodstream - When the parasites mature, they leave the liver and infect our red blood cells. This is when people typically develop malaria symptoms.
- On to the next person - If an uninfected mosquito bites us at this point in the cycle, it will become infected with our malaria parasites and can spread them to the other people it bites.
Other modes of transmission
Because the parasites that cause malaria affect red blood cells, people can also catch malaria from exposure to infected blood, including:
- From mother to unborn child
- Through blood transfusions
- By sharing needles used to inject drugs

RISK FACTORS OF MALARIA
The biggest risk factor for developing malaria is to live in or to visit areas where the disease is common. There are many different varieties of malaria parasites. The variety that causes the most serious complications is most commonly found in:
- African countries south of the Sahara Desert
- The Asian subcontinent
- New Guinea, the Dominican Republic and Haiti
Risks of more-severe disease
People at increased risk of serious disease include:
- Young children and infants
- Older adults
- Travelers coming from areas with no malaria
- Pregnant women and their unborn children
Poverty, lack of knowledge, and little or no access to health care also contribute to malaria deaths worldwide.
Immunity can wane
Residents of a malaria region may be exposed to the disease so frequently that they acquire a partial immunity, which can lessen the severity of malaria symptoms. However, this partial immunity can disappear if you move to a country where you're no longer frequently exposed to the parasite.

COMPLICATIONS OF MALARIAL FEVER
Malaria can be fatal, particularly malaria caused by the variety of parasite that is common in tropical parts of Africa. The Centers for Disease Control and Prevention estimates that 91 percent of all malaria deaths occur in Africa, most commonly in children under the age of 5.
In most cases, malaria deaths are related to one or more serious complications, including:
- Cerebral malaria - If parasite-filled blood cells block small blood vessels to the brain (cerebral malaria), swelling of the brain or brain damage may occur. Cerebral malaria may cause seizures and coma.
- Breathing problems - Accumulated fluid in the lungs (pulmonary edema) can make it difficult to breathe.
- Organ failure - Malaria can cause the kidneys or liver to fail, or the spleen to rupture. Any of these conditions can be life-threatening.
- Anemia - Malaria damages red blood cells, which can result in anemia.
- Low blood sugar - Severe forms of malaria can cause low blood sugar (hypoglycemia). Very low blood sugar can result in coma or death.
Malaria may recur
Some varieties of the malaria parasite, which typically cause milder forms of the disease, can persist for years and cause relapses.
PREVENTION OF MALARIAL FEVER
If you live in or are traveling to an area where malaria is common, take steps to avoid mosquito bites. Mosquitoes are most active between dusk and dawn. To protect yourself from mosquito bites:
- Cover your skin - Wear pants and long-sleeved shirts.
- Apply insect repellent to skin and clothing - Sprays containing DEET can be used on skin, and sprays containing permethrin are safe to apply to clothing.
- Sleep under a net - Bed nets, particularly those treated with insecticide, help prevent mosquito bites while you are sleeping.
HOMEOPATHIC MEDICINE FOR MALARIA FEVER
ARSENIC ALBUM - Best medicine for intermittent fever and chill. Given when fever comes with periodicity: every day, every third or fourth day, every fortnight, every six weeks, or even once a year. There is pronounced nocturnal aggravation and restlessness. Recommended when symptoms are worse from exertion or after midnight, from cold and damp, and better by warmth; the patient prefers warm wraps. Helpful when the patient feels externally cold but internally hot and burning. There is sweat with great thirst, dyspnoea or exhaustion.
CEDRON - Useful for regular paroxysms of fever coming at the same hour, with chills in the back and limbs or cold feet and hands. There is burning heat in the hands, the pulse is full and accelerated, and there is thirst or a desire for warm water. Also there is shivering and chill with congestion in the head.
CHINA OFFICINALIS - Useful for typical intermittent fever from the marsh miasm. Helpful for fever of tertian or quartan type. There is chill and heat without thirst, with thirst occurring either before or after the chill, and the chill is followed by long-lasting heat. Given when the face is sallow and yellow, with enlarged spleen and loss of appetite. Mostly recommended when symptoms are worse every other day.
EUPATORIUM PERF - Useful for malaria in which the chill is preceded by thirst, with great soreness and aching of bones, as if broken. There is vomiting of bile at the close of the chill. Useful when sweat is scanty and accompanied by nausea. The patient knows a chill is coming on because he cannot drink sufficient water. Useful for fever paroxysms that usually start in the morning.
NATRUM MUR - Useful for malaria with morning chill with thirst, appearing between 9 am and 11 am. Suited when the patient is chilly but worse in the sun or from heat. There is violent thirst, increasing with fever, with coldness of the body or of many body parts. Recommended when there is continuous chilliness. Also there is constipation and loss of appetite.
Phonics is a method of teaching children to read by linking sounds (phonemes) and the symbols that represent them (graphemes, or letters). Phonics is the process of teaching children to correlate an individual sound with its corresponding letter or letter group. Go with your child's own interests. Look for books on topics that really excite them, and don't be afraid to let them read a book that looks 'too easy'.
Phonics lessons for children
Kiz Phonics is an excellent progressive program for teaching kids to read using a synthetic phonics approach, and it gives you a complete set of activities. To avoid getting bogged down and boring your kids, keep phonics lessons short. Around 10 to 15 minutes is ideal, and no more than 20 minutes. Remember, we want them to enjoy reading, not see it as a chore!
Games and activities to help children learn Phonics
There are a variety of games and activities that can help children learn phonics. A scavenger hunt, picking the odd one out, alphabet ball, phonics books, and more can all be great ways for kids to learn about the sounds of letters and words. Letter races, building words, and pancake flip are just a few of the many fun games that can help kids learn phonics. By exploring words with trigraphs, children can also get familiar with how to read by learning about different combinations of letters.
The different stages of Phonics
Most phonics programmes start by teaching children the different sounds that letters make. This is typically done through a mixture of listening activities, rhymes and songs. Once your child has mastered the basic letter sounds, you can move on to more complex concepts such as blending and segmenting words.
Why some children struggle with Phonics
There are a variety of reasons why some children may struggle with phonics. One reason could be that they have difficulty connecting letters to their corresponding sounds. Another reason might be that they are not receiving adequate instruction in phonics. Whatever the reason, it is important to provide struggling readers with the extra support they need to succeed.
Arnica lonchophylla (Long-leaf Arnica)
Habitat: part shade, shade; average moisture; cliffs
Flowering: June - July
Height: 6 to 20 inches
Flowers are single or in a cluster of 2 to 8 flowers that is more or less flat across the top (in profile), the stalks often all attached at the very tip of the stem, though a few stalks may be branched. Flowers are 1 to 2 inches across, daisy-like with a yellow center disk and 6 to 10 (usually 8) yellow rays (petals). Cupping the flower head are 6 to 14 narrow, green floral bracts (phyllaries) covered in spreading hairs. Flower stalks are long and slender, also covered in spreading hairs, and may have a pair of small leaf-like bracts about midway up the stalk.
Leaves and stems: Leaves are basal and opposite, narrowly lance- to egg-shaped, rounded to tapering at the base, tapering to a pointed tip. Both upper and lower surfaces are variably covered in a mix of glandular and non-glandular hairs, though they may be hairless or nearly so. Basal and the lowest stem leaves are up to 6 inches long and 1½ inches wide with a few teeth around the edge, 3 prominent veins, and long, slender stalks. Old, withered basal leaves often persist to the next season. Stem leaves are few, opposite, widely spaced, quickly become much smaller than the basal leaves, and are stalkless and toothless or nearly so. Stems are unbranched except in the flower cluster, single or a few from the base, and covered in short, spreading hairs. Plants spread vegetatively from scaly rhizomes, and non-flowering colonies are not unusual.
Long-leaf Arnica is a very rare, subarctic species, preferring cool, shady, north- or west-facing cliffs that are neither too wet nor too dry. According to the DNR, it was first collected in 1932 from the cliffs of Clearwater Lake in Cook County, near the Canadian border. There are currently only 6 known sites, most of which are near the north shore of Lake Superior; some were first discovered in the 1930s and 1940s and were found to be still surviving in recent years in what are now state parks. It was listed as a state Threatened species in 1984.
Long-leaf Arnica resembles Ragwort (Packera) species, but has opposite stem leaves where Ragworts' are alternate and also usually distinctly lobed. Some references note two subspecies of A. lonchophylla, but these are not recognized in Minnesota; if they were, our populations would be subsp. lonchophylla, with subsp. arnoglossa restricted to Wyoming, South Dakota, and parts of Canada. Documentation on these subspecies is lacking, to say the least.
Arnica has a long history of herbal uses, including treating pain, bleeding, diarrhea and digestive conditions. I once tried it on muscle pain on the advice of a personal trainer I know—the only effect was removing a few dollars from my wallet.
Photos by John Thayer taken in Lake County.
The sooner that children are introduced to science, the earlier they will be able to grasp fundamental scientific concepts – and this will assist in the development of inquiry skills. There are seven fundamental basic science concepts (otherwise known as 'process skills') that preschool children learn early on in their schooling. Some are basic, while others are higher level – and these may or may not be understandable to those who are about to enter kindergarten. Familiarity with the skills that each child could master is of great help when developing a plan for suitable activities to support their current skill set – and move them forward to the mastery of new skills.
The Seven Core Science Concepts for Preschool
There are a number of concepts that are shared between science and mathematics. For instance, the skills of comparison and measurement – leading to classification – have become known as maths concepts. In science, these skills are combined and are known as 'process skills'. It is these science and maths concepts that are at the foundation of the 'intermediate process skills' that will be required during the elementary school journey – and far beyond. A closer look at the seven preschool science concepts will reveal the way in which mastery of one skill has a cascading effect, leading to the mastery of the next skill set.
1. Using the Senses: Observing
Observation is pivotal to the process of gathering and organising information. Pre-schoolers will use all of their senses when observing. An example is when a pre-schooler is presented with a collection of various objects (let us say apples): they employ their senses to sort the items using colour, texture, taste and size.
2. Similarities and Differences: Comparing
Once children have become familiar with a series of items they will begin to compare them. They will take note of similarities as well as differences, such as like and unlike colours, disparate weights, and the sizes of those apples.
3. Grouping and Sorting: Classifying
Classifying represents a higher order of comparing. Observing and comparing will allow children to use the information gathered to begin the process of sorting and grouping. Items will be sorted according to information gathered by observation. They will use several criteria in the sorting process; for instance, those apples with stems versus those without, the colour of the apples, and their size are only some of the criteria that could be used.
4. Working with or Describing Quantities: Measuring
Measuring is the next skill to be mastered. Children will use measurement in a variety of different ways as their familiarity with the items grows. Those apples, for instance, can be classified according to size: which is larger and which is smaller. They can use tools such as a tape measure or a ruler rather than classifying the apples according to a direct visual comparison. They can measure weight using a scale, rather than simply estimating based on holding the apples in their hands.
5. Describing Ideas: Communicating (using pictures, graphs, writing, or other visual communications)
Most pre-schoolers will eventually develop the common process skill of communicating. In terms of science, this is the manner in which these children will communicate ideas and concepts based on their observations.
There are a variety of ways that this can be accomplished, including (but not limited to) a picture journal, or drawing pictures and having the teacher write down their thoughts about the process.
6. Using Information Gathered and Organised: Inferring
The process of inferring is when children take information gathered during one experience and base an expected outcome for new data on the patterns provided by that experience. This process assists children in making sense of, or deriving meaning from, previous process experiences. An example would be children who are tasked with watering a classroom plant each day. Let us suppose that it is the school holidays and the classroom is shut for a week. On their return, the children note that the plant is wilting. They will also note that the soil is dry. When asked about this – and the reason that the plant is wilting – they will recall that the plant had required water, in addition to sunlight and soil. They may then infer that the plant needs water.
Here are 5 emergencies faced by Indigenous communities due to the climate crisis
Indigenous people and local communities are disproportionately affected by the climate crisis because they live in climate-vulnerable areas and regions prone to extreme weather events. They also have limited access to resources and infrastructure to face the impacts of climate change. Here are some of the most urgent situations:
- Loss of natural resources
Indigenous and local communities rely on natural resources for their subsistence, but these natural resources are being depleted as a result of the climate crisis. Disrupted climate patterns and biodiversity loss are affecting the availability of food, water, and traditional medicines. This endangers food security and the health of these communities, who face the challenging task of adapting to these new conditions and finding sustainable ways to survive. For example, in regions where the climate is becoming drier, traditional crops can fail, leaving communities without a reliable food source.
- Forced displacement
Indigenous and local communities often depend on land and natural resources for their livelihoods. As these resources become scarcer, forced displacement, food insecurity, and poverty increase. Forced displacement and land loss can lead to the disintegration of social and community structures, as well as the loss of unique languages, customs, and traditions. In 2020, 30 million people worldwide were forced to leave their homes due to climate-related disasters (Save the Children: "La crisis climática fuerza a un número creciente de niños, niñas y adolescentes a dejar sus hogares cada año"), a number three times larger than the displacement caused by conflicts and violence.
- Impacts on health and well-being
In many cases, Indigenous people and local communities lack the healthcare infrastructure and resources to deal effectively with diseases and natural disasters that are made worse by extreme temperature changes. The World Health Organisation predicts that between 2030 and 2050, climate change will cause an additional 250,000 deaths annually from malaria, diarrhoea, extreme heat, and malnutrition ("La OMS insta a los países a proteger la salud contra el cambio climático", who.int). Many of these deaths will disproportionately affect the most vulnerable communities, including Indigenous and local communities.
- Loss of culture and identity
Indigenous peoples and local communities possess a wealth of traditional and cultural knowledge passed down from generation to generation. However, as their environments rapidly change, their ways of life are disrupted, jeopardising their ancestral knowledge of sustainable resource management and adapted agricultural techniques. The loss of this knowledge deprives the world of an important source of wisdom for addressing environmental challenges, one that provides innovative and sustainable solutions to the climate crisis. It is crucial to value, preserve, and promote cultural practices and traditional knowledge.
- Lack of representation
Indigenous people and local communities are often excluded from decision-making processes concerning climate change. This prevents them from having a voice in how climate change affects their lives and how it can be addressed.
The climate crisis is having a devastating impact.
Indigenous and local communities face a series of urgent and unprecedented situations, including loss of land and livelihood, increased vulnerability to extreme weather events, health impacts, loss of culture and identity, and lack of representation in decision-making spaces. It is important that we take action to address these urgent situations. This includes prioritising the resources needed to adapt to the climate crisis, ensuring their voices are heard in decision-making processes, and protecting their traditional lands and resources. In doing so, we must empower their voices and create choices to build a more resilient future for everyone.
An Exploration of Urban Artistic Culture and African American Experience
Throughout the 1900s, America experienced exponential growth in city development. Amy Absher has described how "the city was the industrial giant that was central to everything, from culture to economics", so unsurprisingly many of the new cultural developments had a uniquely urban feel. Recollections of twentieth-century American city living tend to bring back hums of the sweet notes of jazz, ideas of regular cinema trips and visions of new clubbing grooves. One fascinating nuance of this time is the notable part African American citizens played in shaping these growing metropolitan hubs, as new behaviours began to emerge during the age of urban development.
It is impossible to ignore the importance of spots such as Harlem (New York) and Bronzeville (Chicago) when exploring African American influences on city culture. These areas have been described by Robert Boyd as "Black Metropolises", with Harlem aiding black entry into the arts and Bronzeville likewise doing so for entrepreneurship opportunities. These defined neighbourhoods were "the epicentres [for] an explosion of creative activity". Harlem, in particular, was a product of 'The Great Migration', the process whereby rural, southern, black citizens of America migrated to the northern cities from 1916 onwards. The influx of African Americans drove down property prices, so local white Americans moved out of these areas, which led to the creation of these distinct urban localities that were increasingly defined by the race of their residents. Despite this, Harlem attracted a remarkable concentration of intellect and talent within its population, serving to grow it into the symbolic capital of a cultural awakening. It was home to one of the most significant crossovers of urban and racial artistic expressions during this time: The Harlem Renaissance.
This dynamic movement took form between roughly 1919 and 1929. It involved "inventing and reimagining art and cultural practices" — it was shaping what it meant to be black in modern America. The Harlem Renaissance was a vibrant cultural movement, mainly centred around the creative arts and new intellectual currents. This innovative decade birthed famous household names, such as Duke Ellington, Langston Hughes and Roland Hayes. The outpouring resulted from more than just the after-effects of the Great Migration. It was also a product of dramatically rising levels of literacy, the formation of national organisations dedicated to pressing for African American civil rights, an uplifting of race pride and the opening of socioeconomic opportunities. A culmination of all of these movements allowed Harlem and its urban Renaissance to thrive.
Some of the cultural expressions, such as literature, may not seem purely "urban" at first. However, once we start to understand the central role that the location of Harlem had as the breeding ground for this spurt, we can see how these art forms stemmed from the developing issues and attitudes towards metropolitan life. A lot of what the Renaissance authors were articulating in their writing were topics that had arisen from their struggles of city dwelling. Thus, the Harlem Renaissance's outburst of African American literary culture simultaneously became a reflection of the nation's urban society.
By looking at sources such as Campbell's Nightclub Map of Harlem (1932), the relationship between Harlem's newly developing subculture and broader race relations is clear. The map illustrates almost exclusively black citizens enjoying the urban activities of the day, such as clubbing and theatre-going, and even alludes to the option of visiting one of the 500 speakeasies there, in defiance of the prohibition then in force. The map displays the energy and diversity of black culture in this concentrated urban area, despite Harlem being described by some, such as Cary Wintz, as "an overcrowded ghetto". Harlem (and New York in general) was perhaps most famous for creating the major nightclubs, like 'The Cotton Club', in twentieth-century America. Yet, by the 1920s, their prevalence had spread to other major metropolises too. Although clubbing culture by the end of the century was not necessarily "race-based" in this same way, it is indisputable that strongly racialised areas like Harlem popularised this new element of urban living. Without African American influences at this time, clubbing culture may not have formed into the same phenomenon it is today.
To what extent can we truly say that race was a determining factor for the Renaissance's cultural additions? George Schuyler has argued, somewhat simplistically, that this period's black art and literature was in fact identical to that of their white American counterparts, suggesting that the progressive sway of Harlem and black urban communities was minimal; they supposedly offered nothing unparalleled or unique. However, Hughes has famously refuted this. For example, he instead stressed the special qualities of black literature, whilst acknowledging that the urban artistic tendencies of the black middle class did tend to more closely mirror those of their white counterparts. He offered a more dynamic and valid explanation for the influence of African Americanism, recognising the interaction between both race and other socio-economic factors, like class, upon urban cultural expressions. Either way, it is equally important not to generalise the relatively small area of Harlem and the era of the 1920s as being representative of the extent of black influence upon all of urban America's expressions. What the Harlem Renaissance does still offer is a prominent example of the interplay between being an African American citizen and embodiments of city life in the twentieth century. The Renaissance illuminates how, within urban society, new art mediums and localities like Harlem allowed racial subsections of culture to flourish.
A major bonus of modern city living was the increasing availability of leisure activities. Besides the emergence of clubbing, race affected many other areas of city entertainment too. For example, the cinema industry boomed in America during the twentieth century, especially with the growing popularity of sound movies over the 1920s-30s. The role that race had in forming a distinct culture here was two-fold. Firstly, even the experience of attending American cinemas (and other public spaces) was impacted by ethnicity because, for much of the 1900s, they were segregated. People's encounter with the medium was immediately divided. Secondly, racial heritages impacted the expressions of the actual films themselves within the cinemas too. From 1912 onwards, African American producers started being able to properly create films.
Paula Massood has explained how many of these black-produced movies tended to be city-based films, depicting their creators' own experiences of life in urban America. Their urban accounts, related to their race, were reflected in the films. Simultaneously, these portrayals became a large part of the very urban culture that they depicted. It was all very "meta".
This cultural dialogue would be incomplete if it did not devote specific attention to the importance of music. Irrespective of racial elements, William Sites has claimed that there were "important connections between urban culture and musical innovation" generally in American cities. The cities produced many new musical and social practices, but these also crucially reinforced certain social identities. Perhaps one of the most famous genres that comes to mind, when harking back to twentieth-century black artists' influences, is jazz. This distinctive subculture originally began earlier, in late nineteenth-century New Orleans, but it was popularised in the major northern cities from the 1920s. Although some commentators such as George Schuyler did not believe that this art form had an essential racial core (similarly to his views on many other "black" art forms), his argument lacks a deeper understanding of the subtle indicators that do suggest otherwise. For example, within metropolitan spheres, another blossoming part of modern culture was theatre-going. Burton Peretti has explained how some theatres "gradually replaced ragtime standards with Jazz in black areas" only, demonstrating a specific connection between these two aspects of city life. This musical style was more popular amongst distinct racial groups, namely African Americans. Jazz was not an isolated example of stereotypically racialised music in urban America. Other African American-led forms included Motown in the 60s, plus Hip Hop in the latter half of the century too. Undeniably, African Americanism certainly helped to develop these defined cultural strands since, as earlier explained by Sites, they formed a fundamental part of a wider racial identity.
Of course, reducing any chunk of society down to one influence, such as race, is problematic. In twentieth-century America, similarly to any other nation or period, economics markedly shaped urban culture too. In 1929, America was hit by The Great Depression. This downturn impacted all artists' and writers' abilities to produce work, not just those who were African American, as Darryl Dickson-Carr has noted. The Depression indeed caused "jarringly significant shifts" in African American writing, but these changes occurred for most other urban writers too. However, race can combine here to reveal another layer to the story. Dickson-Carr has also acknowledged that black unemployment during this time, in both the artistic industries and others, was three times higher than the national average. This implied correlation between wider economic disadvantage and black Americans is hardly new nor surprising information. In fact, Craig Werner and Sandra Shannon have proposed that although Harlem was "a laboratory where cultural traditions forged", this was largely in response to "slavery and economic brutality". Harlem, a heritage site for many American urban cultural behaviours, was a product of both its racial past and the economic conditions of the time.
History continued (as it still does now) to overshadow much about black experience in the U.S., while the reconstruction after the Civil War and the impact of policies of segregation in the South also shaped black consciousness and identity profoundly. The long-lasting connections between race and economic circumstances were often mutually reinforcing. Although racial components have never acted, and will never act, in isolation, America's current thriving urban scene still owes a lot to the artistic endeavours of the African American communities of the twentieth century. Understanding the remarkable Harlem Renaissance of the 1920s provides us with a rich and fruitful lens through which we can find the explanations behind the distinct racial experiences of city life and how these were then reflected within its creative circles. It is undeniable that race and the city have often been inextricably linked — a phenomenon that we still see (and should celebrate) in American urban culture today.
The Age of Dinosaurs was so many millions of years ago that it is very difficult to date exactly. Scientists use two kinds of dating techniques to work out the age of rocks and fossils. The first method is called relative dating. This considers the positions of the different rocks in sequence in relation to each other and the different types of fossil that are found in them. The second method is called absolute dating and is done by analysing the amount of radioactive decay in the minerals of the rocks. Scientists find out the age of a dinosaur fossil by dating not only the rocks in which it lies, but those below and above it. Sometimes, scientists already know the age of the fossil because fossils of the same species have been found elsewhere and it has been possible to establish accurately from those when the dinosaur lived. Geologists call this the principle of lateral continuity. A fossil will always be younger than fossils in the beds beneath it; this is called the principle of superposition.
An array of absolute dating techniques has made it possible to establish the ages of rocks directly. We date the rocks and, by inference, we can date the fossils. Originally, fossils only provided us with relative ages because, although early paleontologists understood biological succession, they did not know the absolute ages of the different organisms. It was only in the early part of the 20th century, when isotopic dating methods were first applied, that it became possible to discover the absolute ages of the rocks containing fossils. In most cases, we cannot use isotopic techniques to directly date fossils or the sedimentary rocks in which they are found, but we can constrain their ages by dating igneous rocks that cut across sedimentary rocks, or volcanic ash layers that lie within sedimentary layers.
Isotopic dating of rocks, or the minerals within them, is based upon the fact that we know the decay rates of certain unstable isotopes of elements, and that these decay rates have been constant throughout geological time. It is also based on the premise that when the atoms of an element decay within a mineral or a rock, they remain trapped in the mineral or rock, and do not escape. One commonly used isotope is potassium-40, which decays to argon-40 with a half-life of about 1.3 billion years. In order to use the K-Ar dating technique, we need to have an igneous or metamorphic rock that includes a potassium-bearing mineral. One good example is granite, which contains the mineral potassium feldspar. Potassium feldspar does not contain any argon when it forms. Over time, the 40K in the feldspar decays to 40Ar. The atoms of 40Ar remain embedded within the crystal, unless the rock is subjected to high temperatures after it forms.
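The arithmetic behind this follows directly from exponential decay. As a minimal sketch (plain Python with only the standard library, and using the simplifying assumption that every potassium-40 decay produced trapped argon-40; real K-Ar work also corrects for the roughly 11% branching of 40K decays to 40Ar), the snippet below turns a measured daughter/parent ratio into an age:

```python
import math

HALF_LIFE_K40 = 1.25e9                 # years; half-life of potassium-40
LAMBDA = math.log(2) / HALF_LIFE_K40   # decay constant, per year

def age_from_ratio(daughter_parent_ratio: float) -> float:
    """Age in years from a measured 40Ar/40K ratio.

    Uses the standard decay result t = (1/lambda) * ln(1 + D/P),
    under the simplification that all decayed 40K became trapped 40Ar.
    """
    return math.log(1.0 + daughter_parent_ratio) / LAMBDA

# Example: a mineral with equal numbers of 40Ar and 40K atoms (D/P = 1)
# crystallized one half-life ago, i.e. about 1.25 billion years.
print(f"{age_from_ratio(1.0):.3e} years")
```

Running it prints roughly 1.25e+09 years, confirming that a daughter/parent ratio of 1 corresponds to one half-life.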
The geological time scale is used by geologists and paleontologists to measure the history of the Earth and life. It is based on the fossils found in rocks of different ages and on radiometric dating of the rocks. Sedimentary rocks (made from mud, sand, gravel or fossil shells) and volcanic lava flows are laid down in layers or beds. They build up over time, so that the layers at the bottom of the pile are older than the ones at the top. In practice, great care is necessary in applying isotopic methods to date rocks.
Figure 3: The sedimentary rock layers exposed in the cliffs at Zumaia, Spain, are now tilted close to vertical. According to the principle of original horizontality, these strata must have been deposited horizontally and then tilted after they were deposited. In addition to being tilted, the layers have been faulted (dashed lines on the figure). Applying the principle of cross-cutting relationships, the fault that offsets the layers of rock must have occurred after the strata were deposited.
The principles of original horizontality, superposition, and cross-cutting relationships allow events to be ordered at a single location. However, they do not reveal the relative ages of rocks preserved in two different areas. In this case, fossils can be useful tools for understanding the relative ages of rocks. Each fossil species reflects a unique period of time in Earth's history. The principle of faunal succession states that different fossil species always appear and disappear in the same order, and that once a fossil species goes extinct, it disappears and cannot reappear in younger rocks (Figure 4).
Figure 4: The principle of faunal succession allows scientists to use the fossils to understand the relative age of rocks and fossils. Fossils occur for a distinct, limited interval of time.
How Do Scientists Determine the Age of Dinosaur Bones?
Lake Turkana has a geologic history that favored the preservation of fossils. Scientists suggest that the lake as it appears today has only been around for the past 200,000 years. The current environment around Lake Turkana is very dry.
Relative dating is used to determine the relative order of past events by comparing the age of one object to another. This determines where in a timescale the object fits without finding its specific age; for example, you could say you're older than your sister, which tells us the order of your birth, but we don't know what age either of you are.
There are a few methods of relative dating; one of these is studying the stratigraphy. Stratigraphy is the study of the order of the layers of rocks and where they fit in the geological timescale. This method is most effective for studying sedimentary rocks. Cross dating is a method of using fossils to determine the relative age of a rock. Fossil remains have been found in rocks of all ages, with the simplest of organisms being found in the oldest of rocks. The more basic the organism, the older the rock is. This practice supports the theory of evolution, which states that simple life forms gradually evolve over time to form more complex ones. If undisturbed, layers of sedimentary rocks help to determine the relative age of rock: the oldest being at the base and the newest on top. (Source: Tes Teach with Blendspace.) Absolute dating, by contrast, finds the actual age of the object: not just that you are older than your sister, but that you are exactly 15.
Different methods of dating fossils
Fossils are buried with plenty of clues that allow us to reconstruct their history. In 2013, in Ethiopia's Afar region, our research team discovered a rare fossil jawbone belonging to our genus, Homo. To solve the mystery of when this human ancestor lived on Earth, we looked to nearby volcanic ash layers for answers. Working in this part of Ethiopia is quite the adventure. It is a region where 90 degrees Fahrenheit seems cool, dust is a given, water is not, and a normal daily commute includes racing ostriches and braking for camels as we forge paths through the desert.
But what exactly is a fossil, and how is it formed? Have you ever wondered how science knows the age of a fossil? Read on to find out! If you think of a fossil, surely the first thing that comes to your mind is a dinosaur bone or a petrified shell that you found in the forest, but a fossil is much more. There are different types of fossils:
- Petrified fossils: for example, the petrified remains of a horseshoe crab and its footsteps.
- Amber: fossilized resin more than 20 million years old.
- Subfossils: when the fossilization process is not complete, the remains are known as subfossils. This is the case for some of our most recent ancestors from the Chalcolithic (Copper Age), such as the ice mummy Ötzi, who died about 5,300 years ago.
- Living fossils: the most famous case is the coelacanth, which was believed extinct for 65 million years until it was rediscovered in 1938, but there are other examples, such as the nautilus, whose shell can be compared with that of an ammonite millions of years old.
- Pseudofossils: rock formations that seem to be the remains of living beings, but in reality were formed by geological processes.
Radiometric dating (also called radioactive dating or radioisotope dating) is a technique used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay. Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale.
By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change.
Geologists obtain a wide range of information from fossils. Although the recognition of fossils goes back hundreds of years, the systematic cataloguing and assignment of relative ages to different organisms from the distant past—paleontology—only dates back to the earliest part of the 19th century. However, as anyone who has gone hunting for fossils knows, this does not mean that all sedimentary rocks have visible fossils or that they are easy to find. Fossils alone cannot provide us with numerical ages of rocks, but over the past century geologists have acquired enough isotopic dates from rocks associated with fossiliferous rocks (such as igneous dykes cutting through sedimentary layers) to be able to put specific time limits on most fossils. The history of life over the past several hundred million years is selective but telling: insects, which evolved from marine arthropods, invaded land during the Devonian, as did the first amphibians (i.e. land vertebrates).
Video transcript – [Instructor] If you go to a dinosaur museum, then you'll see guides telling you things like this dinosaur lived 50 million years ago. That one lived 70 million years ago. My question is, how do we know these things?
Plagiarism means taking someone else's work and passing it off as your own. It is easy to copy and paste information into assignments, and when you are under pressure and facing intensive deadlines it can be very tempting to do so. However, such methods are intrinsically dishonest, and as such the penalties for plagiarism at the University are severe. You must also be careful to avoid plagiarizing unintentionally. This can happen if you are unaware of the rules for academic writing. Three basic skills are essential for avoiding plagiarism:
Quotation, paraphrasing, and summarizing: If you use the words of someone else directly, you must indicate this through proper use of quotation. Directly quoted text should be used sparingly. It is almost always better to paraphrase (i.e. put into your own words) the ideas of others. Apart from being more interesting to read, properly paraphrased ideas show your reader that you have mastered the material. Another important skill is the ability to summarize lengthy texts into a few sentences. Again, such summaries should be in your own words.
Citing sources: When quoting, paraphrasing, and summarizing, you need to consistently follow the rules of a particular citation style. There are many different styles, so check with your teacher if you are not sure which style to use for a particular assignment.
6.1 Areas between Curves
- Just as definite integrals can be used to find the area under a curve, they can also be used to find the area between two curves.
- To find the area between two curves defined by functions, integrate the difference of the functions.
- If the graphs of the functions cross, or if the region is complex, use the absolute value of the difference of the functions. In this case, it may be necessary to evaluate two or more integrals and add the results to find the area of the region.
- Sometimes it can be easier to integrate with respect to y to find the area. The principles are the same regardless of which variable is used as the variable of integration.
6.2 Determining Volumes by Slicing
- Definite integrals can be used to find the volumes of solids. Using the slicing method, we can find a volume by integrating the cross-sectional area.
- For solids of revolution, the volume slices are often disks and the cross-sections are circles. The method of disks involves applying the method of slicing in the particular case in which the cross-sections are circles, and using the formula for the area of a circle.
- If a solid of revolution has a cavity in the center, the volume slices are washers. With the method of washers, the area of the inner circle is subtracted from the area of the outer circle before integrating.
6.3 Volumes of Revolution: Cylindrical Shells
- The method of cylindrical shells is another method for using a definite integral to calculate the volume of a solid of revolution. This method is sometimes preferable to either the method of disks or the method of washers because we integrate with respect to the other variable. In some cases, one integral is substantially more complicated than the other.
- The geometry of the functions and the difficulty of the integration are the main factors in deciding which integration method to use.
6.4 Arc Length of a Curve and Surface Area
- The arc length of a curve can be calculated using a definite integral.
- The arc length is first approximated using line segments, which generates a Riemann sum. Taking a limit then gives us the definite integral formula. The same process can be applied to functions of \(y\).
- The concepts used to calculate the arc length can be generalized to find the surface area of a surface of revolution.
- The integrals generated by both the arc length and surface area formulas are often difficult to evaluate. It may be necessary to use a computer or calculator to approximate the values of the integrals.
6.5 Physical Applications
- Several physical applications of the definite integral are common in engineering and physics.
- Definite integrals can be used to determine the mass of an object if its density function is known.
- Work can also be calculated from integrating a force function, or when counteracting the force of gravity, as in a pumping problem.
- Definite integrals can also be used to calculate the force exerted on an object submerged in a liquid.
6.6 Moments and Centers of Mass
- Mathematically, the center of mass of a system is the point at which the total mass of the system could be concentrated without changing the moment. Loosely speaking, the center of mass can be thought of as the balancing point of the system.
- For point masses distributed along a number line, the moment of the system with respect to the origin is \(M = \sum_{i=1}^{n} m_i x_i\). For point masses distributed in a plane, the moments of the system with respect to the x- and y-axes, respectively, are \(M_x = \sum_{i=1}^{n} m_i y_i\) and \(M_y = \sum_{i=1}^{n} m_i x_i\).
- For a lamina bounded above by a function \(f(x)\), the moments of the system with respect to the x- and y-axes, respectively, are \(M_x = \rho \int_a^b \frac{[f(x)]^2}{2}\,dx\) and \(M_y = \rho \int_a^b x f(x)\,dx\).
- The x- and y-coordinates of the center of mass can be found by dividing the moments around the y-axis and around the x-axis, respectively, by the total mass. The symmetry principle says that if a region is symmetric with respect to a line, then the centroid of the region lies on the line.
- The theorem of Pappus for volume says that if a region is revolved around an external axis, the volume of the resulting solid is equal to the area of the region multiplied by the distance traveled by the centroid of the region.
6.7 Integrals, Exponential Functions, and Logarithms
- The earlier treatment of logarithms and exponential functions did not define the functions precisely and formally. This section develops the concepts in a mathematically rigorous way.
- The cornerstone of the development is the definition of the natural logarithm in terms of an integral: \(\ln x = \int_1^x \frac{1}{t}\,dt\).
- The function \(e^x\) is then defined as the inverse of the natural logarithm.
- General exponential functions are defined in terms of \(e^x\), and the corresponding inverse functions are general logarithms.
- Familiar properties of logarithms and exponents still hold in this more rigorous context.
6.8 Exponential Growth and Decay
- Exponential growth and exponential decay are two of the most common applications of exponential functions.
- Systems that exhibit exponential growth follow a model of the form \(y = y_0 e^{kt}\).
- In exponential growth, the rate of growth is proportional to the quantity present. In other words, \(y' = ky\).
- Systems that exhibit exponential growth have a constant doubling time, which is given by \((\ln 2)/k\).
- Systems that exhibit exponential decay follow a model of the form \(y = y_0 e^{-kt}\).
- Systems that exhibit exponential decay have a constant half-life, which is given by \((\ln 2)/k\).
6.9 Calculus of the Hyperbolic Functions
- Hyperbolic functions are defined in terms of exponential functions.
- Term-by-term differentiation yields differentiation formulas for the hyperbolic functions. These differentiation formulas give rise, in turn, to integration formulas.
- With appropriate range restrictions, the hyperbolic functions all have inverses.
- Implicit differentiation yields differentiation formulas for the inverse hyperbolic functions, which in turn give rise to integration formulas.
- The most common physical applications of hyperbolic functions are calculations involving catenaries.
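Because these summaries lean on several integral formulas, a quick numerical check can make them concrete. The sketch below is illustrative only (plain Python, using SciPy's quadrature routine as an assumed dependency; the specific functions and the growth rate k are hypothetical choices, not examples from the text). It verifies an area between curves, a disk-method volume, and a doubling time:

```python
import math
from scipy.integrate import quad  # assumed available; any quadrature routine works

# Area between curves: A = integral of (top - bottom) over [a, b].
# Illustrative choice: f(x) = x + 2 above g(x) = x**2 on [-1, 2].
area, _ = quad(lambda x: (x + 2) - x**2, -1, 2)
print(f"area between curves  : {area:.4f}")    # exact value is 9/2

# Disk method: V = pi * integral of f(x)**2 dx.
# Illustrative choice: revolve f(x) = sqrt(x) on [0, 4] about the x-axis.
volume, _ = quad(lambda x: math.pi * (math.sqrt(x)) ** 2, 0, 4)
print(f"volume of revolution : {volume:.4f}")  # exact value is 8*pi

# Exponential growth: doubling time is ln(2)/k.
k = 0.05  # hypothetical growth rate
print(f"doubling time        : {math.log(2) / k:.4f}")
```

Comparing the printed values against the closed-form answers (4.5 and about 25.1327) is a simple way to confirm the formulas were set up correctly.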
Lesson 3.3: Parentheses A mathematical expression can be enclosed in parentheses to indicate that the expression should be evaluated first. Parentheses are also used in math to group items together or to indicate that the numbers are linked in some way. Parentheses are used in pairs as an opening parenthesis and a closing parenthesis. The opening parenthesis is formed with a dot five prefix, and a dots one two six root. The closing parenthesis is formed with a dot five prefix followed by a dots three four five root. Follow print for use of parentheses. There is usually no space between the parenthesis and the items associated with them. Parentheses terminate numeric mode. The numeric indicator is required before a numeral that immediately follows an opening or closing parenthesis.
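Those dot patterns map directly onto Unicode's braille block, where each character is U+2800 plus one bit per raised dot (dot 1 is bit 0, dot 2 is bit 1, and so on). As a small illustrative sketch in Python (not part of the lesson itself, and assuming the two-cell prefix/root encoding described above), the following builds the opening and closing parenthesis cells:

```python
# Each braille cell is U+2800 plus one bit per raised dot:
# dot 1 -> bit 0, dot 2 -> bit 1, ..., dot 6 -> bit 5.
def cell(*dots: int) -> str:
    """Return the Unicode braille character for the given dot numbers."""
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

# Opening parenthesis: dot-5 prefix followed by a dots-1-2-6 root.
opening = cell(5) + cell(1, 2, 6)
# Closing parenthesis: dot-5 prefix followed by a dots-3-4-5 root.
closing = cell(5) + cell(3, 4, 5)

print(opening, closing)  # expected: ⠐⠣ ⠐⠜
```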
How does the brain work? The brain works like a big computer. It processes information that it receives from the senses and body, and sends messages back to the body. But the brain can do much more than a machine can: We think and experience emotions with our brain, and it is the root of human intelligence. The human brain is roughly the size of two clenched fists and weighs about 1.5 kilograms. From the outside it looks a bit like a large walnut, with folds and crevices. Brain tissue is made up of about 100 billion nerve cells (neurons) and one trillion supporting cells that stabilize the tissue. The brain is made up of various parts, each with its own functions: - the cerebrum - the diencephalon – including the thalamus, hypothalamus and pituitary gland - the brain stem – including the midbrain, pons and medulla - the cerebellum
The International Space Station has a new tool for studying how living things fare in the unforgiving environment of space. BIOMEX, the Biology and Mars Experiment, is one of four experiments in the Expose-R2 facility, which was launched to the station in July onboard a Progress cargo vehicle. Since August 20, two of the facility's "Space Trays" have been exposed to the vacuum outside the ISS. Starting in October, the samples will face complete space conditions, including solar and cosmic radiation. BIOMEX is an international project involving 26 institutions, and is led by Jean-Pierre de Vera of the German Aerospace Center. The experiment contains numerous chambers in which organisms such as bacteria, archaea, fungi, lichens, and mosses, as well as large organic molecules, can be exposed to space conditions for 12 to 18 months. Some of these biomolecules and organisms are mixed in with Mars-like soil. The objective of BIOMEX is to test to what extent cell components and organisms can resist the rigors of space, including conditions they would be exposed to on Mars. Once the experiment is complete (after about a year) and the samples are returned to Earth, they'll be examined in detail. It will be of particular interest to learn what signs of life will be left after the organisms have been exposed to vacuum, radiation, desiccation, and other stresses. BIOMEX is not the only experiment on the Expose-R2 facility. Another study called BOSS (Biofilms Organisms Surfing Space) will test biofilms of microbes and plankton, and will have objectives similar to those of BIOMEX. Microorganisms in natural environments mostly occur in the form of biofilms, which protect them and make them more resistant to external stresses. These kinds of experiments are a critical first step in gaining the essential biological information we'll need to undertake human missions to Mars and beyond. The data also will help us to evaluate the panspermia hypothesis—the idea that microorganisms can survive the journey from one planet to another.
Over a period of about 500 years, from 750 A.D. to 1250 A.D., Central Asia produced some of the world's finest minds, and its workshops produced exquisite goods that were recognized and traded across Europe and Asia. During this period, Central Asia benefitted from being at the center of the Silk Road connecting East Asia to the Middle East and Europe. But by the 18th century, Central Asia had ceased to be the "center of Asia" and was no longer astride major trade routes, as trade between Asia and Europe moved to sea routes. Worse, during the "Great Game," the Russian and British Empires agreed on a "buffer" along the northern border of Afghanistan. With the end of the British Empire in India in 1947 and, more importantly, of the Soviet Union in 1991, the artificial divide across Central Asia was removed. With the rapid growth of the Chinese and Indian economies, Central Asia now lies between Asian and European markets that account for two-thirds of the global population, two-thirds of world GDP, and more than two-thirds of global trade. After nearly 200 years of isolation, can Central Asia once again become a vital link in the global economy?
Of critical importance to the region is the rebalancing of the Chinese economy: its center of gravity is moving west, away from the eastern seaboard and closer to the land border with Central Asia. As China moves production inland, land transport through Central Asia to Europe becomes increasingly attractive. For goods being shipped to Europe from eastern China, the alternative is to first ship eastwards to Shanghai before sailing for six weeks to Europe. For Chinese factories, shipping west by land is cheaper than air and faster than sea transport. Better connectivity within Central Asia would allow the region to capture an important share of global trade. Major investments are being made to improve transport networks in Central Asia. Under China's Belt and Road Initiative (BRI), Central Asia is benefitting from ongoing and planned upgrading works. In addition to east-west corridors, there are opportunities for opening up to the south, in terms of reopening ancient trade routes connecting Central Asia to South Asia.
What are the priorities: hard infrastructure or soft policies?
Within Central Asia, the ongoing major reform program in Uzbekistan is providing an opportunity to address the regional connectivity agenda. Policies are being adopted to open borders and create regional trading networks. But the region has not yet fully leveraged its geographic position. Central Asia's greatest challenge is the inefficiency of its borders, which is of greater concern than the quality of infrastructure. "Trading across Borders" is the region's weakest indicator in Doing Business: the Kyrgyz Republic is ranked 70th, Kazakhstan 102nd, Tajikistan 148th, and Uzbekistan 165th, while Russia ranks 99th and the average ranking for Europe and Central Asia is 54th. Border delays and unpredictability have a disproportionate effect on economic activity, especially on agricultural goods, many of which become worthless if stuck at the border for 2-3 days. However, BRI investments can spur development in Central Asia, with gains estimated at more than 20 percent of GDP for each of Central Asia's five economies. These benefits would be created mostly by policy reforms to reduce border delays—not by infrastructure investments (Figure 1). The major gains from BRI come primarily from structural reforms that release procedural bottlenecks and reduce logistics costs, thereby increasing overall trade.
Thus, the main benefits will come from adjustments to institutional, legal, and market frameworks.
Transit routes or full integration into the global economy?
Central Asia should also look beyond the China-Europe transit trade and take measures to fully integrate with the global economy. This means developing trade corridors that can transform their economies and benefit all people along the corridors. Countries need to change the mindset that trade corridors are merely transport engineering feats designed to move vehicles and commodities. They must develop investment programs based on sound economic analysis of how corridors can help spur urbanization and create local jobs while minimizing negative environmental and social impacts. The analysis must specifically ensure that local populations whose lives are disrupted can reap the benefits of better transport connectivity. Trade corridors offer enormous potential to boost Central Asia's economic growth, spur job creation, and reduce poverty—if the new trade routes spread their benefits widely and limit negative impacts. However, the corridors proposed across Central Asia would cost trillions of dollars, far exceeding the financing resources available. Therefore, countries need to prioritize those corridors that will deliver the most impact on economies and people. Collecting data and more information is key to identifying opportunities and minimizing risks. The hard truth is that corridor initiatives create both winners and losers: as connectivity improves, more educated and skilled people can migrate to better jobs in urban areas, while unskilled workers are left behind in depopulated rural areas with few economic prospects. However, well-designed investment programs can alleviate potential adverse impacts and help local people reap the benefits more widely. They are key instruments in ensuring that Central Asia is an integral player in "Asia's century" as it once was over a millennium ago.
This course is specifically designed for students who need to improve their listening comprehension skills and to give non-native English speakers American cultural and historical information in context as it relates to music. It will include instruction on listening for details and for main ideas in the songs, understanding implied and inferred underlying meaning in songs, and recognizing contracted and reduced speech forms in native speakers. The content and emphasis of this class are cultural, historical and academic. The goals for English language learners are to improve their note-taking and listening comprehension skills, to better their research and reporting abilities through biography and song presentations, and to gain a more complete knowledge of American music history and the accompanying cultures through intensive study. IMPORTANT: Students must have an advanced level of English to take this course. Students will be required to take an English language proficiency exam before the course begins. This exam is required and comes at no additional cost. For exam details, click on the course information below. For questions, please contact [email protected]. Course Number: COMM-40016 Credit: 2.00 unit(s)
Palms are monocots, a group they share with grasses, orchids, and bromeliads. Palms are not capable of secondary growth and thus do not have annual growth rings. Palms emerge from the ground at their lifelong diameter, completing their thickening growth before elongating. By contrast, a tree continually increases in diameter as it ages, which can be seen in its growth rings. Palms have vascular bundles that transfer water and nutrients to the ‘palm heart’. These bundles have been compared to steel-reinforced concrete, with the vascular bundles acting as the ‘steel’ inside the concrete. There is a fixed number of bundles that must last the life of the palm. If the palm heart is killed, the entire palm dies. Roots: All palms have roots that develop from the stem and emerge from there at maximum thickness. They are also incapable of secondary growth. They do have three distinct levels of branching, however. The smallest roots absorb water and nutrients, but are not considered root hairs because palms do not produce them. Palm roots do have the capacity for significant lateral growth; roots have been found over 100 feet from the trunk. Stems: Palm stems vary widely. They can be very thin, as in the Rhapis, or very thick, as in the Canary Island Date. Some keep the leaf bases intact, often referred to as booted. Others develop a slick trunk as the boots shed naturally. Palm trunks can be brown, grey or green, and fibrous, spiny or bumpy. Leaf base scars can develop a distinctive pattern on the trunk. Leaves: Palm leaves are the largest organs in the plant kingdom. There are three main parts to a leaf: the blade, petiole (stem), and leaf base. The blades fall into three main categories: palmate (fan palms), pinnate/bipinnate (feather), or entire leaves. Flowers: Palm flowers are typically very small but bloom on a flower stalk or inflorescence in large numbers and are collectively showy. Most inflorescences emerge among the leaves. Fruit: The palm fruit (and seeds) are usually larger and more noticeable than the flowers. The largest seed of any plant known on earth is the double coconut. Most fruits contain only one seed and have a fleshy or fibrous outer wall that is brightly colored. Planting: When transplanting palms, it is usually advisable to tie the crown up or remove it completely when the species you are transplanting must regenerate roots from the trunk. Cabbage palms are typically ‘hurricane cut’, meaning all the leaves are removed, when they are transplanted. Root balls should have, on average, a one-foot radius from the trunk and be one foot deep. In our heavy clay soils, bracing is not required when planting palms. In sandy soils, palms should be braced for 6-8 months. Thank you for reviewing this information. Schneider Tree Care is committed to preserving and enhancing the quality of your property through tree care education and services. We employ professionally trained and certified arborists who are available to meet with you for a consultation at no charge. If you have any questions or need additional information regarding the health of your trees, please contact us.
October 6, 2019 Computer vision This is the processing of visual information to obtain knowledge. The basic task in this technology is to detect objects in images and video, i.e. to recognize that one picture in a corner shows a car and another shows a computer, keyboard and phone. In robotics, the results of object detection give the robot an understanding of what to do and how to do it, and help it learn (a minimal code sketch of this detection step appears at the end of this article). A logical continuation of detection is tracking: first the object is detected, and then the tracking of its movements begins. Robots need this to understand the visual scene and learn to predict the actions of other objects, which is indispensable, for example, for self-driving cars. Other tasks for computer vision are segmentation of the image (understanding where the floor is, where the wall is and where the door is) and depth estimation. The latter involves understanding the distance to an object and is solved by reconstructing three-dimensional geometry from a series of two-dimensional photographs. Natural language processing Communication with a person is impossible without understanding his or her language. AI specialists break a text down into its parts, individual morphemes and even the emotional coloring of words, and build that analysis into a program. Robots need these technologies; for them, language processing is like a dialogue window with a person, and it is not just about understanding, but also about responding and learning new concepts. While language processing concerns textual information, speech analytics concerns sound. First and foremost, this means speech recognition, which by 2019 had already become solidly mainstream in people’s lives. The next steps are speech synthesis and improving the voice quality of the robot and/or program itself to the level of human communication. Decision-making In other words, this technology can be described as the automation of processes so that they run without human intervention. Since, again, we are talking about a weak AI that is tailored to individual tasks, the technology for decision-making is perhaps the most understandable in its purpose. The authors of the review highlight several applications for such technologies: - navigation, e.g. bypassing obstacles, memorizing and recording the path travelled, localizing itself in space; - learning through demonstrations, when the robot memorizes actions shown to it visually or mechanically; - emotional interaction, for which the machine needs to understand the mood of the person standing in front of it, superimpose it on its own “character” features and produce the result as a facial expression or “emotion”; - automation of machine learning, i.e. reducing human participation in machine learning and partially shifting toward self-learning. Of course, such technologies must be applied together with others: independent navigation together with computer vision, and emotions together with speech analytics. Recommendation systems This technology is remotely similar to decision-making, but analysts have identified it as a separate category. The reason for this is the potential for wide application of recommendation systems in service robotics. We are talking about the supply of goods and services, targeted advertising, and a selection of films and music. In the case of robots, the technology could lead to the spread of robotic waiters or sales consultants.
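As a concrete illustration of the object-detection step described at the top of this article, here is a minimal sketch using a pretrained detector from the torchvision library. The model choice, the 0.8 confidence threshold, and the image file name are illustrative assumptions rather than details from the article.

```python
# Minimal object-detection sketch with a pretrained torchvision model.
# Assumptions (not from the article): Faster R-CNN with a ResNet-50 backbone,
# a 0.8 confidence threshold, and a local image file named "scene.jpg".
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode

image = Image.open("scene.jpg").convert("RGB")
batch = [to_tensor(image)]  # the detector expects a list of CHW tensors in [0, 1]

with torch.no_grad():
    prediction = model(batch)[0]  # dict with 'boxes', 'labels', and 'scores'

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score.item() >= 0.8:  # keep only confident detections
        x1, y1, x2, y2 = box.tolist()
        print(f"class {label.item()}: score {score.item():.2f}, "
              f"box ({x1:.0f}, {y1:.0f}) to ({x2:.0f}, {y2:.0f})")
```

Tracking, the "logical continuation" mentioned above, would then associate these boxes across successive video frames, for example by matching boxes with high overlap from one frame to the next.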
Leukaemia, acute lymphoblastic Acute lymphoblastic leukaemia is a type of cancer that affects the white blood cells. It progresses rapidly and aggressively and requires immediate treatment. Both adults and children can be affected. Acute lymphoblastic leukaemia is very rare, with around 650 people diagnosed with the condition each year in the UK. Half of all cases diagnosed are in adults and half in children. Although rare, acute lymphoblastic leukaemia is the most common type of childhood leukaemia. About 85% of the cases that affect children occur in those younger than 15 (mostly between the ages of two and five). It tends to be more common in males than females. Acute lymphoblastic leukaemia is different to other types of leukaemia, including acute myeloid leukaemia, chronic lymphocytic leukaemia and chronic myeloid leukaemia. All of the blood cells in the body are produced by bone marrow, a spongy material found inside bones. Bone marrow produces specialised cells called stem cells, which have the ability to develop into three important types of blood cells: red blood cells, white blood cells and platelets. Normally, bone marrow doesn't release stem cells into the blood until they are fully developed blood cells. But in acute lymphoblastic leukaemia, large numbers of white blood cells are released before they are ready. These are known as blast cells. As the number of blast cells increases, the number of red blood cells and platelets decreases. This causes the symptoms of anaemia, such as tiredness, breathlessness and an increased risk of excessive bleeding. Also, blast cells are less effective than mature white blood cells at fighting bacteria and viruses, making you more vulnerable to infection. Acute lymphoblastic leukaemia usually starts slowly before rapidly becoming severe as the number of immature white blood cells in your blood increases. Most of the symptoms are caused by the lack of healthy blood cells in your blood supply. In some cases, the affected cells can spread from your bloodstream into your central nervous system. This can cause a series of neurological symptoms (related to the brain and nervous system). If you or your child has some or even all of the symptoms listed above, it's still highly unlikely that acute leukaemia is the cause. However, see your GP as soon as possible, because any condition that causes these symptoms needs prompt investigation and treatment. Other treatments you may need include antibiotics and blood transfusions. In some cases, a bone marrow transplant may also be needed to achieve a cure. Almost all children will achieve remission (a period of time when they're free from symptoms), and 85% will be completely cured. The outlook for adults with acute lymphoblastic leukaemia is less promising. Around 40% of people aged between 25 and 64 will live for five years or more after receiving their diagnosis. In those aged 65 or over, around 15% will live for five years or more after being diagnosed. Cancer Research UK has more detailed survival statistics for acute lymphoblastic leukaemia. Acute leukaemia is a type of cancer which affects certain cells present in the blood: white blood cells, red blood cells and thrombocytes. All of the cells present in the blood are produced by the bone marrow, a spongy material found inside the bones. A common test for this condition is a workup of peripheral blood, which can point doctors toward this diagnosis.
Following this, a bone marrow biopsy may be necessary. This test involves extracting material from the inside of the bone and subsequently analyzing it. Treatment is usually carried out in three stages, known as induction, consolidation and maintenance. The patient must be hospitalized. The patient then receives blood transfusions, and extra care is taken against infections. Following this, chemotherapy (using cytostatic drugs) may be given, with the aim of eliminating the diseased cells. Being immunocompromised (having a weakened immune system) is a possible complication for some patients with acute leukaemia. Patients suffering from acute leukaemia face a high risk of infection. This may be due to the patient's immune system becoming compromised, or due to the suppression of the immune system by the medication usually administered to treat leukaemia. When Hazel Phillips went to see her GP about an ear infection, she suspected something more serious was wrong because of her other symptoms. A blood test confirmed her worst fears: she had acute lymphoblastic leukaemia.
What is surface water management and why is it important? Surface water management is the process of collecting, monitoring and treating surface water to help prevent flooding and pollution. Surface water is generated by many sources including: - Snow melt - Any water you use outside, including washing your car, watering your lawn or emptying a pool. Surface runoff travels in many different ways. A few examples are: - Over land through swales and ditches - Crossing over and beneath roads through culverts - Captured by catch basins (square grates at the edge of the road) and carried underground within sewer pipes The City of Hamilton regulates the quantity and quality of surface water through different methods, mostly stormwater management ponds. Managing surface water is essential because it reduces flood damage and improves local water quality. The watercourse includes: - All ponds, streams, creeks and rivers in Hamilton - All swales, ditches and underground pipes that carry surface water. It is important to take certain actions so as not to pollute our watercourse. Residents should follow these guidelines to help keep our watercourse clean: - Do not litter in or around the watercourse - Do not drain or dispose of swimming pool chemicals and water in the watercourse - Pick up after your dogs and pets - Do not over-fertilize or over-water your lawn/garden - Report illegal dumping by registering a by-law complaint Additional information about watercourses and the natural environment within the City of Hamilton can be found through the Conservation Authority. Municipal Drains are drainage systems that help surface water travel to the watercourse. Dykes, swales, ditches and underground pipes are a few examples of municipal drain systems. These systems have been put in place to reduce flooding. Report a municipal drain that needs maintenance Find out more about municipal drains Cross culverts are structures that help surface water travel below a road, railroad, trail or similar barrier from one side to the other. Culverts play a key role in managing surface water and preventing flooding. Report a cross culvert that is clogged and needs maintenance
Researchers recently discovered that the sea lamprey, a modern representative of ancient jawless vertebrates, fights invading pathogens by generating up to 100 trillion unique receptors. These receptors, referred to as VLRs, are proteins and function like antibodies and T-cell receptors, the sentinels of the immune system in all jawed vertebrates, including humans. The results, reported in the Dec. 23 Science by Zeev Pancer at the University of Maryland Biotechnology Institute's Center of Marine Biotechnology in Baltimore, and his colleagues, showed that ancient vertebrates, both jawed and jawless, used more than one strategy to develop an immune system that would recognize and defend against their myriad bodily invaders. They studied a type of immune defense mechanism called "adaptive" because, as the name implies, it adapts to the incredible number of pathogens in the environment by producing 100 trillion potentially different receptor proteins in order to recognize at least one of the invader's molecules. Recognition of the pathogen is a first step in mounting a defensive response against it. Some 450 million years ago, both jawed and jawless vertebrates began relying on cells called lymphocytes to support the burgeoning adaptive immune system. But within the lymphocytes of the two types of animals, very different mechanisms evolved to reach very similar ends. Comparing the two immune systems is the basis of Pancer's research. As in jawed vertebrate immune systems, he found, the diversity of the VLR proteins arises when thousands of genetic modules go through multiple rounds of random mixing, insertion and deletion. Each new VLR gene functions as a blueprint for the corresponding VLR protein. Thus, through a mixture of chance and necessity, both jawed and jawless vertebrates stay ahead of the pathogens in their ever-evolving battle. To test the adaptability of this alternative immune mechanism, the researchers immunized lampreys with the anthrax-causing bacterium, a pathogen not normally encountered by fish of any type. Within four weeks, the lamprey immune system had recognized the spores as foreign and responded by producing anthrax-specific VLR proteins that circulated throughout the body. "By understanding the development and role of the lamprey immune system we can learn about our own immune system and how it functions," said Pancer. "Comparing these two systems is an unparalleled way to look at a basic biological process and also may hold promise for novel diagnostic tools." Pancer credits the National Science Foundation, which supported this work, with enabling new discoveries that have the potential to unravel such mysteries of biology.
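To make the combinatorial scale described above concrete, here is a back-of-the-envelope sketch. The module counts are hypothetical, chosen only to show how randomly assembling a receptor from modest pools of gene modules reaches roughly 100 trillion possibilities; they are not figures from the study.

```python
# Hypothetical illustration of combinatorial receptor diversity.
# Neither number below comes from the lamprey study; they are chosen
# only to show how module mixing reaches ~100 trillion (1e14) variants.
slots = 8      # assumed number of variable module positions per receptor gene
variants = 56  # assumed number of interchangeable modules per position

total = variants ** slots
print(f"{total:.2e} possible receptors")  # ~9.7e13, on the order of 10^14
```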
In our solar system, only one planet is blessed with an ocean: Earth. Our home world is a rare, blue jewel compared with the deserts of Mercury, Venus, and Mars. But what if our sun had not one but two habitable ocean worlds? Astronomers have found such a planetary system orbiting the star Kepler-62. This five-planet system has two worlds in the habitable zone — the distance from their star at which they receive enough light and warmth that liquid water could theoretically exist on their surfaces. Modeling by researchers at the Harvard-Smithsonian Center for Astrophysics (CfA) suggests that both planets are water worlds, their surfaces completely covered by a global ocean with no land in sight. “These planets are unlike anything in our solar system. They have endless oceans,” said lead author Lisa Kaltenegger of the Max Planck Institute for Astronomy and the CfA. “There may be life there, but could it be technology-based like ours? Life on these worlds would be under water with no easy access to metals, to electricity, or fire for metallurgy. Nonetheless, these worlds will still be beautiful, blue planets circling an orange star — and maybe life’s inventiveness to get to a technology stage will surprise us.” Kepler-62 is a type K star slightly smaller and cooler than our sun. The two water worlds, designated Kepler-62e and -62f, orbit the star every 122 and 267 days, respectively. They were found by NASA’s Kepler spacecraft, which detects planets that transit, or cross the face of, their host star. Measuring a transit tells astronomers the size of the planet relative to its star. Kepler-62e is 60 percent larger than Earth, while Kepler-62f is about 40 percent larger, making both of them “super-Earths.” They are too small for their masses to be measured, but astronomers expect them to be composed of rock and water, without a significant gaseous envelope. As the warmer of the two worlds, Kepler-62e would have a bit more clouds than Earth, according to computer models. More distant Kepler-62f would need the greenhouse effect from plenty of carbon dioxide to warm it enough to host an ocean. Otherwise, it might become an ice-covered snowball. “Kepler-62e probably has a very cloudy sky and is warm and humid all the way to the polar regions. Kepler-62f would be cooler, but still potentially life-friendly,” said Harvard astronomer and co-author Dimitar Sasselov. “The good news is — the two would exhibit distinctly different colors and make our search for signatures of life easier on such planets in the near future,” he added. The discovery raises the intriguing possibility that some star in our galaxy might be circled by two Earth-like worlds — planets with oceans and continents, where technologically advanced life could develop. “Imagine looking through a telescope to see another world with life just a few million miles from your own. Or, having the capability to travel between them on a regular basis. I can’t think of a more powerful motivation to become a space-faring society,” said Sasselov. Kaltenegger and Sasselov’s research has been accepted for publication in The Astrophysical Journal.
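The transit measurement mentioned above rests on a simple relation: the fractional dip in the star's brightness is approximately the square of the planet-to-star radius ratio. Here is a short sketch using the planet sizes quoted in the article; the stellar radius (about 0.64 solar radii) is an assumed value for illustration.

```python
# Transit-depth sketch: fractional dimming ~ (R_planet / R_star)^2.
# Planet radii (relative to Earth) are from the article; the stellar
# radius of ~0.64 solar radii is an assumed, illustrative value.
EARTH_RADIUS_IN_SOLAR_RADII = 0.009168

def transit_depth(planet_radius_earths, star_radius_suns):
    ratio = planet_radius_earths * EARTH_RADIUS_IN_SOLAR_RADII / star_radius_suns
    return ratio ** 2

for name, radius in [("Kepler-62e", 1.6), ("Kepler-62f", 1.4)]:
    print(f"{name}: {transit_depth(radius, 0.64) * 100:.3f}% dip in starlight")
# ~0.05% for Kepler-62e and ~0.04% for Kepler-62f: tiny but detectable dips.
```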
Today is the 71st birthday of Noam Chomsky, considered by many to be the most influential linguist of the 20th century. Chomsky revolutionized the field of theoretical linguistics in 1957 with the publication of Syntactic Structures, a book that challenged the prevailing theory that humans learn language through training and experience, much as they learn other habits. Instead, Chomsky proposed that people have an innate ability to understand the rules of grammar, which explains how even young children can create grammatically correct sentences they have never heard before. Chomsky theorized that these rules, known as "grammatical transformations," are somehow hardwired into the brain and are basically the same for every language. The theory remains controversial. Chomsky, who has spent most of his career as a professor at the Massachusetts Institute of Technology in Cambridge, is well-known for his left-leaning political views; in the '60s and early '70s, he was a vocal opponent of the United States' involvement in Vietnam. [Source: Britannica Online]
Why it matters - 55% of Pacific’s population (excluding Papua New Guinea) lives less than 1 km from the sea. - Countries in the Pacific are amongst the most vulnerable in the world due to severe weather and natural hazards, strong dependence on their natural resources and the limited diversification of their economies. - Climate change impacts already directly threaten the availability of food and water, the productivity of ecosystems and breeding grounds, reef and fisheries resources, and the effectiveness of natural coastal defenses. - Climate change is expected to have a significant impact on the economic backbones of island communities, including fisheries, crop exports and tourism. - On the basis of the Intergovernmental Panel on Climate Change (IPCC) scenarios, the Pacific’s high vulnerability could lead to widespread food and water insecurity, increased health risks, lack of access to social services and even forced displacements in some cases. - 183 countries have ratified the Paris Agreement to date, including all of the Pacific Island Countries (PICs) that are parties to the United Nations Framework Convention on Climate Change (UNFCCC). - Even if the PICs account for only 0.03% of the world’s total greenhouse gas emissions, they are strongly committed to achieving their Nationally Determined Contributions (NDCs) and successfully transitioning towards zero-net emission development pathways. - PICs have underscored the importance of adaptation measures in their particular context, calling for significant financial and technical support in that regard. - Addressing these Pacific challenges requires multilayered action, at all governance levels. Local biophysical, social, economic, political and cultural circumstances must prevail when designing adaptation and mitigation options. What can be done? - Without stronger ambitions at the international level and tangible, greater and faster progress in reducing greenhouse gas emissions, climate change adaptation solutions become fewer, less effective and more costly. Some of the adaptation solutions that we have now, might not be available in the future. - The Pacific region requires more support and coordination for increased access to climate adaptation and mitigation data and knowledge. Strengthened regional capacity in these areas will not only help countries better fund and manage their adaptation and mitigation programmes, but also enable the Pacific region as a whole to be one of the major players in the fight against climate change, at the global level. - In order to respond appropriately to the needs of Pacific populations in their fight against climate change, more support needs to be provided to adaptation projects. PICs are strongly committed to transition toward zero-net emission economies, as per their climate commitments, but the most existential challenge they face lies in their capacity to adapt and build climate-resilient communities.
The immune system is the body's natural defense system that helps fight infections. The immune system is made up of antibodies, white blood cells, and other chemicals and proteins that attack and destroy substances such as bacteria and viruses that they recognize as foreign and different from the body's normal healthy tissues. The immune system is also responsible for allergic reactions and allergies, which may occur when the immune system incorrectly identifies a substance (allergen), such as pollen, mold, chemicals, plants, and medicines, as harmful. Sometimes the immune system also mistakenly attacks the body's own cells, which is known as an autoimmune disease.
Martin Luther King, Jr., was a black clergyman from Atlanta, Georgia. When King was a child, he learned that black people and white people did not mix in public places. Black people sat in different parts of restaurants and movie theaters. Black people sat at the back of the bus. Black and white children went to different schools. This kind of separation is called segregation. King loved to study. He was a good student and went to college at a young age. He was only fifteen years old. When he finished college, he began to fight segregation. He did not believe in violence. He believed in peace. He helped black people to protest in peace. They went on marches in peace. King also wanted equality for everybody. He wanted black and white men and women to have an equal chance in the United States. This is called the civil rights movement. In 1963, King was the leader of the civil rights march on Washington, D.C. Thousands of people listened to his famous speech. It begins, "I have a dream." In 1964, Martin Luther King, Jr., was the youngest person to get the Nobel Peace Prize. This award is for people who try to make peace in the world. In 1968, an assassin killed King. He was only thirty-nine years old. His birthday, January 15, is a national holiday in the United States.
1. What was Martin's big discovery when he was a child? a. Black people didn't go to the theatre. b. There was a strict separation between black and white people. c. White people didn't study at school.
2. At what age did Martin finish his studies? a. At 15 b. At 16 c. At 17
3. What could King not accept?
4. Where did King make his famous speech beginning "I have a dream"? a. At college. b. In Washington D.C. c. On the stage, when he got the Nobel Prize.
5. What was special about King's getting the Nobel Prize? a. He was the oldest person. b. He was the first who had ever gotten it. c. He was the youngest person to get it.
6. What date, connected with Martin, became a national holiday in the USA? a. The day of his death. b. The day of his most famous march. c. The day of his birth.
7. What is the antonym for the word 'violence' (paragraph 2)?
Product #: TCR3955 MASTERING KINDERGARTEN SKILLS Mastering Skills takes a fresh approach to the mastery of grade-specific skills. Each book uses a wide range of activities to spark students' interest in learning. As students complete the activities, they develop the skills they need to meet academic standards in reading, writing, math, social studies and science. Both teachers and parents can use the books to introduce new concepts, to assess learning and skill development, and to reinforce familiar knowledge. The versatile activities can be used for individual practice, test preparation, or homework assignments. Complete answer keys are provided. The activities target standards in these areas: - phonemic awareness - sight words and vocabulary development - number concepts - problem solving - map skills - relationships among organisms and the environment
Scientists from the University of Copenhagen are exploring the possibility of manipulating the DNA of moss so that it can grow on Mars and astronauts can use it to create their own medicines on the Red Planet. The original idea was conceived by a Danish start-up called TychoBio, which has successfully created moss that makes chemicals such as ingenol mebutate – a substance that is used to fight skin cancer. The university members have adapted this idea to see if they can make moss for Mars. Victoria Sosnovtseva, from the university, is quoted by New Scientist as saying: "Why don't we use moss to make components that are useful for space exploration?" One of the biggest challenges the researchers face is genetically modifying the plant so that it can withstand Mars' harsh conditions. The first stumbling block they encountered was the planet's soil, which has few minerals and plenty of poisonous salts. To examine the effects of this, the team took moss from Pu'unene, a volcanic cinder cone in Hawaii which has soil similar to but not as bad as that on Mars. Several weeks after planting the new moss, they noted that it was growing without problem. Secondly, they tested it in extreme cold as temperatures on Mars can plunge to -55C. They discovered that the genetically modified moss was capable of growing at -20C but died at -60C. While they are still some way off making the moss Mars-ready, they remain optimistic that it is a feasible feat. The New Scientist reports that the next step for them is to modify it with UV radiation protection and give it the ability to break down the poisonous salts in Mars' dirt. The project was presented at the Giant Jamboree of the International Genetically Engineered Machine Foundation in Boston.
The Dunlin is among the most cosmopolitan and well studied of all the small sandpipers. It is a familiar species throughout the year, either in its striking breeding plumage of black belly and rufous back (hence its previous name, Red-backed Sandpiper) or during winter when it is gray and nondescript but occurs in flocks of thousands or tens of thousands. As many as nine races of Dunlin breed in the Holarctic, three of them in North America, where this monogamous, territorial species breeds on subarctic and arctic coastal tundra from southwestern Alaska north and east to James Bay, Canada. During winter it occurs mostly on large estuaries along the Pacific and Atlantic coasts of the United States and northern Mexico; some Alaska birds spend the winter in coastal East Asia. Clams, worms, insect larvae, and amphipods figure prominently in the diet of this species, reflecting its tie to coastal and intertidal areas throughout most of its annual cycle. In some areas such as the Central Valley of California and states bordering the Gulf of Mexico, substantial numbers of Dunlin move inland from the coast in midwinter. The Pacific Coast population (C. a. pacifica), numbering about half a million birds, is substantially larger than the other two North American populations. Mortality in all populations appears to be greatest during winter, particularly from avian predators such as falcons. Despite the Dunlin's broad geographic range, populations of several races appear to have declined in recent decades, perhaps because of continued loss and degradation of wetland habitats.
Serum sickness describes a delayed immune system response, either to certain kinds of medications or to antiserum (given after a person has been bitten by a snake or to counter exposure to rabies, for example). Serum is the clear fluid part of blood. Serum sickness is similar to an allergy, in that the body mistakenly identifies a protein from the antiserum or medication as harmful and activates the immune system to fight it off. Today, the most common cause of serum sickness is the antibiotic penicillin. Serum sickness will usually develop within 7 to 10 days after initial exposure, but sometimes it can take as long as 3 weeks. If you are exposed again to the substance, serum sickness tends to develop faster (within 1 to 4 days), and only a very small amount of the substance may cause an intense response. Signs and Symptoms The first signs of serum sickness are redness and itching at the injection site. Other signs and symptoms include: - Skin rash, hives - Joint pain - Malaise (feeling unwell) - Swollen lymph nodes - Diarrhea, nausea, and abdominal cramping What Causes It? Antigens, proteins the body mistakenly identifies as harmful, cause your immune system to produce antibodies. These antibodies bind with the antigens and build up on the layers of cells that line the heart, blood vessels, lymph vessels, and other body cavities. This causes inflammation and other symptoms of serum sickness. Penicillin is the most common cause of serum sickness. Other causes include: - Other antibiotics, including cephalosporins - Fluoxetine (Prozac) used for depression - A class of diuretics called thiazides - Products that contain aspirin - Many other medications - Snake venom antiserum - Bee or wasp sting (rare) Who is Most At Risk? You are more likely to suffer from serum sickness if: - You are injected with one of the drugs or antitoxin known to cause serum sickness. - You need a large amount of snake venom antiserum. - You have previously been exposed to a drug or antitoxin known to cause serum sickness. What to Expect at Your Provider's Office Your doctor will look for typical symptoms and ask if you have been recently exposed to any antiserum. Your doctor may order blood and urine tests. - If you know you are sensitive to a particular drug or antiserum, you should tell your health care provider before you get any kind of injection. - A provider can perform skin tests to check for serum sensitivity before giving antiserum. - If you are sensitive to an antiserum, your provider may use a method that desensitizes you to the antiserum, at least temporarily. Treatment for serum sickness is aimed at reducing symptoms. Your doctor may prescribe antihistamines or analgesics (NSAIDs), along with topical medications to relieve itching or rash. In serious cases, your doctor may prescribe corticosteroids, such as prednisone. Normally, there is no need for hospitalization. Fever typically gets better within 48 to 72 hours of treatment. Complementary and Alternative Therapies If you suspect you have serum sickness, you should see a doctor immediately and receive conventional medical treatment. Some complementary and alternative therapies (CAM) may support conventional treatment by helping to reduce inflammation and stabilize your immune system, but no scientific studies have been done on the effectiveness of CAM therapies for serum sickness. Although certain CAM therapies may help relieve symptoms, others could make them worse. 
Take any herb, supplement, or medication only under your doctor's supervision. Nutrition and Supplements The following nutrients may help support your immune system and reduce allergic reactions, though there is no scientific evidence they will be effective for serum sickness. As noted, some may make serum sickness worse. Talk to your doctor before taking any of these supplements. Following these nutritional tips may help reduce risks and symptoms: - Eliminate all suspected food allergens, including dairy, wheat (gluten), soy, corn, preservatives, and chemical food additives. Your health care provider may want to test you for food allergies. - Eat foods high in B-vitamins and iron, such as whole grains (if no allergy), dark leafy greens (such as spinach and kale), and sea vegetables. - Eat antioxidant-rich foods, including fruits (such as blueberries, cherries, and tomatoes), and vegetables (such as squash and bell pepper). - Avoid refined foods, such as white breads, pastas, and sugar. - Eat fewer red meats and more lean meats, cold-water fish, tofu (soy, if no allergy), or beans for protein. - Use healthy oils for cooking, such as olive oil or vegetable oil. - Reduce significantly or eliminate trans-fatty acids, found in commercially-baked goods such as cookies, crackers, cakes, and donuts. Also avoid French fries, onion rings, processed foods, and margarine. - Avoid coffee and other stimulants, alcohol, and tobacco. - Drink 6 to 8 glasses of filtered water daily. - Exercise moderately for 30 minutes daily, 5 days a week. Traditional Chinese Medicine and acupuncture can help lessen the body's tendency toward allergic hypersensitivity reactions. Massage DO NOT use massage to treat serum sickness, as it may promote inflammation and lower blood pressure. Serum sickness usually improves in 7 to 10 days, with full recovery in 2 to 4 weeks. However, it may lead to nervous system disorders and a life-threatening allergic reaction called anaphylaxis, so it is important to get medical treatment. Health care providers should monitor seriously ill people for rare instances of myocarditis (inflammation of the heart muscle) and peripheral neuritis (nerve inflammation).
Regular Exercise Is Important When You Have Heart Disease When you have heart disease, exercise is an important part of keeping your condition under control. Exercise causes the heart to pump blood into the circulation more efficiently, improving the perfusion of tissues and organs and increasing oxygen delivery. Exercise protects against metabolic syndrome, lowers blood pressure, reduces the tendency of the blood to clot, and lowers stress, all of which contribute to improved cardiovascular health. For patients with heart disease: get yourself prepared to work out As with anything, it is best to check with your doctor to make sure you are healthy enough to start exercising. Appropriate exercise for patients with heart disease - Patients with a heart condition should be assessed by heart specialists. - Cardiologist – evaluates heart rate and rhythm during exercise and explains the signs and symptoms of the condition and its complications to the patient - Cardiac rehabilitation doctor – designs an appropriate exercise program to meet your needs - The length of exercise should be around 30 minutes per day, at least 5 days a week. It is important to warm up before exercising and cool down after the session. Also, the patient should find a friend to exercise with and have a telephone within reach. - Do not exercise right after having a meal. You should wait at least 1½ hours after having a meal. Do not exercise if you are not feeling well. - Drink enough water during and after exercising to replace the fluid you lose. - Choose appropriate places for exercise – good air circulation, not too hot or too cold. Wear appropriate workout clothes and footwear. - Bring your medication with you – nitroglycerin – to prevent angina attacks. - Brisk walking – is generally a safe way to exercise for patients with a heart condition. It is low impact, requires minimal equipment, can be done at any time of day and can be performed at your own pace. Keep a moderately intense pace: your heart rate should be 60-70% of your maximum heart rate (maximum heart rate = 220 – age in years; see the short calculation sketch at the end of this article). Patients should get 30 minutes of exercise a day, five days a week. - Running – has a positive influence on the heart and takes less time than walking. Running enables the body to use oxygen to the maximum and the heart to work more efficiently, pumping more blood with every beat. Moreover, it can also reduce stress and improve sleep. - Swimming – improves heart and lung capacity, but is gentle on your joints. It can improve cardiovascular fitness as well as muscle strength. As swimming places less demand on the heart than running and other sports, you will not get too tired. - Tai chi – is a slow and gentle exercise and does not leave you breathless, so it is an appropriate exercise for patients with heart conditions. It addresses the key components of fitness – muscle strength, flexibility, and balance. It helps improve circulation and reduce fall risk. Being active when you have heart disease is important. The most important thing to remember is that patients must know their limits. It is important to pay attention to warning signs – chest pain, dizziness, or shortness of breath. Stop what you are doing and consult a doctor right away. Most importantly, the patient should not exercise alone and should have a telephone within reach. Dr. Anusith Tunhasiriwet, a cardiologist, Bangkok Heart Hospital
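The heart-rate guidance above is simple arithmetic; here is a minimal sketch using exactly the formula quoted in the article (maximum heart rate = 220 minus age, with a 60-70% training band).

```python
# Target heart-rate zone from the article's formula:
# maximum heart rate = 220 - age, training band = 60-70% of maximum.
def target_zone(age_years):
    max_hr = 220 - age_years
    return 0.60 * max_hr, 0.70 * max_hr

low, high = target_zone(55)  # example: a 55-year-old patient
print(f"Aim for roughly {low:.0f}-{high:.0f} beats per minute")  # 99-116 bpm
```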
Anatomy 2040 CC Chapter 7 Terms in this set (96) 2 of the main divisions of the skeletal system. Axial skeleton and appendicular skeleton. The articulation between the right and left parietal bones is formed by the The right and left maxillae unite to form the ______ jaw. Cranial bones perform ______ functions. Provide attachment sites for several jaw, head and neck muscles and surround and protect the brain. A _____ is an immovable joint forming the boundary between cranial bones. Which feature of the occipital bone articulates with the first cervical vertebra. What are parts of the ethmoid bone. Middle nasal conchae and superior nasal conchae The 3 curved depressions in the floor of the cranial cavity are called the _________. Functions of the paranasal sinuses. To humidify and warm inhaled air Provide resonance to the voice Lighten the weight of skull bones. Bones that can clearly be seen on a superior view of the skull. The lambdoid suture forms the boundary between the _______ and _______ bones. Which bones contain alveolar processes. Maxillae and mandible The appendicular skeleton consists of the bones of the ______ limbs and ______ limbs, as well as the ______ and ________ girdles. Upper and lower; pectoral and pelvic The ______ ______ help to lighten the weight of certain skull bones and provide resonance to the voice. Which structure forms the articulation between the parietal and temporal bones. What are functions of the axial skeleton. Protects organs and forms a framework The largest foramen in the skull, visible on an inferior (basal) view, is ________. The thoracic cage consists of the: Ribs, thoracic vertebrae, and sternum. What bone forms the lower jaw? The bridge of the nose is formed by the ______ bones. What are the functions of the vertebral column? Houses and protects the spinal cord Helps to transfer axial skeletal weight to the lower limbs Provides vertical support for the body. The small, paired ______ bones help to form part of the medial wall of the orbit. The frontal bone and parietal bones are connected by the: The functions of the facial bones. Protect the entrance to the digestive and respiratory systems Provide attachment sites for the facial muscles Form the face. Which facial bone helps to form the cheek and lateral part of the orbit? The vertebral canal contains the spinal _____, while the intervertebral foramina allow for passage of the spinal ______. Describe false ribs. They indirectly articulate with the sternum through a shared costal cartilage. They articulate with the thoracic vertebrae. Which bones articulate at the mandibular fossa to form the temporomandibular joint? Temporal bone and mandible Which characteristic feature of sacral vertebrae represents the remnants of the horizontal lines of fusion between the five vertebrae? The bones and cartilage that enclose the nasal cavity and paranasal sinuses are called the: Identify the auditory ossicles housed within the petrous part of the temporal bone. Malleus, stapes, and incus The maxillae, ethmoid, frontal, and sphenoid bones contain air-filled chambers known as the ____ ____. The temporal bone and mandible articulate to form the temporomandibular joint at the ________ The ______ gland is housed within the sella turcica of the sphenoid bone. What is the correct name of the large spinous process of C7, which is easily seen and palpated through the skin inferior to the neck.
Which bone associated with the skull allows for the attachment of tongue and larynx muscles and ligaments? A thin, pointed process located on the posteroinferior surface of the temporal bone is called the _____ process. The intervertebral discs are composed of the following two structures: Nucleus pulposus and anulus fibrosus The lateral wall of the orbit is formed by: Greater wing of sphenoid bone Orbital surface of the zygomatic bone Zygomatic process of frontal bone. Which part of the orbit is formed by the frontal process of the maxilla, the lacrimal bones, and the orbital plate of the ethmoid bone? Medial wall of orbit The axis (C2) contains a prominent process called ________, which acts as a pivot for the rotation of both the atlas and skull. The type of vertebrae that have small bodies, short bifid spinous processes, and transverse foramina within their transverse processes are called _________. Which parts of the skull help to form the lateral walls of the nasal complex? Lacrimal bone, ethmoid bone, inferior nasal conchae, and maxillae Which cranial lobe is housed within the middle cranial fossa? Which areas of the skull are formed in part by the frontal bone? Forehead, calvaria, and orbits The entrance to the external acoustic meatus is located in the ______ part of the temporal bone. The roof of the orbit is formed by _________ Orbital plate of the frontal bone and the lesser wing of the sphenoid bone. Which bone forms the posterior portion of the hard palate? The mandibular foramen acts as a passageway for the blood vessels and nerves that innervate the: Which parts of the skull help to form the floor (inferior border) of the nasal complex? Palatine processes of the maxillae and horizontal plates of the palatine bones. The costal facets or costal demifacets present on the thoracic vertebrae represent sites of articulation with bones called _________. The crista galli of the ______ bone acts as a point of attachment for the falx cerebri. The inferior lateral walls and part of the floor of the cranium are formed by the: Which of the following parts of the skull help to form the roof (superior border) of the nasal complex? Sphenoid bone, cribriform plate of ethmoid bone, and frontal bone. Which regions of the skull are formed by the parietal bones? Roof of the cranium and the lateral walls. Which areas of the skull are formed in part by the ethmoid bone? Anteromedial floor of cranium, roof of the nasal cavity, part of the medial wall of each orbit, and part of the nasal septum. This opening in the lacrimal bone provides a passageway for the nasolacrimal duct: Which of the following are the primary curves present in the vertebral column of a newborn? Thoracic curvature and sacral curvature The sacral canal terminates in the inferior opening called the sacral _______, which represents an area where the laminae of the last sacral vertebra failed to fuse. The clavicular notches of the manubrium articulate with the: The zygomatic arch is formed by the fusion of the zygomatic process of the temporal bone to the ______ process of the zygomatic bone. Which cranial fossa, formed by the frontal bone, ethmoid bone, and lesser wings of the sphenoid bone, houses the frontal lobes of the cerebrum? Anterior cranial fossa The coronoid process of the mandible is the site of insertion for the: Which area is part of the temporal bone? Which cranial fossa, formed by the occipital, temporal, and parietal bones, houses the cerebellum and part of the brainstem?
Posterior cranial fossa Which structure forms both the floor of the nasal cavity and part of the roof of your mouth? An acupuncture needle accidentally inserted through the sternal foramen may puncture the organ called the: The _____ _____ on the internal surface of the frontal bone is an attachment site for the meninges. The internal carotid artery passes through the following opening in the temporal bone: Which part of the maxillae helps to form the majority of the hard palate? The vertical plate of the ______ articulates with the perpendicular plate of the ethmoid bone to form the nasal septum. Which features of the sphenoid bone allow for the attachment of jaw muscles? Medial and lateral pterygoid plates Which cranial nerve passes through the optic canal of the sphenoid bone? Optic nerve (CN II) Which part of the orbit is formed primarily by the sphenoid bone? Posterior wall of orbit Which opening in the petrous part of the temporal bone acts as the passageway for nerves and blood vessels supplying the inner ear? Internal acoustic meatus What are the lateral projections on both sides of the vertebral arch? Which part of the orbit is formed primarily by the orbital surface of the maxilla, with contributions from the zygomatic bone and orbital process of the palatine bone? Floor of the orbit Which secondary curve of the vertebral column develops as a child learns to stand and walk? The perpendicular plate and vomer come together to form the: The bump you feel posterior to your earlobe on your lateral skull corresponds to which structure of the temporal bone? Which type of vertebrae have thick, oval shaped bodies, and short, thick, and blunt spinous processes? What is the name of the palpable horizontal ridge formed by the articulation of the manubrium and body of the sternum commonly used as a landmark for the second rib? The olfactory nerves (CN I) pass through the cribriform foramina in the cribriform plate of the: Which nerve passes through the hypoglossal canal of the occipital bone as it travels to supply the tongue muscles? Hypoglossal nerve (CN XII) Which bone contains grooves on its internal surface formed from the impressions of the venous sinuses? An exaggerated thoracic curvature directed posteriorly that often results from osteoporosis is known as the "hunchback" or: Incomplete fusion of the upper jaw results in a: The right and left halves of this cranial bone are united by the metopic suture, which fuses and disappears by age 2. This type of skull has a thin, sharp supraorbital margin, little or no superciliary arches, a small and light mandible and a pointed, triangular-shaped mental protuberance: Which part of the sternum has articular costal notches representing the attachment points for the costal cartilage of ribs 2-7? The inferior portion of the cranium composed of portions of the ethmoid, sphenoid, occipital and temporal bones is called the: A child with a very elongated, narrow skull shape displays the effects of craniosynostosis of which cranial suture? Which bones articulate with the sphenoid bone at the pterion region of the lateral skull. Frontal, parietal, and temporal bones. These parts of the ethmoid bone increase airflow turbulence in the nasal cavity to allow air to be properly moistened and cleaned by the nasal mucosa: The coccyx projects more inferiorly in: Which part of the ethmoid bone helps to form the nasal septum? T/F The first chordates arose during the Permian Period.
What type of blood is most commonly used for laboratory tests?: Tissue fluid may also be called: What does the spinal cord innervate?
Insects reproduce rapidly. They are capable of building up tremendous populations in a few years or even a few months. Fortunately, natural enemies of insects, such as pathogens, parasites and predators, usually prevent populations of insects from reaching outbreak proportions. The regulatory role of natural enemies is referred to as biological or natural control. Biological control may also include the release of predators or parasites, and the application of insect pathogens, such as microbial insecticides. An understanding of the natural enemies that regulate insect populations is critical to the success of landscapers and nursery managers because destruction of natural enemies will most certainly result in outbreaks of insects that feed on ornamental landscape plants. Each insect species has a complex of viruses, bacteria, fungi and protozoans capable of infecting individuals and producing disease. Some of the more spectacular diseases are the viral infections of caterpillars that leave thousands of flaccid, black caterpillars hanging dead from tree branches, and the fungal diseases of grasshoppers and flies that cause the insects to climb up the plants, where they die clinging to the very top. Some pesticides have adverse effects on these pathogens of insects. In particular, many fungicides suppress important fungal pathogens of insects. Parasites of insects live in or on the bodies of their hosts, where they feed during at least part of their life cycle. Many insect parasites are referred to as parasitoids because they consume a large portion of their host, eventually killing it. Most insect parasites are flies, bees and wasps. There are at least six important families of fly parasites and 10 families of important wasplike parasites. In just one family of parasitic wasps, the ichneumons, more than 3,100 species have been identified in North America. Most ichneumon species are specialized to parasitize only one or a few host insect species. Insect parasites are very susceptible to insecticides, and some may also be killed by other pesticides, such as fungicides and herbicides. The final group of natural enemies is the predators. Predators usually feed on insects smaller than themselves, ingesting one or more for a single meal. In general, they are very active insects that seek out other insects to feed on. Some of the more important families of predators are ground beetles, tiger beetles, ladybird beetles, social wasps, lacewings, some stinkbugs, assassin bugs, nabid bugs, robber flies, syrphid flies and some midge flies. A tremendous number of species of predaceous insects exist. For example, more than 2,500 species of ground beetles live in our area. Like parasites, predaceous insects are susceptible to insecticides. In some cases, they are far more sensitive to insecticides than the plant-feeding insects they prey on. Homeowners and landscapers should avoid unnecessary destruction of natural enemies. Avoid using insecticides unless you are certain they will give you the necessary reduction of insect pest numbers. Applying insecticides to resistant life stages of insects, such as the eggs of many insects, adults of armored scales or gall insects inside plant tissue, may kill off natural enemies and so result in increased numbers of plant-feeding insects. Also, insecticide drift may destroy natural enemies on other plants in the area. In general, the use of pesticides destroys natural enemies and interferes with natural control.
Use cultural and biological controls whenever possible to prevent the need for insecticides.
What is a scatter plot?
A scatter plot (aka scatter chart, scatter graph) uses dots to represent values for two different numeric variables. The position of each dot on the horizontal and vertical axis indicates values for an individual data point. Scatter plots are used to observe relationships between variables.

The example scatter plot above shows the diameters and heights for a sample of fictional trees. Each dot represents a single tree; each point's horizontal position indicates that tree's diameter (in centimeters) and the vertical position indicates that tree's height (in meters). From the plot, we can see a generally tight positive correlation between a tree's diameter and its height. We can also observe an outlier point, a tree that has a much larger diameter than the others. This tree appears fairly short for its girth, which might warrant further investigation.

When you should use a scatter plot
Scatter plots' primary uses are to observe and show relationships between two numeric variables. The dots in a scatter plot not only report the values of individual data points, but also patterns when the data are taken as a whole.

Identification of correlational relationships is common with scatter plots. In these cases, we want to know, if we were given a particular horizontal value, what a good prediction would be for the vertical value. You will often see the variable on the horizontal axis called the independent variable, and the variable on the vertical axis the dependent variable. Relationships between variables can be described in many ways: positive or negative, strong or weak, linear or nonlinear.

A scatter plot can also be useful for identifying other patterns in data. We can divide data points into groups based on how closely sets of points cluster together. Scatter plots can also show if there are any unexpected gaps in the data and if there are any outlier points. This can be useful if we want to segment the data into different parts, like in the development of user personas.

Example of data structure
In order to create a scatter plot, we need to select two columns from a data table, one for each dimension of the plot. Each row of the table will become a single dot in the plot, positioned according to the column values.

Common issues when using scatter plots
When we have lots of data points to plot, we can run into the issue of overplotting. Overplotting is the case where data points overlap to a degree where we have difficulty seeing relationships between points and variables. It can be difficult to tell how densely packed data points are when many of them are in a small area. There are a few common ways to alleviate this issue. One alternative is to sample only a subset of data points: a random selection of points should still give the general idea of the patterns in the full data. We can also change the form of the dots, adding transparency to allow overlaps to be visible, or reducing point size so that fewer overlaps occur. As a third option, we might even choose a different chart type, like the heatmap, where color indicates the number of points in each bin. Heatmaps in this use case are also known as 2-d histograms.

Interpreting correlation as causation
This is not so much an issue with creating a scatter plot as it is an issue with its interpretation. Simply because we observe a relationship between two variables in a scatter plot, it does not mean that changes in one variable are responsible for changes in the other.
This gives rise to the common phrase in statistics that correlation does not imply causation. It is possible that the observed relationship is driven by some third variable that affects both of the plotted variables, that the causal link is reversed, or that the pattern is simply coincidental. For example, it would be wrong to look at city statistics for the amount of green space they have and the number of crimes committed and conclude that one causes the other; doing so ignores the fact that larger cities with more people will tend to have more of both, and that they are simply correlated through that and other factors. If a causal link needs to be established, then further analysis to control or account for the effects of other potential variables needs to be performed, in order to rule out other possible explanations.

Common scatter plot options

Add a trend line
When a scatter plot is used to look at a predictive or correlational relationship between variables, it is common to add a trend line to the plot showing the mathematically best fit to the data. This can provide an additional signal as to how strong the relationship between the two variables is, and whether there are any unusual points that are affecting the computation of the trend line. (A minimal code sketch at the end of this section shows one way to build such a plot.)

Categorical third variable
A common modification of the basic scatter plot is the addition of a third variable. Values of the third variable can be encoded by modifying how the points are plotted. For a third variable that indicates categorical values (like geographical region or gender), the most common encoding is through point color. Giving each point a distinct hue makes it easy to show membership of each point to a respective group.

One other option that is sometimes seen for third-variable encoding is that of shape. One potential issue with shape is that different shapes can have different sizes and surface areas, which can have an effect on how groups are perceived. However, in certain cases where color cannot be used (like in print), shape may be the best option for distinguishing between groups.

Numeric third variable
For third variables that have numeric values, a common encoding comes from changing the point size. A scatter plot with point size based on a third variable actually goes by a distinct name, the bubble chart. Larger points indicate higher values. A more detailed discussion of how bubble charts should be built can be read in its own article.

Hue can also be used to depict numeric values as another alternative. Rather than using distinct colors for points as in the categorical case, we want to use a continuous sequence of colors, so that, for example, darker colors indicate higher values. Note that, for both size and color, a legend is important for interpretation of the third variable, since our eyes judge size and color much less precisely than position.

Highlight using annotations and color
If you want to use a scatter plot to present insights, it can be good to highlight particular points of interest through the use of annotations and color. Desaturating unimportant points makes the remaining points stand out, and provides a reference to compare the remaining points against.

When the two variables in a scatter plot are geographical coordinates – latitude and longitude – we can overlay the points on a map to get a scatter map (aka dot map). This can be convenient when the geographic context is useful for drawing particular insights and can be combined with other third-variable encodings like point size and color.
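The article stays tool-agnostic, so purely as an illustration, here is a minimal sketch of a scatter plot with a least-squares trend line and a categorical third variable encoded as point color. It assumes Python with numpy and matplotlib, and the tree-style data is synthetic, invented for the example:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Synthetic data mirroring the diameter/height example above.
    diameter = rng.uniform(10, 60, size=120)            # cm
    height = 0.25 * diameter + rng.normal(0, 1.5, 120)  # m, roughly linear
    species = rng.choice(["oak", "pine"], size=120)     # categorical third variable

    # Point color encodes the categorical third variable.
    for name, color in [("oak", "tab:green"), ("pine", "tab:brown")]:
        mask = species == name
        plt.scatter(diameter[mask], height[mask], c=color, alpha=0.6, label=name)

    # Least-squares linear trend line fitted over all points.
    slope, intercept = np.polyfit(diameter, height, 1)
    xs = np.linspace(diameter.min(), diameter.max(), 100)
    plt.plot(xs, slope * xs + intercept, color="black", linewidth=1)

    plt.xlabel("Diameter (cm)")
    plt.ylabel("Height (m)")
    plt.legend()
    plt.show()

The alpha value doubles as a simple overplotting mitigation, and np.polyfit with degree 1 gives the basic linear trend line; a higher degree would give one kind of non-linear trend line mentioned later in the article.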
As noted above, a heatmap can be a good alternative to the scatter plot when there are a lot of data points that need to be plotted and their density causes overplotting issues. However, the heatmap can also be used in a similar fashion to show relationships between variables when one or both variables are not continuous and numeric. If we try to depict discrete values with a scatter plot, all of the points of a single level will fall in a straight line. Heatmaps overcome this overplotting by binning values into boxes of counts.

Connected scatter plot
If the third variable we want to add to a scatter plot indicates timestamps, then one chart type we could choose is the connected scatter plot. Rather than modify the form of the points to indicate date, we use line segments to connect observations in order. This can make it easier to see how the two main variables not only relate to one another, but how that relationship changes over time. If the horizontal axis also corresponds with time, then all of the line segments will consistently connect points from left to right, and we have a basic line chart. (A short sketch of a connected scatter plot follows at the end of this article.)

The scatter plot is a basic chart type that should be creatable by any visualization tool or solution. Computation of a basic linear trend line is also a fairly common option, as is coloring points according to levels of a third, categorical variable. Other options, like non-linear trend lines and encoding third-variable values by shape, however, are not as commonly seen. Even without these options, the scatter plot can be a valuable chart type to use when you need to investigate the relationship between numeric variables in your data.

The scatter plot is one of many different chart types that can be used for visualizing data. Learn more from our articles on essential chart types, how to choose a type of data visualization, or by browsing the full collection of articles in the charts category.
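To make the connected scatter plot concrete, here is a small sketch in the same assumed toolset (Python with matplotlib); the months and values are invented for the example:

    import matplotlib.pyplot as plt

    # Hypothetical monthly observations of two related measures.
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    x = [2.0, 2.4, 3.1, 3.0, 3.6, 4.2]
    y = [5.1, 5.8, 6.0, 6.9, 7.4, 7.2]

    # Line segments connect observations in time order; markers show the points.
    plt.plot(x, y, marker="o", linestyle="-")

    # Label each point with its timestamp so the ordering stays readable.
    for label, xi, yi in zip(months, x, y):
        plt.annotate(label, (xi, yi), textcoords="offset points", xytext=(5, 5))

    plt.xlabel("Variable A")
    plt.ylabel("Variable B")
    plt.show()

Because the data are passed in chronological order, the segments trace the relationship through time; sorting by the x values instead would collapse this back toward an ordinary line chart.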
Bilingual education planning: language allocation

Language use in the classroom: one language or two languages?

Using two languages in the classroom
'A key consideration is the nature of the linguistic balance between Welsh and English, and the intensity of Welsh-medium input required in order for learners to reach fluency in both Welsh and English over time.' (Welsh-medium Education Strategy, para. 2.13)

García, O. (2009), Bilingual Education in the 21st Century: 'Bilingual education takes on many different forms, and increasingly, in the complexity of the modern world, includes forms where two or more languages are separated for instruction, but also forms where two or more languages are used together in complex combinations.'
Notes: García emphasizes the complexity of bilingual education today and raises an important question here: separate the two languages or bring them together?

García, O. (2009), Bilingual Education in the 21st Century: 'What makes bilingual education complex is that one has to think not only of pedagogy, approaches, and methodology, but also of how to allocate, arrange, and use the two or more languages in instruction.'
Notes: Research findings reflect these comments by Ofelia García about bilingual education internationally: these are important decisions that teachers have to make on a daily basis, and they vary according to the nature and linguistic background of the children and of the community in which the school is situated. Language allocation in Welsh-medium schools is explained in the following slide.

'Welsh-medium education between the ages of three or four and approximately seven usually means delivering provision primarily through the medium of Welsh. From seven to eleven years of age (Key Stage 2 of the national curriculum), English-language skills are also developed through appropriate use of the language as a subject and medium.'
Notes: A key consideration is the nature of the linguistic balance between Welsh and English, and the intensity of the Welsh-medium input needed for learners to be fluent in both Welsh and English over time. (Welsh Assembly Government (2010), Welsh-medium Education Strategy, para. 2.13)

'It is generally accepted that at least around 70% of curricular time should be through the medium of Welsh if learners are to acquire a sufficiently sound command of the language to enable them to use it across a broad range of contexts with confidence and fluency. The Welsh Assembly Government accepts this guiding principle for Welsh-medium schools at primary and secondary level.'

Language allocation in the primary school
In the following video, a primary school head-teacher explains that language planning across the school is vitally important in order to implement effective bilingual education policy and practice.

Language Allocation in the Foundation Phase (3-7 years old, n=17 schools)
Notes: This graph shows language allocation in the Foundation Phase. These pupils are between 3 and 7 years old. The data was collected by questionnaire from the selected schools across Wales.
There were 17 responses out of a total of 21. The results indicate that Welsh is the medium of instruction in all areas of the Foundation Phase:
- Personal and Social Development
- Language, Literacy and Communication Skills
- Welsh Language Development
- Mathematical Development
- Knowledge and Understanding of the World
- Physical Development
- Creative Development
'Bilingual' in this context refers to a 'dual-stream school' in which there are two streams, Welsh-medium and English-medium, taught separately.

Language Allocation in KS2 (7-11 years old, n=17 schools)
Notes: This graph refers to the language allocation in Key Stage 2 (in the same schools as the Foundation Phase). The pupils are aged between 7 and 11 years old. The following categories were used to identify the language of instruction in different subjects: English, Welsh, and Bilingual. 'Bilingual' in this context includes classes using Welsh and English within the same lesson, and also classes in which Welsh and English are used separately, for example, some topics in Welsh and others in English. The results indicate that bilingual teaching and learning occurs in all subjects, namely:
- Mathematics
- Science
- Design and Technology
- Information Technology
- History
- Geography
- Art and Design
- Music
- PE
- RE
- Personal and Social Education
Science and Mathematics are taught in English in one primary school; this is because these subjects are taught in English in the feeder secondary school.

Language allocation in the secondary school
The video shows the language co-ordinator in a secondary school discussing the importance of language planning. Language allocation in the secondary school is complex. A number of key factors need to be addressed, including:
- the bilingual skills of the teachers
- the bilingual skills of the pupils
- assessment procedures
- the Local Education Authority's bilingual policy
- the aspirations of the parents

Questions for discussion
How does the language policy in your school address language allocation in different subject areas? Discuss how pupils' bilingual skills are developed in:
- Primary school: Foundation Phase, KS2
- Secondary school: KS3, KS4 and KS5
Cancer cells are incredibly flexible about promoting their own movement and growth in the body. They can travel through blood vessels as thin as spider silk. They even change their shape to do so, yet are still able to divide and cluster into colonies in those very skinny spaces. That spreading through the body is called metastasis, and it’s what makes cancer turn deadly. Researchers are now putting nanotechnology to work to help decipher exactly how cancer cells perform this extraordinary feat. An article in Nanotechnology Now reports: The researchers trapped live cancer cells in the tubular membranes and, with optical high- and super-resolution microscopy, could see how the cells adapted to the confined environment. Cell structures significantly changed in the nanomembranes, but it appeared that membrane blebbing — the formation of bulges — at the cells’ tips helped keep genetic material stable, an important requirement for healthy cell division. For more details, check out this one-minute video on how scientists used microtubular membranes to study how cancer cells divide in capillaries.
When there are northern lights, or auroras, why do they occur in both hemispheres... and not just one? It all begins with charged particles from solar outbursts, such as flares. Some of those particles collide with the Earth's magnetic shield, called the magnetosphere. The particles then move like iron filings toward the ends of a bar magnet (remember that high school science experiment?). In this case, the ends of the 'bar magnet' happen to be the north and south poles, which is why auroras appear around both. When those charged particles collide with the molecules of gas near the poles, the molecules glow. Different types of gas molecules glow in different colors.
- Transfer refers to the extent to which learning is applied to new contexts.
- Transfer is facilitated by:
  - instruction in the abstract principles involved
  - demonstration of contrasting cases
  - explicit instruction of transfer implications
  - sufficient time
- Learning for transfer requires more time and effort in the short term, but saves time in the long term.

Transfer refers to the ability to extend (transfer) learning from one situation to another. For example, knowing how to play the piano doesn't (I assume) help you play the tuba, but presumably is a great help if you decide to take up the harpsichord or organ. Similarly, I've found my knowledge of Latin and French a great help in learning Spanish, but no help at all in learning Japanese.

Transfer, however, doesn't have to be positive. Your existing knowledge can hinder, rather than help, new learning. In such a situation we talk about negative transfer. We've all experienced it. At the moment I'm experiencing it with my typing -- I've converted my standard QWERTY keyboard to a Dvorak one (you can hear about this experience in my podcast, if you're interested).

Teachers and students generally hope that learning will transfer to new contexts. If we had to learn how to deal with every single possible situation we might come across, we'd never be able to cope with the world! So in that sense, transfer is at the heart of successful learning (and presumably the ability to transfer new learning is closely tied to that elusive concept, intelligence).

Here's an example of transfer (or lack of it) in the classroom. A student can be taught the formula for finding the area of a parallelogram, and will then be capable of finding the area of any parallelogram. However, if given different geometric figures, they won't be able to apply their knowledge to calculate the area, because the formula they have memorized applies only to one specific figure: the parallelogram. However, if the student is instead encouraged to work out how to calculate the area of a parallelogram by using the structural relationships in the parallelogram (for example, by rearranging it into a rectangle by moving one triangle from one end to the other), then they are much more likely to be able to use that experience to work out the areas of different figures.

This example gives a clue to one important way of encouraging transfer: abstraction. If you only experience a very specific example of a problem, you are much less likely to be able to apply that learning to other problems. If, on the other hand, you are also told the abstract principles involved in the problem, you are much more likely to be able to use that learning in a variety of situations. [example taken from How People Learn]

Clearly there is a strong relationship between understanding and transfer. If you understand what you are doing, you are much more likely to be able to transfer that learning to problems and situations you haven't encountered before, which is why transfer tests are much better tests of understanding than standard recall tests. That is probably more obvious for knowledge such as scientific knowledge than it is for skill learning, so let me tell you about a classic study. In this study, children were given practice in throwing darts at an underwater object. Some of the children were also instructed in how light is refracted in water, and how this produces misleading information regarding the location of objects under water.
While all the children did equally well on the task they had practiced -- throwing darts at an object 12 inches under water -- the children who had been given the instruction did much better when the target was moved to a depth of only 4 inches.

Understanding is helped by contrasting cases. Which features of a concept or situation are important is often only evident when you can see different but related concepts. For example, you can't fully understand what an artery is unless you contrast it with a vein; the concept of recognition memory is better understood if contrasted with recall memory.

Transfer is also helped if transfer implications are explicitly pointed out during learning, and if problems are presented in several contexts. One way of doing that is to use "what-ifs" to expand your experience. That is, having solved a problem, you ask: "What if I changed this part of the problem?"

All of this points to another requirement for successful transfer: time. Successful, "deep" learning requires much more time than shallow rote learning. On the other hand, because it can apply to a much wider range of problems and situations, is much less easily forgotten, and facilitates other learning, it saves a lot of time in the long run!
The Meaning of Division
For this ESL math vocabulary worksheet, students learn the meaning of the terms separate, equal groups, and share equally among. Students solve 3 multiple-choice problems pertaining to these terms. (Grades 2-3, Math)

Related resources:
- Math Stars: A Problem-Solving Newsletter, Grade 2 (Grades 2-4, Math). Develop the problem-solving skills of your young learners with this collection of math newsletters. Covering a variety of topics ranging from simple arithmetic and number sense to symmetry and graphing, these worksheets offer a nice...
- Power Pack: Lessons in Civics, Math, and Fine Arts (Grades 3-12, Math). Newspaper in Education (NIE) Week honors the contributions of the newspaper and is celebrated in the resource within a civics, mathematics, and fine arts setting. The resource represents every grade from 3rd to 12th with questions and...
Around 1.4 million people in the UK use hearing aids to help them hear better because of some degree of hearing loss. Hearing aids pick up sound and amplify it, making it easier to hear. Some use sophisticated filtering techniques to help reduce background noise. Hearing aids are available on the NHS or privately.

How hearing aids help
A hearing aid is an electronic device designed to improve your hearing. Small enough to wear in or behind your ear, hearing aids make some sounds louder, improving hearing and speech comprehension. They may help you to hear better in both quiet and noisy settings. Not everyone with hearing loss can benefit from hearing aids.

Hearing aids are most commonly used for people with hearing loss caused by damage to the inner ear or auditory nerve (called sensorineural hearing loss) from:
- Injury caused by noise or certain medications

People with conductive hearing loss will require a medical evaluation by a doctor, usually an otolaryngologist. Otolaryngologists specialise in treating disorders of the ears, nose and throat and are also called ENT specialists. Most conductive hearing loss can be improved or corrected with surgery or possibly medical management. People may choose not to manage their conductive hearing loss with medical or surgical treatment. If the person has an open ear canal and a relatively normal external ear, a hearing aid is another option for managing their conductive hearing loss. Some people are born without an external ear or ear canal, which prevents use of a conventional hearing aid. These patients may be able to use a bone conduction hearing aid instead of a conventional device.

Seek medical advice if you're not sure about your type of hearing loss and whether or not you would benefit from a hearing aid. Your doctor may refer you to one of the following:
- Otolaryngologist. This specialist will perform a medical evaluation in order to determine the cause of your hearing loss.
- Audiologist. An audiologist is a hearing specialist who performs tests to determine the type and degree of your hearing loss. These specialists can evaluate your hearing loss and dispense hearing aids.
If you have hearing loss in both ears, it is probably best to wear two hearing aids.

Batteries power the hearing aid's electronics. Here's how the other parts of a hearing aid work:
- A microphone picks up sound from the environment
- An amplifier makes the sound louder
- A receiver sends these amplified signals into the ear, where they're converted to nerve signals and sent to the brain

Types and styles of hearing aids
Work with an audiologist to figure out which type and style will work best, as well as any special features you need. This depends on factors such as:
- The type and severity of your hearing loss
- Dexterity – some devices have small parts, which can be fiddly
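As a rough, hypothetical illustration of the microphone-amplifier-receiver chain described above, the Python sketch below applies a flat gain to sampled sound. Real hearing aids apply frequency-dependent, often compressive gain fitted to the wearer's hearing loss, so this is a toy model only, and every name and number in it is invented:

    import numpy as np

    def amplify(samples: np.ndarray, gain_db: float = 20.0) -> np.ndarray:
        """Apply a flat gain to microphone samples, clipping to the valid range.

        A toy stand-in for the amplifier stage; real devices shape gain
        by frequency, fitted to the wearer's audiogram.
        """
        gain = 10 ** (gain_db / 20.0)  # convert decibels to a linear factor
        return np.clip(samples * gain, -1.0, 1.0)

    # One second of a quiet 440 Hz tone standing in for microphone input.
    t = np.linspace(0, 1, 16000, endpoint=False)
    mic_input = 0.01 * np.sin(2 * np.pi * 440 * t)
    louder = amplify(mic_input, gain_db=30.0)
    print(f"peak in: {mic_input.max():.4f}, peak out: {louder.max():.4f}")

The clipping step mimics the hard limit of the receiver stage: however much gain is applied, the output cannot exceed what the hardware can reproduce.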
Inventor of the Week: Alexander Joy Cartwright (1820-1892)

Baseball, like the United States, evolved out of a British precedent into a unique and independent institution. The origin of American baseball lies in an informal offshoot of the English sport of cricket called "rounders," played in the Colonies as early as the mid-18th century. The game was already called "base-ball" in a children's book of 1744. Essentially, a batter had to hit a pitched ball and then run the bases (from one to five of them) without being tagged or "plugged" (hit by a ball thrown by one of the fielders).

A special commission concluded in 1907 that baseball had been "invented" by the Civil War hero Abner Doubleday (1819-1893) in Cooperstown, New York, in 1839. But it was actually Alexander Joy Cartwright (1820-1892) of New York who established the modern baseball field (1845). In Cartwright's rules of play, however, plugging was allowed; a ball fielded on one bounce was an out; pitching was underhand; and the game was won by the first team to score 21 "aces" (runs), in however many innings.

By this time, baseball had become a leisure activity for wealthy young men. But later, after Civil War soldiers who had played baseball behind the lines brought the game back to their hometowns, baseball was both watched and played by Americans of every social status. Baseball was institutionalized and further developed by the National Association in 1858. The Cincinnati Red Stockings became the first all-professional team in 1869. The rival National League (founded 1876) and American League (founded 1901) competed in the first World Series in 1903 and the first All-Star Game in 1933. In 1947, the Brooklyn Dodgers signed Jackie Robinson, removing the color barrier that had consigned black players to the "Negro Leagues." Since then, baseball has continued to embrace ever more players and fans of all ages, both sexes, and various backgrounds, both in the United States and worldwide (especially in Central America and Japan). Today, despite the disillusioning Major League strike of 1994-95, baseball remains unchallenged as the quintessential American pastime.