Controlling a Computer With Thought
RESEARCHERS ARE DEVELOPING NEW METHODS OF TESTING THE OPERABILITY OF PROSTHETICS VIA THE BRAIN.

Introduction
The brain is under constant pressure to learn new skills to complete new tasks. When the body fails, this link from the brain to the outside world is broken. For years researchers have been searching for a way to reconnect the conscious brain with the world it exists in.

What is being done?
Rhesus monkeys can use their thoughts to control a computer cursor, via electrodes implanted in their brains. Because they control the cursor directly, they are able to repeat certain movements day after day: motor memory for a device that exists outside the body. This is a crucial breakthrough in the world of neuroprosthetics.

Has this not been done before?
Previous attempts gave encouragement to the development of neural prosthetics, but subjects who gained the ability to control a physical object were unable to retain the skill between sessions. Motor memory is crucial to operation. This improvement allowed the subjects to immediately recall skills learned in a previous session.

Cont.
Previous research used existing connections between the brain and a real limb in order to control an artificial one. The new technique relies on a completely different section of the brain, in essence assimilating a new limb into the body. Unlike previous studies, researchers relied on the same set of neurons throughout the three-week-long study.

How Was it Done?
Arrays of microelectrodes were implanted on the primary motor cortex, about 2-3 mm into the brain. The activity of these neurons was monitored using computer software. The result was a subset of 10-40 neurons whose activity remained constant from day to day. While these select neurons were monitored, the monkey's arm was placed inside a robotic exoskeleton which could track its movement. The exoskeleton controlled a cursor on a screen watched by the monkey.

Cont.
As the monkey went through assigned tasks, two sets of data were recorded: brain signals and the corresponding cursor positions. To effectively analyze this data, the researchers had to determine whether the monkey could perform the same task using only its brain. This required a decoder which translates brain activity into cursor movement.

How to analyze the data
The decoder was a mathematical model which multiplied the firing rates of the neurons by certain weights (a toy sketch of such a decoder follows after this section). Next the researchers immobilized the arm and fed the neuronal signals into the decoder. Within a week the monkey's performance reached 100%, where it remained for the duration of the experiment.

Why is this important?
This evidence of consistent performance supports the importance of tracking the same set of neurons throughout testing. In previous studies, the decoder had to be reprogrammed every time new cortical activity was introduced, which prevented the creation of a cortical map (a pattern of activity). To further back this assertion, the researchers repeated the process with a second decoder; functionality was back up to 100% within three days. A further test used a shuffled decoder, with no connection between the original physical movements and cursor movements. The monkeys were able to repeat the progression back up to 100% within three days. Practice allowed the monkey's brain to develop a cortical map for the new decoder.
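The decoder slide above describes a linear model: cursor movement as neuronal firing rates multiplied by weights. As a purely illustrative sketch in Python (not the researchers' actual model; the simulated data, sizes, and least-squares fit are assumptions for demonstration), such a decoder can look like this:

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 20, 500

# Simulated training data: firing rates recorded while the monkey's arm
# (and therefore the cursor) moved inside the exoskeleton.
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
hidden_map = rng.normal(size=(n_neurons, 2))   # unknown true mapping to (x, y)
cursor = rates @ hidden_map + rng.normal(scale=0.1, size=(n_samples, 2))

# Fit the decoder: a weight matrix W such that rates @ W approximates the
# recorded cursor positions (ordinary least squares).
W, *_ = np.linalg.lstsq(rates, cursor, rcond=None)

# Decode a fresh burst of neural activity into a cursor command.
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
print(new_rates @ W)   # predicted (x, y) cursor position

Once fit, the same weight matrix can be applied day after day, which is the property the study emphasizes: a stable decoder lets the brain build a stable cortical map.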
Conclusion
These results suggest that in the future, with the proper testing and execution, this method could be used to give the disabled an opportunity to control prosthetics through neural-to-machine connections.

Sources
Schmidt, E. M., et al. "Fine Control of Operantly Conditioned Firing Patterns of Cortical Neurons." Experimental Neurology 61 (1978): 349-369.
Vidal, J. "Toward Direct Brain-Computer Communication." Annual Review of Biophysics and Bioengineering 2 (1973): 157-180.
Guizzo, Erico. "Monkey's Brain Can 'Plug and Play' to Control Computer With Thought." IEEE Spectrum, July 2009. Web. 19 Feb. 2010.
Hypertext Markup Language (HTML) is one of the most commonly used languages for designing and developing web pages. To create a web page using HTML, you must be familiar with the HTML basics. HTML tags instruct the web browser about how it should interpret the document, and the attributes of a tag provide additional information such as its behaviour and properties (see the attribute example at the end of this section). The content is displayed in the browser according to the properties and behaviours defined by the tag and its attributes. HTML tags can be categorized according to the behaviour they produce on the web page.

Here is a basic HTML example that you can try:

<!DOCTYPE HTML>
<html>
<head>
    <title>This is Title</title>
</head>
<body>
    <h1>This is main heading</h1>
    <p>This is a paragraph.</p>
    <h2>This is sub-heading</h2>
    <p>This is also a paragraph</p>
</body>
</html>

Write the above HTML code in your text editor (Notepad or another editor) and save it with the .html extension. Now open the saved file in your browser to see how it looks.

Here is one more basic HTML example:

<!DOCTYPE HTML>
<html>
<head>
    <title>HTML Basics Tutorial Example</title>
</head>
<body>
    <h1>HTML Basics</h1>
    <p>This is HTML Basic Tutorial</p>
    <h2>HTML Basics Example</h2>
    <p>This is HTML Basic Example</p>
</body>
</html>

Open this file in your browser as well to see the output of the code.
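The tutorial mentions attributes, but the two examples above use only bare tags. As a small additional illustration (the link target and image filename below are placeholders, not part of the original tutorial), here is a page that uses two common attributes: href on an anchor tag, and src/alt on an image tag.

<!DOCTYPE HTML>
<html>
<head>
    <title>Attribute Example</title>
</head>
<body>
    <!-- The href attribute tells the browser where this link points -->
    <a href="https://www.example.com">Visit Example</a>
    <!-- The src attribute names the image file; alt supplies fallback text -->
    <img src="picture.jpg" alt="A sample picture">
</body>
</html>

Save and open this file the same way as the earlier examples; the browser renders a clickable link and an image (or the alt text if picture.jpg is missing).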
In 1934, the Indian National Congress made the demand for a Constituent Assembly. During the Second World War, this demand for an independent Constituent Assembly formed only of Indians gained momentum, and the Assembly was convened in December 1946. Between December 1946 and November 1949, the Constituent Assembly drafted a constitution for independent India. Free to shape their destiny at last, after 150 years of British rule, the members of the Constituent Assembly approached this task with the great idealism that the freedom struggle had helped produce. The result was the longest written constitution of any sovereign country in the world, which has safeguarded the democratic nature of our country for the last 65 years.

The Indian Constitution: Key Features
The members of the Constituent Assembly had a huge task before them. The country was made up of several different communities who spoke different languages, belonged to different religions, and had distinct cultures. Also, when the Constitution was being written, India was going through considerable turmoil. The partition of the country into India and Pakistan was imminent, some of the Princely States remained undecided about their future, and the socio-economic condition of the vast mass of people appeared dismal. All of these issues played on the minds of the members of the Constituent Assembly as they drafted the Constitution. They rose to the occasion and gave this country a visionary document that reflects a respect for maintaining diversity while preserving national unity. The final document also reflects their concern for eradicating poverty through socio-economic reforms, as well as emphasising the crucial role the people can play in choosing their representatives. Listed below are the key features of the Indian Constitution.

1. Federalism: This refers to the existence of more than one level of government in the country. In India, we have governments at the state level and at the centre. Panchayati Raj is the third tier of government, which exists at the grassroots level. While each state in India enjoys autonomy in exercising powers on certain issues, subjects of national concern require that all of these states follow the laws of the central government. The Constitution contains lists that detail the issues on which each tier of government can make laws. In addition, the Constitution also specifies where each tier of government gets the money for the work that it does. Under federalism, the states are not merely agents of the federal government but draw their authority from the Constitution as well.

2. Parliamentary Form of Government: The different tiers of government consist of representatives who are elected by the people. The Constitution of India guarantees universal adult suffrage for all citizens. When they were making the Constitution, the members of the Constituent Assembly felt that the freedom struggle had prepared the masses for universal adult suffrage and that this would help encourage a democratic mindset and break the clutches of traditional caste, class and gender hierarchies. This means that the people of India have a direct role in electing their representatives.

3. Separation of Powers: According to the Constitution, there are three organs of the State. These are the legislature, the executive and the judiciary. The legislature refers to our elected representatives. The executive is a smaller group of people who are responsible for implementing laws and running the government.
The judiciary refers to the system of courts in this country. In order to prevent the misuse of power by any one branch of the State, the Constitution says that each of these organs should exercise different powers. Through this, each organ acts as a check on the other organs of the State, and this ensures the balance of power between all three.

4. Fundamental Rights: The section on Fundamental Rights has often been referred to as the 'conscience' of the Indian Constitution. It protects citizens against the arbitrary and absolute exercise of power by the State. The Constitution, thus, guarantees the rights of individuals against the State as well as against other individuals. It also guarantees the rights of minorities against the majority. In addition to Fundamental Rights, the Constitution also has a section called Directive Principles of State Policy. This section was designed by the members of the Constituent Assembly to ensure greater social and economic reform, and to serve as a guide to the independent Indian State to institute laws and policies that help reduce the poverty of the masses.

5. Secularism: A secular state is one in which the state does not officially promote any one religion as the state religion.

The Fundamental Rights in the Indian Constitution include:
1. Right to Equality: All persons are equal before the law. This means that all persons shall be equally protected by the laws of the country. It also states that no citizen can be discriminated against on the basis of their religion, caste or sex. Every person has access to all public places including playgrounds, hotels, shops, etc. The State cannot discriminate against anyone in matters of employment. The practice of untouchability has also been abolished.
2. Right to Freedom: This includes the right to freedom of speech and expression, the right to form associations, the right to move freely and reside in any part of the country, and the right to practise any profession, occupation or business.
3. Right against Exploitation: The Constitution prohibits trafficking, forced labour, and the employment of children under 14 years of age.
4. Right to Freedom of Religion: Religious freedom is provided to all citizens. Every person has the right to practise, profess and propagate the religion of their choice.
5. Cultural and Educational Rights: The Constitution states that all minorities, religious or linguistic, can set up their own educational institutions in order to preserve and develop their own culture.
6. Right to Constitutional Remedies: This allows citizens to move the court if they believe that any of their Fundamental Rights have been violated by the State.

State and Government: The government can change with elections. The "State" refers to a political institution that represents a sovereign people who occupy a definite territory. The Indian State has a democratic form of government. The government (or the executive) is one part of the State. The State refers to more than just the government and cannot be used interchangeably with it.

Points to Remember
1. It is the longest written constitution of any sovereign country in the world, containing 450 articles in 22 parts, 12 schedules and 95 amendments.
2. The Constitution was enacted by the Constituent Assembly on 26 November 1949, and came into effect on 26 January 1950.
3. It declares the Union of India to be a sovereign, socialist, secular, democratic republic, assuring its citizens of justice, equality, and liberty, and endeavours to promote fraternity among them.
The words "socialist", "secular", and "integrity" were added to the definition in 1976 by constitutional amendment. 4. After coming into effect, the Constitution replaced the Government of India Act 1935 as the country's fundamental governing document. 5. The first president of the Constituent Assembly was Dr Sachidanand Sinha. Later, Rajendra Prasad was elected president of the Constituent Assembly. 6. Harendra Coomar Mookerjee was the Vice President of the Constituent Assembly of India, and Chairman of the Minorities Committee of that assembly. 7. Dr Ambedkar was the chairman of the drafting committie along with six other members. 7. Indian Constitution is heavily influenced by British model of parliamentary democracy and American model of federal structure and the establishment of supreme court 8. Amendments to the Constitution are made by the Parliament, the procedure for which is laid out in Article 368. But the basic structure of the Constitution is immutable.
Grammar Homework Sheet
Year 7, Week 2
Grammar rules this week: Punctuation and sentence mood

A sentence is a unit of meaning, and to end a sentence you can choose from the following three pieces of punctuation: a full stop, a question mark or an exclamation mark.

Full stops, obviously, go at the end of a sentence, e.g. John was the last to arrive. Full stops usually go at the end of sentences that are statements. We can also say that they are in the declarative mood, e.g. I don't like cheese. Or: Peter put his cat down and picked up the tin of cat food.

Question marks go at the end of a question, e.g. Do you understand? This is also called the interrogative mood.

Exclamation marks go at the end of a sentence that shows a strong feeling, e.g. Leave me alone! Some of these might be exclamations. We call this the emphatic mood, e.g. I loved it! Others could be commands. We call this the imperative mood, e.g. Give it to me! There may be some examples of commands (the imperative mood) that don't convey a strong feeling, however, e.g. Take this to the English office for me please. In this case, you would end an imperative sentence with a full stop.

Always remember that you should only use ONE exclamation mark in formal, written English.
It was magnificent!!!!!! X
It was magnificent!
Recent advances in materials, fabrication strategies and device designs for flexible and stretchable electronics and sensors make it possible to envision a not-too-distant future where ultra-thin, flexible circuits based on inorganic semiconductors can be wrapped around and attached to any imaginable surface, including body parts and even internal organs. Robotic technologies will also benefit as it becomes possible to fabricate 'electronic skin' that, for instance, could allow surgical robots to interact, in a soft contacting mode, with their surroundings through touch. Researchers have now demonstrated that they can integrate high-quality silicon and other semiconductor devices on thin, stretchable sheets, to make systems that not only match the mechanics of the epidermis but also take on the full three-dimensional shape of the fingertip and, by extension, other appendages or even internal organs, such as the heart.

Printed electronics has emerged as a key research field to meet the requirements of large-area and cost-efficient production. The field is very broad and includes not only printable interconnects but also optoelectronics and magnetoelectronics. Cost-efficient, versatile electronic building blocks, such as transistors, diodes and resistors, are already available as printed counterparts of conventional semiconductor elements. However, an element that responds to a magnetic field, which is in high demand for printable electronics, had not yet been realized: printable magnetic sensors and contactless switches operating in combination with magnetic fields had not been reported. In new work, researchers in Germany have overcome most of these issues, fabricating the first printable magnetic sensor that relies on the giant magnetoresistance (GMR) effect. The magneto-sensitive ink they developed can be painted on any substrate, such as paper, polymers, ceramics, and glass, and retains a GMR ratio of up to 8% at ambient conditions, a value beyond the previous state of the art.

It has been known for some time that graphene can be used to detect individual gas molecules adsorbed on its surface: a graphene sensor can detect just a single molecule of a toxic gas. However, the extremely high sensitivity of graphene does not necessarily translate into selectivity for particular molecules. In other words, one can detect that some molecules attached to the graphene surface have changed the resistivity of a graphene field-effect transistor, but one cannot say what kind of molecule has attached. Scientists have therefore thought that truly selective gas sensing with graphene devices requires functionalizing the graphene surface with agents specific to different gas molecules. In new research, though, scientists have found that chemical vapors change the noise spectra of graphene transistors. The noise signal for each gas is reproducible, opening the way for practical, reliable and simple gas sensors made from graphene. Graphene, with its distinctive band structure and unique physicochemical properties (such as exceptionally low intrinsic electrical resistivity, high surface area, rapid electrode kinetics and good mechanical properties), is considered an attractive material for analytical electrochemistry.
However, one of the key technical challenges for the use of graphene as a functional material in device applications is the integration of nanoscale graphene onto micro- or millimeter-sized sensing platforms. With a new methodology, a team from Florida International University was able to integrate graphene onto three-dimensional (3D) carbon microstructure arrays with good uniformity and controllable morphology.

Single-walled carbon nanotubes (SWCNTs) are arguably the ultimate biosensor among nanoscale semiconducting materials, due to their high surface-to-volume ratio and unique electronic structure. After more than a decade of excitement, though, more and more researchers in the nanotube field believe that pristine SWCNTs are very limited as a sensing material. Ironically, the ultrahigh sensitivity of SWCNTs is easily compromised by various unintentional contaminants from the device fabrication process as well as from the ambient environment. As a result, significant efforts have focused on ways to functionalize or decorate nanotubes with other species in order to improve their sensitivity. Researchers have now shown that applying continuous in-situ ultraviolet light illumination during gas detection can enhance an SWCNT sensor's performance by orders of magnitude under otherwise identical sensing conditions.

Early detection of pathogenic bacteria is critical to prevent disease outbreaks and preserve public health. This has led to urgent demands for highly efficient strategies for isolating and detecting these microorganisms in connection with food safety, medical diagnostics, water quality, and counter-terrorism. A team of scientists has now developed a novel approach to interfacing passive, wireless graphene nanosensors onto biomaterials via silk bioresorption. The nanoscale nature of graphene allows for high adhesive conformality after biotransfer and highly sensitive detection. The team demonstrated their nanosensor by attaching it to a tooth for battery-free, remote monitoring of respiration and bacteria detection in saliva.

Optical fibers have revolutionized telecommunications by providing higher-performance, more reliable links with ever-decreasing bandwidth cost. In parallel with these developments, fiber-optic sensor technology has been a major user of technologies associated with the optoelectronic and fiber-optic communications industry. Today, with the rapid advance of communications and especially sensing applications, there is an ever-increasing need for advanced performance and additional functionality. This, however, is difficult to achieve without addressing fundamental fabrication issues related to integrating advanced functional materials at the micro- and nanoscale onto optical fibers. Solving these technical problems will open up the possibility of developing multifunctional labs integrated onto a single optical fiber, exchanging information and combining sensorial data. This could result in auto-diagnostic features as well as new photonic and electro-optic functionalities useful in many strategic sectors such as optical processing, the environment, life science, and safety and security.

Various types of nanostructures are used in the development of nanosensors: nanoparticles, nanotubes, nanorods, two-dimensional materials like graphene, embedded nanostructures, porous silicon, and self-assembled materials.
For instance, gas sensors often operate by detecting the subtle changes that deposited gas molecules make in the way electricity moves through a surface layer. Researchers have fabricated gas sensors based on carbon-nanotube field-effect transistors, which can detect electrical potential changes in their surroundings. While these and related sensing schemes can be all-electronic, i.e., not requiring optical readout, they all require sophisticated nanolithographic techniques to isolate and identify the active nanosensor and to integrate electrical contacts to it. Researchers have now presented a nanoscale 3D architecture that affords highly sensitive, room-temperature, rapid-response, all-electronic chemical detection.
Major topics covered on beginner Spanish tests include grammar, language function, pronunciation and vocabulary. Grammar is the set of rules that dictate how speakers form words and sentences in all natural languages, while vocabulary refers to the words used on particular occasions and in different spheres. Language function refers to speakers' purposes for using language to communicate.

Beginning Spanish tests typically move from performing simple greetings and introductions to discussing increasingly complicated topics, including people, places, abilities and desires. This includes active vocabulary, which consists of words speakers use with confidence, and passive vocabulary, which consists of words most people understand but do not necessarily use in daily conversation. In general, language learning requires the continued practice of earlier elements to build a foundation for introducing new ones.

Language is used formally and informally, and each situation governs the grammatical rules and vocabulary items needed to carry out language functions. Some examples of these functions include comparing and contrasting, persuading, forming and answering questions, and expressing likes and dislikes. Other functions, such as discussing cause and effect, summarizing and predicting, are also found on beginning-level Spanish tests.

The English alphabet contains 26 letters, but the Spanish alphabet, called the "abecedario," has 30. The Spanish "c" is followed by the letter "ch," the "l" is followed by the "ll," the "n" is followed by the "ñ," and after the "r" comes the "rr." Each of these letters can change word meanings, so acquiring correct pronunciation is essential to understanding Spanish and being understood.
Since the mid-20th century, farmers have routinely used fumigants to prepare soil for sensitive annual crops, like strawberries and vegetables, and for planting new orchards on land where previous orchards or vineyards of the same kind were grown. Several of the most effective fumigants — such as methyl bromide, chloropicrin and Telone — work so well that scientists cannot fully explain why crops are so much more productive after treatment. But now the use of soil fumigants is falling out of favor.

Soil fumigation was first used in the late 1880s, after the grape root aphid Phylloxera was accidentally carried from the eastern United States to Europe, with devastating effect. In desperation, farmers tried to control the pest with carbon disulfide, a readily ignitable chemical that posed an explosive danger to the workers applying it. By the 1930s, 17 chemicals were available to farmers to control nematodes, tiny soil-borne roundworms that harm crops. In 1941, USDA scientists Al Taylor and C.W. McBeth were the first to report a practical application for the chemical compound methyl bromide in agricultural fields. Methyl bromide is a naturally occurring but toxic compound, produced by the Earth's oceans, volcanoes and wildfires. It can also be manufactured. It would be almost two decades before UC Berkeley plant pathologist Stephen Wilhelm and businessman William Storkan developed a method for applying the gas to an agricultural field while at the same time spreading a polyethylene tarp onto the field surface to keep the fumigant in place.

Methyl bromide treatment solves many agricultural problems. It kills nematodes, fungi, plant pathogens and weed seeds. When properly applied, methyl bromide moves freely through tiny gaps in the soil, creating an optimum environment for plant root growth and development. Treated soil permitted plants to reach their full potential, increasing yields by dramatic proportions. In strawberries, for example, yields more than quadrupled from the early 1960s to the 1990s.

However, as the turn of the millennium approached, climate scientists found that the productivity came at a price. Methyl bromide is an ozone-depleting chemical. Its production and use worldwide were severely curtailed by the Montreal Protocol, and its use as an agricultural pesticide is being phased out by most countries, including the United States. The agricultural industry has petitioned for extensions of the methyl bromide ban because it lacks an equally effective alternative. Unfortunately, many chemical alternatives have been found to be highly toxic, which has resulted in strict regulations for their use, requiring wide buffer zones, careful timing, and worker safety measures. In many cases, the use of chemicals is impossible. The reduced availability of soil fumigants in agriculture is prompting University of California scientists to look for alternatives.
Edward of Woodstock, the Black Prince, ranks among the most famous figures in British history. A renowned soldier, a paragon of chivalry, a prince of great wealth and largesse, he became a legend within his own lifetime. That he should die before inheriting the Crown only adds to the romance that surrounds this enigmatic warrior. Yet the Black Prince personified the martial age in which he lived. He is the most illustrious of a band of young and ambitious men whose existence was defined and dominated by warfare. The rise of these lords of war fuelled the incessant conflicts of the later fourteenth century, and made possible a series of great English military triumphs in France and Scotland. Within a generation they had transformed England into the most feared military force in Christendom. Such incredible success, however, led these lords to accumulate enormous wealth and power. In the last decades of the century, when military triumph turned to failure, it was the lords of war who deposed an English king and triggered a descent towards civil war. Through the life of the Black Prince the rise of these lords can be followed, and by appreciating the age he exemplified, the legendary figure of the Black Prince can be more fully understood. In an era of warfare and chivalry, the lords of war performed acts of both brutality and honour: fighting against the Scots, waging the Hundred Years War, engaging in battles such as Crécy and Poitiers, participating in tournaments, and aspiring to become elite knights of the Garter. The profits of war brought tremendous wealth to knights such as John Chandos, Thomas Dagworth and Robert Knolles, and yet many 'loved honour more than silver'. In northern England the Percies and Nevilles became all-powerful, while in Scotland the bellicose earls of Douglas and March rose to prominence; the personal ambitions and rivalries of these lords had a profound impact upon the wars and led to domestic upheaval. It was a period of contradictions: of chivalry and barbarity, pious devotion and extravagant materialism, loyal service and ruthless self-aggrandisement. For many, however, it was remembered above all as a golden age of heroic military triumph, with the Black Prince an inspirational figure to both Henry V and Henry VIII. Yet the truth behind the romance and legend is rather different, as is the ultimate legacy of the Black Prince and the lords of war.

David Cornell was born in Leicester and educated at Loughborough Grammar School. He read History at Durham University, graduating with a First. During his time at Durham he was introduced to the Scottish wars by Professor Michael Prestwich, and he subsequently returned to Durham as a postgraduate, where he spent several years researching the wars. He was awarded a PhD in early 2007 and is currently engaged in further research and the writing of a number of academic articles.
Most memory comes in the form of chips, individual integrated circuits meant for permanent installation on a printed circuit board. The capacity of each chip is measured in bits—or, as is more likely in the modern world, megabits. The first chips had narrow, one-bit-wide data buses, so they moved information one bit at a time. To achieve a byte-wide data bus, eight chips had to be used together (under the coordination of the memory controller). Most modern chips use wider data buses—four or eight or more bits—but none comes close to matching the 64-bit data buses of modern computers.

To make memory more convenient to install and upgrade in practical computers, memory-makers package several memory chips on a small circuit board to make a memory module. Despite the added circuit board, modules provide a more compact memory package because the chips are soldered to the modules, thus eliminating space-wasting sockets. Soldering the chips down also allows them to be installed closer together (because no individual access is required). Moreover, because the discrete chips that are installed in the memory module never need to be individually manipulated except by a machine, they can use more compact surface-mount packages. A single memory module just a few inches long can thus accommodate a full bank of hundreds of megabytes of memory. Memory modules are the large economy size: RAM in a bigger package to better suit the dietary needs of today's computers.

Besides the more convenient package that allows you to deftly install a number of chips in one operation, the memory module also better matches the way your computer uses memory. Unlike most chips, which are addressed at the bit level, memory modules usually operate in bytes. Whereas chip capacities are measured in kilobits and megabits, memory module capacities are measured in megabytes.

The construction of a memory module is straightforward; it's simply a second level of integration above the basic memory chip. Several chips are brought together on a small glass-epoxy circuit board with their leads soldered down to the circuit traces and the entire assembly terminated in an external connector suitable for plugging into a socket or soldering to another circuit board.

Memory modules come in a variety of types, many of which are no longer common. Single Inline Memory Modules, commonly called SIMMs, connect the contacts on opposite sides of the board together so that each pair provides a single signal contact. Single Inline Pin Package modules, or SIPPs, have a row of pins projecting from the bottom of the module instead of edge connectors. Neither of these packages is used in new computers. Several designs are in current use. Dual Inline Memory Modules, called DIMMs, put separate signals on each contact so that the contacts on each side of the board serve different purposes. Small Outline DIMMs, termed SoDIMMs, shrink the package to fit more compact computers. Rambus Inline Memory Modules, or RIMMs, basically follow the DIMM design but use Rambus instead of SDRAM memory.

Dual Inline Memory Modules
Engineers created DIMMs to accommodate the needs of Pentium computers. Earlier module designs—in particular SIMMs—had 8- or 32-bit data buses. As a consequence, a single bank of 64-bit Pentium memory required several modules. Not only were multiple modules a pain when you wanted to upgrade, they also took up valuable motherboard space.
Moreover, the many sockets they required were incompatible with the high-speed needs of modern memory systems: the more sockets and the longer the circuit traces, the more difficult it is to achieve high-frequency operation. Figure 16.2 shows a typical DIMM.

All DIMMs provide a 64-bit data bus that exactly matches the needs of Pentium-class computers. Practical module capacity now ranges from 64MB to 512MB. Larger-capacity modules often put memory chips on both sides of the circuit board. To fit even more memory on each module, one manufacturer (Kingston Technology) has developed a stacking scheme that piggybacks a physically larger chip over a smaller package, thus doubling the holding capacity of a single module.

DIMMs come in three types: those using old-technology (FPM or EDO) memory chips, those for SDR DRAM memory, and those for DDR memory. In size, the three types of modules are identical. They are 5.25 inches wide and (usually) 1 or 1.5 inches tall, although modules with unusually large memory endowments are often taller. Figure 16.3 shows the dimensions of a typical DIMM. To prevent you from sliding a DDR module into a slot meant for an SDR module, the two designs use connectors that are keyed to be incompatible.

DIMMs can be equipped with non-parity, parity, or error-correction code (ECC) memory. The packages look identical and slide into the same sockets. Old-technology memory modules use signals on several connector contacts, called presence-detect pins, to indicate these parameters. SDR and DDR modules use a more sophisticated design that is sometimes called serial presence detect. Each module is equipped with 2048 bits (256 bytes) of onboard nonvolatile memory that your computer can read and use to identify the module's memory type. After reading this information, your computer can then configure itself appropriately (a small sketch of decoding these bytes appears below).

Old Technology DIMMs
Memory modules that use old memory technology have edge connectors with 168 separate contacts arrayed on both sides of their small circuit boards. Eight of these contacts are reserved for presence-detect indications. In particular, these presence-detect signals indicate the speed rating of the module. Four of these contacts indicate the organization and addressing characteristics of the memory on the module. One pin, PD5, indicates whether the module contains FPM or EDO memory. Two, PD6 and PD7, indicate the speed rating of the chip, as indicated by the code listed in Table 16.4. (Note that the rating in nanoseconds does not correspond to today's PC66/100/133 nomenclature because these modules are not used in modern computers.) The eighth contact indicates whether the module uses parity or ECC technology. In addition to the eight presence-detect indicators, two other contacts carry module ID signals that describe the module type and refresh mode used by the chip.

SDR modules have 168 pins on their bottom edge. They are keyed with two notches, dividing the contacts into three groups with short gaps between them. The first group runs from pin 1 to pin 10; the second group from pin 11 to pin 40; and the third group from pin 41 to pin 84. Pin 85 is opposite pin 1. The notches are designed to prevent you from sliding a smaller SIMM with fewer connections into a DIMM socket: the longest space between two notches is shorter than the shortest SIMM. The asymmetrical arrangement of the notches on the DIMM prevents its inadvertent improper insertion into its socket; turn a DIMM end-for-end and it won't fit into a socket.
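Serial presence detect is simply data in a small on-module EEPROM, so identifying a module reduces to reading bytes at known offsets. The following Python sketch is illustrative only: the offset and type codes follow the common JEDEC SPD convention (byte 2 identifies the memory technology), but the exact values, names, and sample data here are assumptions rather than a definitive decoder.

# Hypothetical SPD decoder sketch; offsets and codes assumed per JEDEC convention.
MEMORY_TYPES = {
    0x01: "FPM DRAM",
    0x02: "EDO DRAM",
    0x04: "SDR SDRAM",
    0x07: "DDR SDRAM",
}

def decode_memory_type(spd: bytes) -> str:
    """Return a readable memory type from raw SPD data (byte 2)."""
    if len(spd) < 3:
        raise ValueError("SPD data too short")
    return MEMORY_TYPES.get(spd[2], "unknown (0x%02x)" % spd[2])

# Example: the first three bytes of a hypothetical SDR module's SPD EEPROM.
# Byte 0: bytes written by the maker; byte 1: log2 of EEPROM size; byte 2: type.
sample = bytes([128, 8, 0x04])
print(decode_memory_type(sample))   # -> SDR SDRAM

This is essentially the kind of lookup a computer performs at power-on, before configuring its memory controller for the installed modules.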
Some DIMMs have holes to allow you to latch the modules into their sockets, although some DIMMs lack these holes. The standard DIMM for DDR memory has a connector with 184 pins. It bears a single notch near the center of the connector. Otherwise, DDR modules look much the same as SDR modules and can be handled similarly.

Rambus Inline Memory Modules
Because of the radical interface required by Rambus memory, memory-makers redesigned individual modules to create a special Rambus Inline Memory Module (RIMM) to accommodate the technology. The basic module appears similar to a standard DIMM, with individual memory chips soldered to a printed circuit board substrate that links to a socket using a conventional edge connector. Despite the relatively narrow bus interface used by Rambus systems (16 bits), the RIMM package pushes its pin count to 184, partly because alternate pins are held at ground potential as a form of shielding to improve stability and decrease interference at the high clock frequency for which the modules are designed (800MHz). RIMMs also differ in their operating voltage. The Rambus standard calls for 2.5-volt operation, as opposed to the 3.3 volts standard with DIMMs. Future RIMMs may operate at 1.8 volts. The standard 184-pin design supports both non-parity and error-correction RIMMs, with bus widths of 16 and 18 bits, respectively. Figure 16.4 shows a RIMM package.

Small Outline DIMMs
Although convenient for desktop computers, full-size memory modules are unnecessarily large for notebook machines. Consequently, the makers of miniaturized computers trimmed the size of ordinary memory modules roughly in half to create what they called the Small Outline DIMM (SoDIMM). The first SoDIMMs were offshoots of the 72-pin SIMMs used for FPM and EDO memory. They had the same 72 contacts as full-size SIMMs but arrayed them on both sides of the module (making the module "dual inline"). In a SIMM, the contacts on opposite sides of the module's circuit board are electrically connected together. On a SoDIMM, each has a different function, putting the connection space to more efficient use and allowing the smaller size. Figure 16.5 illustrates a Small Outline Dual Inline Memory Module of this design. As you would expect, a 72-pin SoDIMM is about half the length of a 72-pin SIMM, measuring about 2.35 inches long. As with other module styles, a notch at one end of a 72-pin SoDIMM prevents you from latching it into its socket with the wrong orientation. The notch is on your left when you look at the chip side of the SoDIMM. Figure 16.6 shows the dimensions of a typical SoDIMM.

The SoDIMM package proved so compelling that it has been adapted for SDRAM modules. SDRAM SoDIMMs are slightly longer than those used by older technologies, measuring 2.67 inches (67.6 millimeters) long. As with full-size DIMMs, SDRAM SoDIMMs come in two physically incompatible styles for SDR and DDR. SDR SoDIMMs have 144 contacts on their edge connectors; DDR modules have 200. Each style of SoDIMM has a single notch, but the position of the notch differs between SDR and DDR modules. With SDR modules, the notch is only slightly offset from the center; with DDR modules, the notch is near one edge (the pin 1 side) of the module. Figure 16.7 shows the difference in the notches used by the two varieties of SDRAM modules.
I have been enjoying Adam Hochschild’s To End All Wars: A Story of Loyalty and Rebellion, 1914-1918, which covers the British role in World War I. My favorite section details how the British responded when it turned out they had a drastic shortage of binoculars, which at that time were very important for fighting the war. They turned to the world’s leading manufacturer of “precision optics,” namely Germany. The German War Office immediately supplied 8,000 to 10,000 binoculars to Britain, directly intended and designed for military use. Further orders consisted of many thousands more and the Germans told the British to examine the equipment they had been capturing, to figure out which orders they wished to place. The Germans in turn demanded rubber from the British, which was needed for their war effort. It was delivered to Germany at the Swiss border. What are the possible theories? 1. It was a two-front war, and thus the British could offer the Germans a deal, knowing part of the costs of the rubber supply would fall on the combatants at the Eastern front, or perhaps even other combatants at the Western front. 2. The deal may have appealed to commercial interests in each country. 3. Politicians may have expected to survive the war, and to have their country survive the war, and in the meantime they wanted the war for their side to go better rather than worse, for reasons of public relations or to appeal to their military lobbies. 4. The traders may have disagreed about the relative merits of what they were exchanging, as is the case on Wall Street every day.
The hard outer surface on your teeth is called the enamel. The pulp is deep inside the tooth and contains blood vessels and nerves. Bacteria can damage different parts of the tooth.

What are cavities?
Cavities are decayed parts of your tooth.
- Bacteria build up on your tooth enamel and make acids that cause holes in the tooth; the holes are called cavities
- Tooth pain happens as cavities get bigger and go through the enamel to the inside of your tooth
- Dentists find cavities by looking at your teeth and taking x-rays
- Treatment includes drilling out decay and filling the hole
- Regularly brushing your teeth, getting dental check-ups, and avoiding sugary foods can help prevent cavities

What causes cavities?
Bacteria build up on your teeth and make acid that causes decay. Bacteria, saliva (spit), and bits of food form a thin layer called plaque that clings to your teeth. Plaque hardens over time and turns into tartar. Tartar is usually yellow. You sometimes see it at the base of teeth. Bacteria living in plaque and tartar are hard to get rid of.

The bacteria thrive on sugar. That's why sugary foods and drinks can lead to cavities. The amount of sugar you eat is less important than how often you eat it. What matters is the amount of time sugar is in contact with your teeth. Sipping a sugary soft drink over an hour is more damaging than eating a candy bar in 5 minutes, even though the candy bar may contain more sugar. Babies who go to bed with a bottle, even if it contains only milk or formula, are also at risk of cavities. Bedtime bottles should contain only water.

You're more likely to get cavities if you:
- Have a lot of plaque and tartar in your mouth
- Eat and drink sugary or acidic foods, such as cola sodas or juice
- Have too little fluoride (a mineral that makes your enamel harder) in your teeth
- Don't have much saliva (spit) in your mouth (a condition called dry mouth)
- Have gums that have pulled down from the bottom of your teeth (receding gums)

What are the symptoms of cavities?
A shallow cavity in your enamel doesn't hurt. Cavities that are a little deeper may cause pain when you eat hot, cold, or sweet foods or drinks. A cavity that gets deep enough to reach the pulp causes pulpitis, a painful inflammation of the pulp. Pulpitis causes toothache even when you're not eating or drinking. If the pulp gets infected, you may develop a pocket of pus called a dental abscess.

How can dentists tell if I have cavities?
Dentists diagnose cavities by:
- Looking at your teeth and probing them with dental tools
- Taking x-rays of your teeth

How do dentists treat cavities?
If your cavity is very small and only in the enamel, the tooth can repair itself if you get enough fluoride. Dentists treat cavities that are deeper than the enamel by drilling out the decayed part of the tooth and putting in a filling.
The filling can be made of:
- Silver amalgam (a combination of silver, mercury, copper, tin, and sometimes other metals), often used in back teeth where it will be out of sight
- Composite resins, which match the color of your teeth
- Glass ionomer, which is tooth-colored and releases fluoride, good for people with a lot of tooth decay

Root canal treatment or tooth removal
When tooth decay goes deep enough to reach the pulp and the pulp becomes severely inflamed, dentists will give you pain medicine and either:
- Do a root canal to remove the pulp from your tooth and then fill and seal the tooth canal
- Take out the tooth if it can't be saved
If you have an infection, they'll also give you antibiotics.

How can I prevent cavities?
You can prevent cavities by:
- Brushing your teeth with toothpaste that contains fluoride (in the morning and evening and after eating sugary foods)
- Flossing your teeth daily
- Getting regular dental care
- Eating healthy foods and cutting back on food and drinks that have a lot of sugar or acid

It's important to get enough fluoride. Fluoride is a mineral that protects your teeth from cavities. Fluoride is added to the public water supply in some areas. If it's not in your water, the dentist may prescribe fluoride supplements for children up to age 8, or apply fluoride treatments to your teeth.

If you still get a lot of cavities, dentists may:
- Put sealants on your teeth (sealants are a hard plastic coating that prevents cavities in teeth with deep crevices)
- Have you use an antibacterial mouth rinse that helps kill cavity-causing bacteria
Level III-Art Lesson 12: Da Vinci's Clock
Learn to create a collage from shapes inspired by the parts of da Vinci's clock. The lesson is suitable for students in grade 5 through adult.

Art Lesson Description:
We often think of Leonardo da Vinci as a painter. Who hasn't heard of his painting of the Mona Lisa, The Last Supper, or the Lady with the Ermine? But da Vinci was more than a painter. In fact, he thought of himself first of all as an engineer/inventor, and invented:
- The first self-propelled "car,"
- A robot,
- War machines, and
- Several clocks.

The Academy Model Company has converted one of his plans for a clock into a working model, making it easy to study the clock's mechanics. The model makes it easy to imagine what the invention might have looked like if one were using 15th-century materials. Students will study the parts of the clock and use them as inspiration to create a collage. In the process they will:
- Collect templates,
- Collect paper with a variety of textures,
- Trace the template shapes onto the papers and cut them out, and
- Glue the shapes into a collage.

In the PowerPoint version, the collage will be finished in monochromatic colors, using paint and oil pastels. In the video version, the collage is finished with aluminum foil and color media. Students learn techniques to achieve unity in a composition. It's an exciting way to explore collage and create an original composition. This lesson includes both POWERPOINT and VIDEO versions of the lesson.

List of Supplies for Each Student:
- (This lesson has NO drawing warm-up)
- A piece of cardboard about 11" x 14"
- Acrylic paints (#ad)
- Set of brushes (#ad)
- 1 set of oil pastels (#ad)
- White glue or glue stick (#ad)
- Scissors
- A collection of geometric shapes to be used for templates
- A collection of papers with different textures to be cut for the collage
- Aluminum foil (video version)
- Colored magic markers (#ad) (optional for the video version)

Suggestions for Cross-Curricular Connections:
- Learn why da Vinci's clock was an important invention.
- Watch the model clock in operation.
- Watch a different da Vinci clock invention as it ticks.
- Da Vinci invented the first alarm clock. Read about it in "Springs and Things" in Leonardo da Vinci's Life.
- You can also watch a third da Vinci clock in operation.
- Learn about the history of timekeeping devices with this video. (You may find the opening moments controversial, but stay with it for a good summary of clock history.)
- Do an experiment with pendulums.
- Find ideas for science projects with pendulums.
- Learn about a 3-pendulum rotary harmonograph.

The Life of Da Vinci:
- Read Neo Leo: The Ageless Ideas of Leonardo da Vinci (#ad) by Gene Barretta.
- Read Amazing Leonardo da Vinci Inventions: You Can Build Yourself (Build It Yourself) (#ad).
- Read "Leonardo Da Vinci as Told to Children" (p. 19 in this UNESCO publication).
- Find several lesson plans about Da Vinci.
- Learn how Da Vinci combined art and science.
- Read some quick facts about Da Vinci.
- Find 20 things you probably didn't know about Da Vinci.
- Discover more facts about da Vinci.

History: What makes a person like da Vinci a "Renaissance Person"?

Approximate Time to Complete the Art Class:
- Collecting textures: 1-60 minutes (Take your time! This can be a lot of fun.)
- Tracing and cutting shapes: 20 minutes
- Arranging, gluing and painting the collage: 45 minutes
- Total time: 125 minutes
Step 6: Draw three small lines under the body as guides for the mouse's legs and feet. That's it for the initial sketch! From this point on, press harder with your pencil to get a more defined sketch.
Step 7: Draw the mouse's eye. Sketch in a circle above the area where the guide lines intersect. Mice have dark eyes, so shade in the circle. Draw some lines surrounding the mouse's eye for detail.
Step 8: Tighten the shape of the ears. Draw some lines throughout them for structure and smaller lines at the base to represent the mouse's fur.
Step 9: Draw the mouse's nose at the end of the guide line. It's similar to an upside-down triangle.
If you’re looking for new (or additional) ways to help students learn and use content vocabulary, we’re recommending Jeanne Halderson’s Teaching Vocabulary: Using Keynote to Create Flash-y Cards available for free in the Books app. Regardless of grade level, the language of new content can be complex and can become a barrier to learning. Teaching students terminology is important as they rework their schemas for understanding. Exposure to, practice with, and demonstration of understanding new words can also help you determine their depth of understanding in assessment. There are a number of valid and equally effective techniques for learning vocabulary, but we wanted to share this particular method because it combines several modes of communication and asks students to engage deeply with the vocabulary. Vocabulary with Keynote Each Keynote slide has a specific structure to get students to engage with vocabulary in print, imagery, and audio. Jeanne breaks down the rationale for why students complete each component and includes a number of student-created examples from her own class. The same slideshow can be used all year long so students build up a collection of vocabulary flashcards which can be used for recall practice. Read the Book Jeanne’s book is available for free in the Books app. You can open Books and search “Halderson” to find this book along with some other literacy-based books by Jeanne. If you’re on your iPad, you can use this link to download it directly. If you try this with your students, send some examples to [email protected] and we’ll feature them in an upcoming newsletter and blog post.
The Earth is just a small part of the universe, yet it is home to majestic sights and wonders (e.g. the Grand Canyon) that no one ever has or ever will fully understand. As of 2021, an estimated three to thirty million distinct species of animals live on Earth, alongside an estimated world population of 7.9 billion people. But where on the Earth's surface do these living creatures make their homes, hunt down their prey, or hide from human beings? From coastal landforms to volcanic landforms, here is a detailed look at the facts and statistics that make the world's diverse types of landforms not only unique, but fascinating, mysterious, and fun.

Table of Contents
- What is a Landform?
- How Do Scientists Categorize Landforms?
- Classes of Landforms
- What are the Four Main Types of Landforms?

What is a Landform?
A landform is a natural or artificial surface feature of a planet's solid surface that creates its terrain. Landforms can develop from tectonic plate movement taking place under the Earth's surface, glacial erosion, erosion by water and wind, folding and faulting, and volcanic activity and lava flow. Landform development manifests over a prolonged period, even millions of years.

How Do Scientists Categorize Landforms?
Landforms fall into various categories, and thereby smaller homogenous divisions, based on their distinctive physical features such as their:
• Locale (the location of the landform)
• Altitude (the height of the landform)
• Pitch (the grade or slope of the landform)
• Topography (the shape of the landform)
• Stratification (the sedimentary rock or soil layer of the landform)

Although the above landform features develop naturally (i.e. from lava, erosion, water, etc.), landforms can also develop under the influence of biological factors like algae or vegetation. To figure out what type of landform an area of terrain might be, the above distinctive physical features of the surface need consideration. Each type of landform falls into a different class of landforms, categorized by its specific physical attributes.

Classes of Landforms
Depending on their characteristics, landforms can fall into one of thirteen distinct classes including, but not limited to, the following:
- Aeolian Landforms: produced by wind activity.
- Cryogenic Landforms: produced by repeated freezing and thawing, especially of water.
- Fluvial Landforms: produced by the erosional activity of water from rivers.
- Impact Landforms: produced by collisions between astronomical objects and the Earth's surface (such as when a meteor creates a crater in the ground).
- Karst Landforms: produced by the erosion and dissolution of soluble rock layers above or below the Earth's surface.
- Tectonic Landforms: produced by any of the relief features caused by the subsidence of the Earth's surface or by ascending magmatic changes.
- Weathering Landforms: produced by the breakdown of rock, soil, or minerals as a result of contact with water.

Read on to learn more about the four major types of landforms.

What are the Four Main Types of Landforms?
Landforms include canyons, deltas, deserts, glacial landforms, and islands, but the four major types are mountains, hills, plateaus, and plains. A mountain landform is a colossal, rocky highland with a pointed or rounded top and sloping steep sides, and is the highest landform on earth. A series or chain of mountains close together is called a mountain range.
Formation of Mountain Landforms
According to most geologists, areas of the earth's land surface that rise a minimum of 1,000 feet (300 meters) above the surrounding land classify as mountains. Most mountains form when plates (pieces of the Earth's crust) collide with each other in a process called plate tectonics. The three primary types of mountains, by the way they form, are mountains of accumulation (formed by volcanic eruptions, such as from shield volcanoes or stratovolcanoes), folded mountains (formed by plate tectonics), and mountains of erosion (carved by weather).

Mountain Landform Animals
Did you know that mountain goats are not goats, but antelopes? They can climb higher than people and up the steep slope of a mountain because of their distinctive cloven hooves, with two padded toes that can spread wide to increase their balance and significantly improve their grip. The higher one travels on a mountain, the scarcer the oxygen, the colder the temperature, and the starker the sun, all of which substantially change animal habitats. Environmental conditions cannot support plant life past the mountain's tree line, the point on a mountain where trees cease to grow. As a result, many animals make their homes at lower altitudes; only a select and hardy few can live twelve months a year above the tree line. Mountain landform animals include:
• Brown Bear
• Nubian Ibex
• Snow Leopard

Mountain Landform Facts and Statistics
Not only do the Himalayas have thirty of the world's highest mountains, but they began forming through plate tectonics over fifty-five million years ago. Here is a list of fun facts and statistics about mountain landforms:
• Even though it looks like a floating glacier in the clouds, Mount Everest is the highest mountain on earth at 29,035 feet (about 5.5 miles) above sea level.
• The planet Mars has ten of the tallest mountains in the solar system, including Olympus Mons, which is 15.5 miles tall.
• Mountaineering became more popular after World War II due to the unique preparation and equipment the Europeans developed for their soldiers navigating the Alps during the war.
• Before today's technological age, geographers measured mountains through a process called triangulation, which consisted of measuring the mountain peak from various observation spots.
• The mid-ocean ridge is the most expansive mountain chain on Earth and runs 40,390 miles, 90% of it in the deep ocean.

Examples of Mountain Landforms
Examples of mountain landforms around the world include, but are not limited to:
• Aspen Mountain—Rocky Mountain Range/Colorado: 10,705 ft.
• Mount Everest—Himalayan Mountain Range/Nepal/China: 29,029 ft.
• Mount Fuji—Fuji Volcanic Zone: 12,389 ft.

Hills have many similarities with mountains, but they are not as steep or as tall; their elevation typically falls under 3,000 feet. However, like mountains, hills have a higher elevation than the surrounding land and have sloping, steep sides.

Formation of Hill Landforms
Hill landforms can develop in a number of ways. One is through melting glaciers, which tend to dig up ground as they shift. Another is via water currents, which deposit material that remains as hills when the water dries up. Finally, hills can also form from erosion and deposits, as well as from wind and rain.

Hill Landform Animals
Many animals that live on hill landforms also live on mountain landforms.
However, some other animals include:
• White-tailed Deer
• A variety of birds
Hill Landform Facts and Statistics
While a hill supplies an ideal place to take in a pleasant view, sometimes the pleasant view can be found in the hills themselves. For example, Mount Rushmore makes its home in South Dakota's Black Hills, while there are hills in the Philippines, called the Chocolate Hills, that look like Hershey's Kisses.
Examples of Hill Landforms
As hills are generally defined by an elevation of no more than 3,000 feet, here is a list of three hills around the world and their elevations:
• Britton Hill—Florida, U.S./345 ft.
• Pen Hill—Somerset, England/1,001 ft.
• Cavanal Hill—Oklahoma, U.S./2,385 ft.
Plateaus are distinctive landforms: high plains (also known as tablelands), or areas of highland consisting of flat terrain raised significantly above the surrounding area. There are two types of plateaus, depending on how they formed: volcanic plateaus and dissected plateaus.
Formation of Plateau Landforms
A variety of different processes can influence the development of plateaus, such as the flow of volcanic magma, the discharge of lava, and erosion by water and glaciers. Volcanic plateaus form through volcanic eruptions and lava flows, while dissected plateaus form when the Earth's crust is uplifted and then carved by erosion. The raised, flat highlands of plateaus take millions of years to develop and can spread over hundreds or even thousands of kilometers.
Plateau Landform Animals
Animals found on plateau landforms are often indigenous to the terrain. Here is a brief list of animals found on plateau landforms:
• Himalayan Marmot
• Plateau Pika
• Siberian Roe Deer
• Tibetan Gazelle
• Tibetan Antelope
• Wild Yak
• Chinese Serow
• Himalayan Blue Sheep
• Chinese Mountain Cat
• Mountain Weasel
• Eurasian Badger
• Chinese Zokor
• Wild Boar
• Siberian Chipmunk
• Golden Snub-nosed Monkey
Plateau Landform Facts and Statistics
Did you know a plateau landform in South America inspired Arthur Conan Doyle's The Lost World? Despite its harsh environment, Monte Roraima, a sandstone plateau that sits on the border of Brazil and Venezuela, is home to many plants found nowhere else; about one-third of its plant species are indigenous to the mountain. Read on and learn more interesting plateau landform facts and statistics.
• "Table Mountain" is South Africa's most famous landmark and most photographed attraction, and millions of visitors every year reach its top by cable car.
• The highest African plateau, found in Ethiopia—the Ethiopian Highlands—forms the largest continuous area of its altitude on the continent, giving it its nickname: "The Roof of Africa."
Examples of Plateau Landforms
"The Roof of the World," the Colorado Plateau, and the "Polar Plateau" are just three of the world's most famous plateau landforms on the Earth's surface. Here is a brief summary of their location and approximate size.
• Tibetan Plateau ("The Roof of the World")—Southwestern China/area of 970,000 square miles.
• Colorado Plateau—Colorado, U.S./area of 130,000 square miles.
• Antarctic Plateau ("Polar Plateau")—Antarctica/area of 5.4 million square miles.
Plains simply consist of large areas of flat, sweeping landmass. Often covered in grass and low in elevation, plains are one of the Earth's four major landforms. This type of landform is more suitable for civilization and agricultural purposes, such as farming, than plateaus, mountains, or hills.
Many different types of plain landforms exist, including erosional plains, depositional plains, lacustrine plains, and abyssal plains. But before land can be identified as a plain landform, it must show the following attributes:
• It must be flat, broad, or slightly rolling land
• It must have low elevation compared to the surrounding land on the Earth's surface
Formation of Plain Landforms
Plain landforms develop in a variety of ways. Sometimes they form from layers of sediment carried down from hills and mountains by ice, water, and wind and deposited in an area over time. Other times, plains develop when the lava from a volcano flows across the Earth's surface.
Plain Landform Animals
Thousands upon thousands of species of animals live on plain landforms around the world. In North America, these include such creatures as the fox, pronghorn, bison, black-footed ferret, antelope, and grassland birds. In Africa, zebras, elephants, lions, and rhinos roam the plains. In Antarctica, emperor penguins, seals, and seabirds call the tundra "home."
Plain Landform Facts and Statistics
Plains exist on every continent and cover more than one-third of the world's landmass. Continue reading to dive in and discover more facts and statistics about plain landforms.
• The word "savannah" comes from a Native American (Taino) word for a grassy plain.
• Plains surround the majority of the world's rivers and smaller bodies of water. A coastal plain lies next to the ocean.
• In North America, some plain landforms, also called "grasslands," are referred to as "prairies." These plains typically have frigid winters and hot summers.
• A tropical grassland plain is a "savannah," and is typically warm all year long.
• Tundras are plains in the Arctic.
Examples of Plain Landforms
As every continent has plain landforms, here are examples from each of them:
• North America: The Great Plains (U.S./Canada)
• South America: The Pampas (Uruguay/Argentina/Brazil)
• Europe: The Po River Plains (Northern Italy)
• Asia: The Eurasian Steppe (Hungary to China)
• Australia: The Central Lowlands (Australia)
• Antarctica: The Antarctic Tundra (Antarctica)
• Africa: The Bushveld (Africa)
A team of researchers has developed a method of manipulating debris orbiting Earth with rotating magnets, offering a way to remove space junk. According to the US space agency NASA, there are more than 27,000 pieces of space debris larger than a softball orbiting the Earth, moving at speeds of up to about 28,163 km/h. Even a piece of debris that small is enough to damage a satellite or spacecraft, according to an October 29 report on Phys.org. Professor of mechanical engineering Jake J. Abbott of the University of Utah (USA) led the team that discovered the method. The team's research was published in the scientific journal Nature. The method moves non-magnetic metal objects through space using rotating magnets. According to Abbott, when metal debris is exposed to a changing magnetic field, electric currents (eddy currents) circulate within the metal in loops. Essentially, the process turns the debris into an electromagnet that generates torque and force, which in turn allows its path to be controlled without actually grabbing it. Although the idea of using electromagnetic induction to control objects in space is not new, what Professor Abbott's team discovered is the use of multiple magnetic field sources in a coordinated manner. This allows them to move objects in six degrees of freedom, including rotation. "What we want to do is not just push objects away, but actually control them like on Earth," Abbott said. In addition, this method allows scientists to manipulate particularly fragile objects. While a robotic arm can damage an object by exerting force on only part of it, these magnets apply a gentler force across the entire object, so no damage is caused. Abbott believes this principle of using magnets to manipulate non-magnetic metal objects could have uses beyond cleaning up debris in orbit. With this new knowledge, scientists could stop a damaged satellite from spinning through space in order to repair it, something they were not able to do before.
Developmental delay, or maturational delay, refers to a delay in two or more areas of a child's development, such as language, fine or gross motor skills, personal and social development, and adaptive behaviour. Developmental delay is a diagnosis applied to children under the age of 5. There should be an obvious delay in skills relative to the expected abilities for the child's age. The delay may have several causes (genetic, birth complications, infections, etc.), but it may also be of unknown aetiology. We have to remember that during their development, children may also experience delays in a single area, such as language. It is important to differentiate these specific delays from a more global and generalised delay across most developmental milestones.
WHAT CHARACTERISTICS CAN WE OBSERVE IN DEVELOPMENTAL DELAY?
- There are no characteristic or abnormal signs. Developmental milestones can appear in the expected order, albeit more slowly. The child's behaviour is more similar to that of a younger child than to that corresponding to their chronological age.
- They find it difficult to form concepts, establish categories, make classifications and create relationships between objects or facts.
- The level of motivation towards an activity can be affected by insufficient verbal comprehension, by difficulties with attention or by the degree of difficulty of the proposed task itself.
- These children usually present delays in their strategies for reorganising and understanding the demands of the social environment, maintaining and managing attention, organising information and adapting their behaviour.
With early and adequate stimulation, many children adopt developmental patterns and a rhythm similar to children of their own age. However, in other cases the developmental delay persists, leading to intellectual disability in the future. During the first sessions of child physiotherapy, we examine the child and collect all the data needed to set up a personalised programme, including an interview with the family to identify the relevant factors for establishing the most appropriate plan for the child's recovery. Music therapy is one of the treatment options available at Guttmann Barcelona, for both adult and paediatric patients affected by neurological injuries or diseases at different stages of the rehabilitation process.
In recent years, my classroom has been impacted by the lessons learned from the technology classes I have had the opportunity to attend. This past summer, Beverly Burks gave us a quick overview of Makerspace. My curiosity about this concept has led to an area in my classroom where the students are now designing and creating projects. I feel like Makerspace can prepare students for the real world and maybe introduce them to their future career. One such example from my classroom is a young boy, Keaton, who built a rocket during Makerspace rotation. He loved making the rocket, but now he wants to take it apart and redesign it so he can improve it. The skills that he is using and developing are critical thinking and problem solving. Can you imagine if all students were able to experience this excitement and joy of learning? I propose that Riverside Applied Learning Center start a Makerspace room and be a pilot for the Northeast Fort Worth ISD elementary schools.
I. Preparing (November 2018-January 2019) A. Send the proposal to the principal and others to share how Makerspace can excite the students B. Start talking to teachers, through email blasts and faculty meetings, about the skills that will be used in the Makerspace room 1. Critical/Logical Thinking 2. Designing and Creating 3. Decomposition 4. Computational Thinking 5. Growth Mindset 6. Engineering 7. Coding 8. Science 9. Math 10. Technology 11. Writing C. Start writing grants (and DonorsChoose projects) to get electronics and furniture 1. Ozobots 2. Micro:bits 3. littleBits 4. Legos 5. iPads 6. Containers 7. Tables 8. Bean Bag Chairs and Stools 9. Dash and Dot 10. TeacherGeek Maker Cart 11. Code-a-pillar 12. Bee-Bots
II. Planning (February 2019-May 2019) A. Finding a place/room B. Will a teacher be in the room at all times? Or will this be a room that teachers would sign up to use? C. Teachers will do different activities in faculty meetings/PLCs working with the skills that will be used in the Makerspace room 1. Critical/Logical Thinking 2. Designing and Creating 3. Decomposition 4. Computational Thinking 5. Growth Mindset 6. Engineering 7. Coding 8. Science 9. Math 10. Technology D. Talking to The Welman Project (free resources for educators and non-profit organizations) to look for these things: 1. Containers 2. Tables 3. Cabinets 4. Arts and Craft Materials 5. Tools E. Set up a committee for implementation 1. Administration 2. Teachers 3. Parents 4. Community 5. Students F. Professional development (with Kohn or other experts from the Academic Educational Technology Department) 1. Engineering Design Process 2. Computational Thinking 3. Coding G. Visit other Makerspace rooms
III. Implementation (August 2019-December 2019) A. Set up the committee 1. Administration 2. Teachers 3. Parents 4. Community 5. Students B. Have after-school and Saturday workdays to set up the room C. Ask for donations from the parents 1. Recycled Materials 2. Fabric 3. Plastic 4. Paper D. Have a reveal party in December 2019 or January 2020
Do you want to join me in having the students create and design in a Makerspace room?
Rheumatoid arthritis is an autoimmune disease characterized by pain, swelling and deformity of the joints due to prolonged inflammation, potentially even leading to disability. It can affect any joint, although it is most common in the wrists and hands. In some people, the condition can damage other systems such as the skin, eyes, lungs or blood vessels. The causes of rheumatoid arthritis are unknown, although a complex contribution of genetics together with environmental factors, such as certain viral infections or hormones, has been proposed. Even the intestinal flora has been implicated in its development. The prevalence of rheumatoid arthritis is about 1% in the general population, and it is more frequent in women than in men. Among the non-genetic risk factors that may contribute to the development of the disease are:
- Age: it can appear at any age, but the incidence increases with age.
- Sex: it is 2-3 times more frequent in women than in men.
- Smoking: smokers have a higher risk of developing the disease.
Rheumatoid arthritis (RA) is a disease of complex etiology for which twin studies estimate a 60% genetic contribution. An association study conducted with about 23,000 cases and more than 300,000 controls identified more than 70 loci significantly associated with the disease. The RA variants were preferentially located in binding sites of various transcription factors related to CD4+ T cell biology and, to a lesser extent, to other cells of the immune system. Rheumatoid arthritis causes pain, stiffness, swelling and loss of joint mobility, especially in the hands, feet, wrists, shoulders, elbows, hips and knees. Occasionally it can cause fatigue, fever and loss of appetite. These symptoms may be constant or come and go, with active periods (flares) and others of relative remission. About 40% of affected individuals have symptoms involving other systems such as the skin, eyes, lungs and kidneys. Rheumatoid arthritis is the most disabling of the rheumatic diseases. Because it is a disease of unknown origin, there is no sure way to prevent rheumatoid arthritis; however, some actions could help in managing the condition:
- Lead a healthy lifestyle, with moderate physical activity and healthy eating.
- Avoid activities that require heavy physical effort, such as standing for long hours, and avoid bending the back and neck for hours at a time.
- Reduce body weight in case of overweight.
- Avoid consumption of toxic substances such as tobacco.
References:
Ha E, Bae SC, Kim K. Large-scale meta-analysis across East Asian and European populations updated genetic architecture and variant-driven biology of rheumatoid arthritis, identifying 11 novel susceptibility loci. Annals of the Rheumatic Diseases. 2021 May;80(5):558-565.
Giannini D, Antonucci M, Petrelli F, Bilia S, Alunno A, Puxeddu I. One year in review 2020: pathogenesis of rheumatoid arthritis. Clin Exp Rheumatol. 2020 May-Jun;38(3):387-397.
Smolen JS, Aletaha D, Barton A, Burmester GR, Emery P, Firestein GS, Kavanaugh A, McInnes IB, Solomon DH, Strand V, Yamamoto K. Rheumatoid arthritis. Nat Rev Dis Primers. 2018 Feb 8;4:18001.
Pneumonia is inflammation and infection of the lungs, causing difficulty breathing, cough and chest pain. Pneumonia can affect one or both lungs, and there are numerous forms of the disease. The most widespread causes of pneumonia are pulmonary infection with viruses (influenza, herpes simplex virus, varicella-zoster, adenovirus, respiratory syncytial virus), gram-positive bacteria (Streptococcus pneumoniae, Staphylococcus aureus, Streptococcus pyogenes) and gram-negative bacteria (Haemophilus influenzae, Klebsiella pneumoniae, Neisseria meningitidis, Pseudomonas aeruginosa). Pneumonia can also be caused by infection with mycoplasmas (Mycoplasma pneumoniae), small infectious agents that share the characteristics of both viruses and bacteria. When pneumonia is caused by infection with viruses, the disease is normally much less severe and generates milder symptoms. The symptoms of viral pneumonia resemble those of flu or cold: cough, headache, difficulty breathing, nausea, muscle and chest discomfort. Most people with viral forms of pneumonia do not need any medical treatment, as the disease clears on its own within a couple of weeks. If the symptoms intensify, it is a sign of complication, and medical intervention is needed. Unlike viral pneumonia, bacterial forms of the illness are far more severe and produce intense symptoms: shortness of breath, pronounced difficulty breathing, dizziness, chills, sweating, high fever. When pneumonia is caused by infection with bacteria, specific medical treatment with antibiotics is required to overcome the illness. The illness also needs to be discovered in time, so as to prevent the development of complications. Although the forms of pneumonia caused by infection with mycoplasmas are generally not serious, the presence of these microorganisms in the body is more difficult to detect, and as a result the disease may be revealed late. Unlike viral and bacterial pneumonia, mycoplasma forms of pneumonia develop slowly and generate symptoms that do not always point to pneumonia. Pneumonia is contagious: the infectious agents responsible for causing the illness are airborne and can easily be acquired through breathing. In spite of all the natural defenses of the respiratory system (nostril hairs, mucus, cilia), some microorganisms are still able to reach the lungs, causing inflammation and infection. As soon as they break through the body's natural defenses, irritants, viruses and bacteria quickly spread inside the alveoli, causing severe damage to the lungs. Pneumonia can develop in anyone, at any age. However, elderly people and very young children are the most exposed to developing pneumonia. People with weak immune systems, chronic obstructive pulmonary disease or internal dysfunctions (cirrhosis, kidney complications), people who have undergone prolonged chemotherapy and people who have had surgical interventions are also very susceptible to developing pneumonia. Statistics reveal that more than 3 million people in the United States are diagnosed with pneumonia every year. Viral forms of pneumonia are common in young children and elderly people, while adults typically develop bacterial forms of the disease.
Study results also indicate that around 200,000 individuals are diagnosed with bacterial forms of pneumonia each year, and about five percent of hospitalized patients eventually die as a consequence of complications. Pneumonia is a serious illness and needs special attention. When suffering from severe forms of pneumonia, it is essential to follow an appropriate medical treatment in order to fully overcome the illness.
What is Down syndrome? Down syndrome is a disorder of our genetic material and is currently the most common cause of intellectual disability. Down syndrome occurs in around one in 1100 births in Australia. Usually, we humans have 23 pairs of chromosomes, the threadlike structures containing our DNA and genes. This number and arrangement of chromosomes determine a person's development. In Down syndrome, instead of having 2 copies (the pair) of chromosome 21, there is an extra copy, making 3. For this reason, the condition is also called Trisomy 21. This extra copy of chromosome 21 is present in the cells of the body and is responsible for the typical features of Down syndrome. Children with Down syndrome are born with a characteristic physical appearance. Some of the features include upward slanted eyes, a flat nose on a round head and abnormalities on the palms of the hands and the soles of the feet. Adults and children with Down syndrome are commonly smaller than other people of their age. In people with Down syndrome, there is usually some delay in development and a degree of intellectual and learning disability. Language and memory are usually affected. The severity varies from person to person. People with Down syndrome are more likely to have other birth defects, such as in the heart or bowel. Down syndrome is also often associated with health problems such as hearing and vision problems. The life expectancy for a person with Down syndrome is about 50 to 60 years and continues to increase due to advances in medicine and community resources. There are several long-term health issues associated with Down syndrome. These can include hypothyroidism (an under-active thyroid gland), obstructive sleep apnoea, leukaemia, early-onset dementia, epilepsy and depression in adulthood. There is no evidence for any therapy that can prevent leukaemia, dementia or epilepsy; parents and doctors should simply be aware of the increased likelihood of these conditions and look for early warning signs. We do not know what causes Down syndrome, but we do know how it happens. It is most often due to abnormal cell division in the egg cell or sperm cell before they fuse to create an embryo. While older women are at higher risk, women of any age can have a baby with Down syndrome. The risk increases from age 35 years and continues to increase as a woman gets older.
Prenatal tests for Down syndrome
Prenatal tests for Down syndrome are offered to all pregnant women, but not all women choose to have them. Screening tests identify those at higher risk of having a baby affected by Down syndrome. There are also tests that can be done in the early stages of pregnancy to find out whether the baby has Down syndrome; these are called diagnostic tests. It's important to be fully informed before having any of these tests, as you may be confronted with a decision if the tests come back positive. Your doctor will help guide you on what's best for you and your pregnancy. Screening tests give you a percentage chance of whether you are carrying a baby with Down syndrome. Screening tests are not perfect; they may give a false-positive result (indicating the baby has Down syndrome when it doesn't), and they may not identify a baby which does have Down syndrome (a false-negative). But many women choose to have screening tests because, unlike diagnostic tests (see below), they don't carry the small risk of causing a miscarriage.
There are 3 types of screening tests:
Combined first trimester screening (CFTS) – This test combines blood test results, your age and an ultrasound of the baby's neck (a nuchal translucency scan, done at 11-13 weeks) to calculate the risk of your baby having Down syndrome. There is a partial Medicare rebate for the blood test, but there is an additional cost for the nuchal translucency ultrasound.
Second trimester maternal serum screening – This test combines a blood test with your weight, your age and the gestational age (age of the pregnancy in weeks) to estimate the Down syndrome risk. This test is not as accurate as CFTS and is usually only recommended for women who did not have testing earlier in the pregnancy.
Non-invasive prenatal testing or screening (NIPT or NIPS) (also known as cell-free DNA/cfDNA testing) – This test, done from 10 weeks, involves a blood test that examines DNA from the baby that has entered the mother's bloodstream. It's the most accurate screening test currently available. Currently, NIPT is only available through private clinics and is covered neither by Medicare nor by private health insurance. This test is often referred to by brand names, including the Harmony test or Precept test.
If your screening results come back in the high-risk category, you will be offered diagnostic testing. Some women may choose not to have diagnostic tests, and others may choose to go straight to diagnostic testing, skipping screening tests altogether. Bear in mind that most people who get a high-risk result from screening tests have a normal result from their diagnostic test. Diagnostic tests for Down syndrome can tell you whether your baby has Down syndrome; they are 99.9% accurate. But unlike screening tests, they carry a small risk of miscarriage (up to 1%). The diagnostic tests include chorionic villus sampling (CVS) and amniocentesis. Both these tests involve a procedure where a needle is inserted into your abdomen (tummy) to take a sample of fetal cells (cells from the developing baby). The cells are taken from either the placenta or the amniotic fluid (the fluid that surrounds the baby). CVS is performed from 10-13 weeks and amniocentesis from 15 weeks. Remember, the decision to have any or all of these tests is yours. Talk to your general practitioner (GP), midwife or obstetrician for more information on these tests.
Tests and diagnosis in newborn babies
If a baby is born with physical features that are characteristic of Down syndrome, a blood test from the baby can confirm the diagnosis. The blood test is called a chromosomal karyotype.
Treatment of Down syndrome
Treatment depends on your particular needs and stage of life. Treatments are usually started early in childhood and tailored to the needs of your child. Most children with Down syndrome see several different doctors and specialists (including a paediatrician, a specialist in children's health). Various other healthcare professionals are also usually involved in treatment, including:
- speech pathologists;
- occupational therapists;
- psychologists; and
- audiologists (specialists in hearing care).
Your GP can help to coordinate all the teams involved in your child's care. An educational plan made for your child can assist in the transition to becoming an independent adult.
Caring for someone with Down syndrome
Caring for someone with Down syndrome can be very draining emotionally and physically. Parents are encouraged to seek support from their GP.
Support groups and workshops can connect you to other parents raising a child with Down syndrome. Additionally, there are lots of online resources that answer common questions parents have about their child with Down syndrome. State-run Down syndrome websites have excellent information under their services and resources tabs. The National Disability Insurance Scheme (NDIS) is Australia's first national support scheme for people with disability and is available in all states except Western Australia, where it is still rolling out. There is also an Early Childhood Early Intervention (ECEI) scheme that aims to provide increased support for children with developmental delay. Go to https://www.ndis.gov.au/ to see if you can access these schemes. People with Down syndrome are individuals and have goals just like anyone else. They strive to have a meaningful job, a close circle of friends, a place to live and lasting relationships. It may be harder to reach these goals, but with support from community and family they are much more achievable.
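A short numerical aside on the screening statistics above: the claim that most high-risk screening results turn out to be false alarms follows directly from Bayes' rule when the condition is rare. The sketch below (in Python) uses the one-in-1100 birth rate quoted earlier, plus an assumed 90% detection rate and 5% false-positive rate; these last two are illustrative figures of the kind often quoted for combined first trimester screening, not values taken from this article.

# Rough positive-predictive-value calculation for a rare-condition screen.
prevalence = 1 / 1100       # Down syndrome birth rate quoted above
sensitivity = 0.90          # assumed detection rate (illustrative)
false_positive_rate = 0.05  # assumed rate among unaffected pregnancies (illustrative)

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate

# Probability the baby is affected, given a high-risk screening result:
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a high-risk result is a true positive: {ppv:.1%}")  # ~1.6%

Under these assumptions, roughly 98 out of 100 high-risk screening results would be false alarms at this prevalence, which is why a high-risk screen leads to a diagnostic test rather than straight to a diagnosis.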
Clinically Relevant Anatomy
The ankle joint complex can be divided into three parts: the talocrural, talocalcaneonavicular and subtalar parts. The talocrural (TC) joint is formed by three bones and a complex ligamentous apparatus. The tibia, fibula and talus are connected by the collateral ligaments and the syndesmotic ligament complex.
Ankle Joint
Bones
The ankle is formed by three bones: the talus, tibia and fibula. The anatomical structure of the foot consists of the hindfoot, midfoot and forefoot, and each part of the foot is composed of several bones. Together, the lower leg and foot constitute the ankle region. The following bony elements of the ankle joint are part of this structure. The talocrural joint (TC, sometimes called the tibiotalar joint) is referred to as the ankle joint. The articulating surfaces are the lateral and the medial malleoli, the distal end of the tibia and the talus. The primary movements of the TC joint are dorsiflexion and plantarflexion in the sagittal plane. In the hindfoot, the talus and calcaneus articulate and form the subtalar joint (ST, also known as the talocalcaneal joint). The ST joint has three articulations, and the talus and calcaneus both have three articulating facets. The main motions at this joint are inversion and eversion of the ankle and hindfoot. The Chopart joint (also called the midtarsal, transverse tarsal or talocalcaneonavicular joint) is the "junction" between the hindfoot and midfoot. This joint includes the talonavicular and calcaneocuboid joints, and allows forefoot rotation. The navicular articulates with all three cuneiform bones distally. In addition to the navicular and cuneiform bones, the cuboid bone has a distal articulation with the base of the fourth and fifth metatarsal bones.
Ligaments
The ligaments of the ankle consist of:
- The medial ligaments (the deltoid ligament) - the medial collateral ligament (MCL) is divided into two layers: superficial and deep
- The lateral ligaments - the lateral collateral ligament complex (LCL) is composed of three ligaments: the anterior talofibular, calcaneofibular, and posterior talofibular ligaments
- The tibiofibular syndesmosis - i.e. the ligaments connecting the distal epiphyses of the tibia and fibula; the syndesmotic articulation includes the anteroinferior tibiofibular ligament, the posteroinferior tibiofibular ligament, and the interosseous tibiofibular ligament
You can read more about ankle ligaments here.
Muscles
The lower leg muscles are divided into four compartments: the superficial posterior compartment, the deep posterior compartment, the lateral compartment, and the anterior compartment. The primary plantarflexors of the ankle are located in the posterior compartment. Muscles of the lateral compartment plantarflex the ankle and evert the foot. All the muscles within the anterior compartment perform ankle dorsiflexion. More information on the muscles and fascia of the ankle can be found here.
Neural and Vascular
Neural
The tibial nerve and common peroneal nerve (also known as the common fibular nerve) originate at L5, S1 and S2. The tibial nerve provides motor fibres to gastrocnemius, soleus, tibialis posterior, flexor digitorum longus, and flexor hallucis longus.
Its sensory fibres occasionally supply the area typically innervated by the deep peroneal nerve. The superficial branch of the common peroneal nerve sends motor fibres to peroneus (fibularis) longus and brevis. The deep branch sends motor fibres to tibialis anterior, extensor digitorum longus, extensor hallucis longus and extensor digitorum brevis (which is rarely innervated by the tibial nerve). The sensory fibres of the superficial branch supply the anterolateral part of the leg and much of the dorsum of the foot and toes. The deep branch supplies the skin between the first and second toes. The sural nerve originates from the tibial nerve and from cutaneous branches of the common peroneal nerve: the sural communicating nerve and the lateral sural cutaneous nerve. Its sensory fibres innervate the posterior aspect of the distal leg and the lateral aspect of the foot.
Vascular
The following arteries supply the distal aspect of the lower leg:
- Popliteal artery: the superficial posterior compartment, including the gastrocnemius, soleus and plantaris muscles.
- Tibial artery
- Anterior tibial artery: proximal tibiofibular joint, knee joint, ankle joint, muscles and skin of the anterior compartment of the leg.
- Posterior tibial artery: soleus, popliteus, flexor hallucis longus, flexor digitorum longus and tibialis posterior.
- Fibular artery: popliteus, soleus, tibialis posterior, and flexor hallucis longus muscles.
- Sural artery: gastrocnemius, soleus and plantaris muscles.
You can read more about the neural and vascular systems of the ankle here.
Classifications of Fractures
Clinically relevant classifications group ankle fractures as malleolar (with subcategories according to the Danis–Weber ABC classification) or distal tibia fractures. An ankle fracture is considered unstable if any of the following is present:
- Any ankle fracture-dislocation
- Any bimalleolar or trimalleolar ankle fracture
- Any lateral malleolar fracture with a significant talar shift on any plain radiograph view at any time
An ankle fracture is considered stable if none of the above criteria are met. The AO/OTA system classifies fractures for the entire body. In the ankle, fractures are divided into malleolar, distal tibia, and fibular fractures. This system of classification is most commonly used to classify malleolar fractures, and is based on the severity and complexity of the injury:
- Type A: infrasyndesmotic fibular injury (with three subgroups)
- Type B: transsyndesmotic fibular fracture (with three subgroups)
- Type C: suprasyndesmotic injury (with three subgroups)
A revised version of the AO/OTA classification separates epiphyseal, metaphyseal, and diaphyseal fractures. When multiple fractures and fracture systems occur, several labels can be applied. You can learn more about the AO/OTA classification here.
Clinical Presentation
Patients present to the emergency room with ankle fractures due to falls, inversion injuries, sports-related injuries, or minor trauma associated with diabetes, peripheral neuropathy and other medical conditions. The most common symptoms when an ankle fracture is suspected include:
- Swelling of the ankle
- Inability to weight bear
Diagnostic Procedures
You can read about ankle investigations and tests here.
Outcome Measures
A wide variety of outcome measures are available for use in adults with ankle fractures:
- Olerud Molander Ankle Score (OMAS) - the most frequently collected primary outcome measure
- Measurements of ankle range of motion - the second most frequently collected primary outcome measure
- American Orthopaedic Foot and Ankle Society Ankle Hind-Foot Scale (AOFAS)
- The Lower Extremity Functional Scale (LEFS)
- The 36-item Short-Form Survey (SF-36)
- Visual Analogue Scale for pain (VAS-Pain)
- Manchester Oxford Foot Questionnaire (MOXFQ)
- American Academy of Orthopaedic Surgeons Foot and Ankle Outcomes Score (AAOS)
- Ankle Fracture Outcome of Rehabilitation Measure (A-FORM)
- Foot and Ankle Ability Measure (FAAM)
Adequate patient-reported outcome measures for ankle instability reported by Hansen et al include:
- Identification of Functional Ankle Instability (IdFAI)
- Ankle Instability Instrument (AII)
- European Foot and Ankle Society Score (EFAS score)
Management / Interventions
General Considerations
In order to choose the most appropriate intervention after an ankle fracture, the physiotherapist must consider the following:
- The presence of a "variety" of protocols, with a lack of conclusive recommendations
- Two trends in the literature:
  - The traditional protocol includes incremental weight bearing after 6 weeks, with full weight bearing at 12 weeks, based on the mechanism of injury and the involvement of other soft tissues.
  - The early mobilisation protocol includes early weight bearing (before 6 weeks), early exercises, general conditioning, orthoses, and manual therapy.
- The OUTCOMES of an early mobilisation protocol, compared with the traditional protocol, include:
  - No more complications than the traditional protocol
  - Earlier return to work than the traditional protocol
  - Decreased risk of thromboembolism and osteoporosis
  - Decreased risk of Complex Regional Pain Syndrome (CRPS)
  - Improved general well-being and social re-integration of the patient
  - Decreased socio-economic costs
Early Mobilisation Protocol
Early phase (3-6 weeks post-surgery)
- Desensitisation of CRPS: brushing, mirror therapy
- General conditioning in non-weight bearing (NWB) and partial weight bearing (PWB): arm ergometer, stationary cycling, Pilates on reformer, circuit training gym
- Prepare for full weight-bearing (FWB) gait at 6 weeks
Full weight bearing: PWB - FWB (4-6 weeks post-surgery)
- Functional rehabilitation:
  - Cardiovascular fitness: cycling (spinning), swimming (no kicks)
  - Strengthening exercises with less than 50% of body weight
  - Proprioceptive exercises (Balance Error Scoring System (BESS) with crutches)
Full weight bearing (weeks 6-8 post-surgery)
- Gait (cycle to warm up, then walk)
- Decline squats
- Proprioceptive exercises: tandem standing lunge with twists, perturbation exercises (pull at stable base)
Full weight bearing (weeks 8-10 post-surgery)
- Walk with resistance bands and weights
- Step-ups and step-overs
- One-leg balance/aeroplane
- Star Excursion Balance Test (SEBT)
Week 10 to final phase
- Walking endurance with load (walking to work and back with a backpack)
- Jumping and landing (indoor climbing wall)
- Total time since accident - 14 weeks: hiking, post-traumatic stress therapy
- Orthoses: hiking boots, inserts, compression sleeve
for lower legs
Resources
- Henkelmann R, Schneider S, Müller D, Gahr R, Josten C, Böhme J. Outcome of patients after lower limb fracture with partial weight-bearing postoperatively treated with or without anti-gravity treadmill (alter G®) during six weeks of rehabilitation - a protocol of a prospective randomized trial. BMC Musculoskelet Disord. 2017 Mar 14;18(1):104.
- Matthews PA, Scammell BE, Ali A, Coughlin T, Nightingale J, Khan T, Ollivere BJ. Early motion and directed exercise (EMADE) versus usual care post ankle fracture fixation: study protocol for a pragmatic randomised controlled trial. Trials. 2018 May 31;19(1):304.
References
- Pflüger P, Braun KF, Mair O, Kirchhoff C, Biberthaler P, Crönlein M. Current management of trimalleolar ankle fractures. EFORT Open Reviews. 2021 Aug 10;6(8):692-703.
- Brockett CL, Chapman GJ. Biomechanics of the ankle. Orthopaedics and Trauma. 2016 Jun 1;30(3):232-8.
- Ficke J, Byerly DW. Anatomy, Bony Pelvis and Lower Limb, Foot. [Updated 2021 Aug 11]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2021 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK546698/
- Golanó P, Vega J, de Leeuw PA, Malagelada F, Manzanares MC, Götzens V, van Dijk CN. Anatomy of the ankle ligaments: a pictorial essay. Knee Surg Sports Traumatol Arthrosc. 2010 May;18(5):557-69.
- Milner CE, Soames RW. Anatomy of the collateral ligaments of the human ankle joint. Foot Ankle Int. 1998 Nov;19(11):757-60.
- Szaro P, Ghali Gataa K, Polaczek M, et al. The double fascicular variations of the anterior talofibular ligament and the calcaneofibular ligament correlate with interconnections between lateral ankle structures revealed on magnetic resonance imaging. Sci Rep 2020;10:20801.
- Yammine K, Jalloul M, Assi C. Distal tibiofibular syndesmosis: a meta-analysis of cadaveric studies. Morphologie, 2021.
- Yamashita M, Mezaki T, Yamamoto T. "All tibial foot" with sensory crossover innervation between the tibial and deep peroneal nerves. J Neurol Neurosurg Psychiatry. 1998 Nov;65(5):798-9.
- Grujičić R. Common fibular (peroneal) nerve [Internet]. KenHub. 2021 [cited 24 July 2022]. Available from: https://www.kenhub.com/en/library/anatomy/common-fibular-nerve
- Olczak J, Emilson F, Razavian A, Antonsson T, Stark A, Gordon M. Ankle fracture classification using deep learning: automating detailed AO Foundation/Orthopedic Trauma Association (AO/OTA) 2018 malleolar fracture identification reaches a high degree of correct classification. Acta Orthopaedica, 2021;92(1):102-108.
- Michelson JD, Magid D, McHale K. Clinical utility of a stability-based ankle fracture classification system. J Orthop Trauma. 2007 May;21(5):307-15.
- Fonseca LLD, Nunes IG, Nogueira RR, Martins GEV, Mesencio AC, Kobata SI. Reproducibility of the Lauge-Hansen, Danis-Weber, and AO classifications for ankle fractures. Rev Bras Ortop. 2017 Dec 6;53(1):101-106.
- Lambert LA, Falconer L, Mason L. Ankle stability in ankle fracture. J Clin Orthop Trauma. 2020 May-Jun;11(3):375-379.
- Feger J. AO/OTA classification of malleolar fractures. Reference article. Available from https://radiopaedia.org/articles/aoota-classification-of-malleolar-fractures [last accessed 29.06.2022]
- Seewoonarain S, Prempeh M, Shakokani M, Magan A. Ankle fractures. Journal of Arthritis. 2016:1-4.
- McKeown R, Rabiu AR, Ellard DR, Kearney RS. Primary outcome measures used in interventional trials for ankle fractures: a systematic review.
BMC Musculoskelet Disord 2019;20(388).
- Olerud C, Molander H. A scoring scale for symptom evaluation after ankle fracture. Archives of Orthopaedic and Traumatic Surgery. 1984 Sep;103(3):190-4.
- Hansen CF, Obionu KC, Comins JD, Krogsgaard MR. Patient reported outcome measures for ankle instability. An analysis of 17 existing questionnaires. Foot Ankle Surg. 2022 Apr;28(3):288-293.
- Dawson J, Boller I, Doll H, Lavis G, Sharp R, Cooke P, Jenkinson C. The MOXFQ patient-reported questionnaire: assessment of data quality, reliability and validity in relation to foot and ankle surgery. The Foot. 2011 Jun 1;21(2):92-102.
- Nguyen MQ, Dalen I, Iversen MM, Harboe K, Paulsen A. Ankle fractures: a systematic review of patient-reported outcome measures and their measurement properties. Qual Life Res. 2022 Jun 18.
- Simpson H. Ankle Fractures Course. Physiopedia. 2022.
- Lin CWC, Donkers NAJ, Refshauge KM, Beckenkamp PR, Khera K, Moseley AM. Rehabilitation for ankle fractures in adults. Cochrane Database of Systematic Reviews 2012, Issue 11. Art. No.: CD005595.
- Pfeifer CG, Grechenig S, Frankewycz B, Ernstberger A, Nerlich M, Krutsch W. Analysis of 213 currently used rehabilitation protocols in foot and ankle fractures. Injury. 2015 Oct;46 Suppl 4:S51-7.
- Swart E, Bezhani H, Greisberg J, Vosseller JT. How long should patients be kept non-weight bearing after ankle fracture fixation? A survey of OTA and AOFAS members. Injury. 2015;46(6):1127-30.
- Goost H, Wimmer MD, Barg A, Kabir K, Valderrabano V, Burger C. Fractures of the ankle joint: investigation and treatment options. Dtsch Arztebl Int. 2014 May 23;111(21):377-88.
- Albin SR, Koppenhaver SL, Marcus R, Dibble L, Cornwall M, Fritz JM. Short-term effects of manual therapy in patients after surgical fixation of ankle and/or hindfoot fracture: a randomized clinical trial. J Orthop Sports Phys Ther. 2019 May;49(5):310-319.
- Nilsson G, Nyberg P, Ekdahl C, Eneroth M. Performance after surgical treatment of patients with ankle fractures - 14-month follow-up. Physiother Res Int. 2003;8(2):69-82.
- Suciu O, Onofrei RR, Totorean AD, Suciu SC, Amaricai EC. Gait analysis and functional outcomes after twelve-week rehabilitation in patients with surgically treated ankle fractures. Gait Posture. 2016 Sep;49:184-189.
- Beckenkamp PR, Lin CW, Engelen L, Moseley AM. Reduced physical activity in people following ankle fractures: a longitudinal study. J Orthop Sports Phys Ther. 2016 Apr;46(4):235-42.
- Ng R, Broughton N, Williams C. Measuring recovery after ankle fractures: a systematic review of the psychometric properties of scoring systems. J Foot Ankle Surg. 2018 Jan-Feb;57(1):149-154.
- Jansen H, Jordan M, Frey S, Hölscher-Doht S, Meffert R, Heintel T. Active controlled motion in early rehabilitation improves outcome after ankle fractures: a randomized controlled trial. Clin Rehabil. 2018 Mar;32(3):312-318.
- Dehghan N, McKee MD, Jenkinson RJ, Schemitsch EH, Stas V, Nauth A, Hall JA, Stephen DJ, Kreder HJ. Early weightbearing and range of motion versus non-weightbearing and immobilization after open reduction and internal fixation of unstable ankle fractures: a randomized controlled trial. J Orthop Trauma. 2016 Jul;30(7):345-52.
- Smeeing DP, Houwert RM, Briet JP, Kelder JC, Segers MJ, Verleisdonk EJ, Leenen LP, Hietbrink F.
Weight-bearing and mobilization in the postoperative care of ankle fractures: a systematic review and meta-analysis of randomized controlled trials and cohort studies. PLoS One. 2015 Feb 19;10(2):e0118320.
The construction of the famous Tower of Pisa in Italy was started in 1173 and finished in 1372.
Why was the Tower built?
The Tower of Pisa is the bell tower of the city's cathedral. The city of Pisa was at the beginning a simple but important Italian seaport. As the city grew, so did its religious buildings. Its fame and power grew gradually over the years, as the people of Pisa were involved in various military conflicts and trade agreements. The Pisans attacked the city of Palermo on the island of Sicily in 1063. The attack was successful, and the conquerors returned to Pisa with a great deal of treasure. To show the world just how important the city was, the people of Pisa decided to build a great cathedral complex, the Field of Miracles. The plan included a cathedral, a baptistery, a bell tower (the Tower of Pisa) and a cemetery.
Why does the Tower lean?
The leaning of the Tower of Pisa comes into the story in 1173, when construction began. Thanks to the soft ground, it had begun to lean by the time its builders got to the third story, in 1178. Shifting soil had destabilized the tower's foundations. Over the next 800 years, it became clear the 55-metre tower wasn't just leaning but was actually falling, at a rate of one to two millimeters per year. Today, the Leaning Tower of Pisa is more than five meters off perpendicular. Its architects and engineers tried to correct this by making the remaining stories shorter on the uphill side, but to no avail: it kept leaning more and more. The lean, first noted when three of the tower's eight stories had been built, resulted from the foundation stones being laid on soft ground consisting of clay, fine sand and shells. The next stories were built slightly taller on the short side of the tower in an attempt to compensate for the lean. However, the weight of the extra floors caused the edifice to sink further and lean more.
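A quick back-of-the-envelope check on those numbers: given the tower's height and its horizontal offset from vertical, the lean angle is just an arctangent. Here is a minimal sketch in Python, using the approximate figures quoted above:

import math

height_m = 55.0  # approximate height of the tower, as quoted above
offset_m = 5.0   # horizontal displacement from the perpendicular

# The lean angle is the arctangent of offset over height.
angle_deg = math.degrees(math.atan(offset_m / height_m))
print(f"Lean angle: about {angle_deg:.1f} degrees")  # ~5.2 degrees

That result is consistent with the roughly 5.5-degree tilt commonly reported for the tower before the stabilization work of the 1990s.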
As Hurricane Katrina surged towards New Orleans, people faced the unthinkable prospect of abandoning their homes and finding shelter. Worst affected were some of the city's most vulnerable citizens: the poor and the elderly, parents with young children, people without cars, and people living in flood-prone areas. Among those who stayed back, many were old enough to remember Hurricane Camille, a category 5 storm that devastated the region in 1969. Many homes were spared from flooding then, so it stood to reason that they should hold up to Katrina, also a category 5 storm, though one that was demoted to a category 3 by the time it hit land. Sadly, they were mistaken, as the category rating of the hurricane was not the best measure of the raw destructive power of the storm.
The Saffir-Simpson rating system
In the western hemisphere, hurricanes are all rated on the Saffir-Simpson scale, an empirical measure of storm intensity devised in 1971 by civil engineer Herbert Saffir and meteorologist Bob Simpson. To compute a storm's category rating, you have to measure the highest wind speed that the storm sustains for an entire minute. The wind speed is measured at a height of 10 meters, because wind speeds increase as you climb higher, and it is here that they do the most damage. Based on how large this maximum sustained speed is, a storm is assigned to one of five different categories. The problem with this number is that it only captures one aspect of a storm's intensity – the highest speed that it can sustain. Not only is it tricky to measure this peak speed, but different organizations may come to different conclusions about it, depending on their coverage of the wind data. This number doesn't tell you anything about the size of the storm, nor about how the wind speeds are distributed overall. Consider a tale of two storms – the first is fierce but more contained, whereas the second is larger, and though it has a lower peak wind speed, its strong winds are spread over a larger area. The SS scale would give the first storm a higher score, even though the latter may be more destructive. Based on the rating, people might have expected Katrina to be about as destructive as Camille.
A rip in the wind tapestry
So how can one take the true measure of a storm? Storms are dangerous because of the energy carried in the moving air. Unless you live in a windy city, or drive your car at high speed on an interstate, you probably don't think of air as something that carries much energy. But in a storm, strong winds ram into stationary objects, like trees, buildings, or the surface of the ocean, and impart some of their energy of motion. Some structures can safely absorb this energy, while others will give way. As Hurricane Sandy made its way through the US, many turned to this incredible real-time wind map to get a larger picture of the storm. I watched Sandy as it made landfall, and was mesmerized by the unexpected beauty that underlies this destructive force. On most days, if you look at the wind map, you'll find a seamless tapestry made up of delicate threads and broader, sweeping strokes. The wind weaves its way through the central mountains, and brushes through the plains in wide swathes, leaving trails like a comb pulled through the hair of an unruly child. It's a flow that is sculpted by geography and powered by the ebb and flow of weather systems. Visualizing this flow is like watching a globe-sized zen garden rearrange itself, tended not by any individuals, but by the blind, mathematical laws of fluid dynamics.
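An aside before returning to Sandy: the category assignment itself is mechanical once you have the peak one-minute sustained wind speed. Here is a minimal sketch in Python; the wind-speed boundaries are the commonly cited Saffir-Simpson thresholds in miles per hour, supplied by me rather than taken from this article.

def saffir_simpson_category(sustained_wind_mph):
    """Map a peak 1-minute sustained wind speed (mph, at 10 m height)
    to a Saffir-Simpson category; 0 means below hurricane strength."""
    # Lower bounds for categories 1 through 5, in mph.
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for lower_bound, category in thresholds:
        if sustained_wind_mph >= lower_bound:
            return category
    return 0  # tropical storm or weaker

print(saffir_simpson_category(80))   # Sandy at landfall was roughly here -> 1
print(saffir_simpson_category(175))  # a Camille-class storm -> 5

Notice everything the function ignores: the storm's size, and how its winds are distributed, which is exactly the complaint above.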
On this day, as Hurricane Sandy pummeled through the north-eastern US states, the winds started to pick up outside my window in New Jersey, and the trees swayed violently as the gusts grew stronger. On the wind map, there seemed to be a giant bald spot, a rip in the wind tapestry from which the threads had started to fray. In essence, a hurricane creates a vortex, like the kind that forms when you drain the water from your bathtub. Vortices are strange, unwieldy creatures, a consequence of the non-linear equations that govern the flow of fluids. In normal situations, these vortices die out, as their energy drains out to the fluid around them. But hurricanes are self-sustaining, fed by evaporating columns of air rising from warm ocean water. Scientists struggle to adequately model the dynamics of hurricanes. It's a mathematical balancing act. On a larger scale, you have to worry about the flow of the atmosphere that's responsible for steering the hurricane. On a finer scale, you have to grapple with interactions near the core that give the storm its strength. You have to include just enough essential mathematics to reproduce the behavior of a real storm, while leaving out the details that can bog you down in a mire of calculations. To get a small sense of the complexity of the task, this visualization shows a computer simulation of Hurricane Katrina forming (cue the unnecessarily dramatic blockbuster music). And here's a more down-to-earth video of an impressive computer simulation that was able to reproduce the known seasonal cycle of tropical hurricanes (or, as they call them, hurricane-like vortices):
The measure of a storm
Predicting a storm is one half of the scientific story. The other part is figuring out how destructive it's going to be. We can get a sense of a storm's strength with a little bit of high school physics. You might remember that every object in motion carries a certain amount of energy, known as its kinetic energy. The kinetic energy of an object depends on the square of its speed, and is directly proportional to the mass of the object. Imagine a gust of wind blows by your house, travelling at 50 miles per hour. How much energy does this wind gust carry, and what does that mean in terms of the damage it can inflict? Well, think of a storm as being built out of moving parcels of air. Each of these parcels has a certain amount of kinetic energy. To work this out, we first need the mass of each chunk (in kilograms). We can get this using a little trick. Let's rewrite a kilogram in an odd way:
1 kilogram = (1 kilogram per cubic meter) × (1 cubic meter)
What we just did is write a kilogram as the product of kilograms per cubic meter — a density — times a cubic meter — a volume. So, the mass of an object is just its density multiplied by its volume. That means:
Kinetic energy stored in a chunk of storm wind = ½ × density × volume × speed²
Energy per cubic meter of storm = ½ × density × speed²
This equation tells you how much energy is sitting in each cubic meter of air that whizzes by your window. So let's try plugging in some approximate numbers. Moisture-laden air has a density of about 1 kilogram per cubic meter, and 50 miles per hour is about 22 meters per second. Plugging in numbers, this tells us that a standard 2-liter coke bottle's worth of storm wind at 50 mph carries about half a Joule of kinetic energy. That may not sound like a lot, and it isn't (it's about the energy you'd expend in poking someone). But consider what happens when you make the sizes bigger. Imagine the trunk of a small car filled with storm wind, and you get up to 100 Joules of wind energy.
That's about the kinetic energy carried by a child accidentally stumbling into a glass door. Now take a Volkswagen Beetle, fill it with storm wind, and you get nearly 700 Joules of wind energy. That's the kinetic energy contained in a shot put thrown by the world record holder. Finally, picture many times that energy ramming into a tree or a building, and you get a sense of the kind of damage a storm can deliver. In other words, by adding up the contributions of these 'bottles of wind energy' over the real size of a storm, you start to get a sense of its true destructive potential. What makes this method very different from the SS scale is that different 'wind bottles' can have different wind speeds (think of bottling up all the different colors in the wind map above). By adding it all up, you learn much more than just the peak intensity of a storm. This number takes into account how the wind speeds are distributed throughout the bulk of the storm. What I've described here is essentially a measure called the Integrated Kinetic Energy of a storm, and it is used by the National Oceanic and Atmospheric Administration to measure the strength of a storm. (See here for a good review. The main difference is that they only add up the energy for bottles of wind moving faster than a certain minimum speed, and the 'bottles' are much bigger than Volkswagens.) This leads us to a question that was bothering me a few days ago. You may have heard people predicting that Hurricane Sandy may be the biggest storm to hit the US, yet it's only a category 1 storm. How can this be? Well, this plot helps make things clear. It shows the integrated kinetic energy for hurricanes hitting the United States, measured when they strike land. I came across it on twitter via Brian McNoldy (@BMcNoldy), an atmospheric researcher at the University of Miami. You can see that by this measure, Sandy exceeds Katrina in energy content, exceeded in the US only by Hurricane Isabel. What's more, the integrated kinetic energy of a storm is a good predictor of its potential to cause storm surges, where sea levels rise dangerously as the wind pushes against the ocean's surface. Category ratings based on the SS scale, on the other hand, do not predict storm surges well. By now, I hope I've convinced you that when you hear about a tropical storm or a hurricane, you should look for numbers beyond the SS category rating. But there's still another side to the story. Hurricanes bring on an onslaught of energy, but nature manages to fight back. Trees don't topple like dominoes. Far from being passive agents, it turns out that trees have tricks up their sleeves (or rather, up their trunks) that help them stay rooted. To learn more, stay tuned for the next post on the science of hurricanes.
Powell, M., & Reinhold, T. (2007). Tropical Cyclone Destructive Potential by Integrated Kinetic Energy. Bulletin of the American Meteorological Society, 88(4), 513-526. DOI: 10.1175/BAMS-88-4-513
Sandy Wind Analyses at the Hurricane Research Division, NOAA
Operational Hurricane Track and Intensity Forecasting at the Geophysical Fluid Dynamics Laboratory, NOAA
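As an appendix to the energy bookkeeping above, here is a minimal sketch in Python of both calculations: the kinetic energy of a single parcel of storm air, and an integrated-kinetic-energy style sum over a gridded wind field. The toy grid, the cell volume, and the cutoff speed are illustrative assumptions of mine, not NOAA's actual procedure.

AIR_DENSITY = 1.0  # kg per cubic meter for moisture-laden air, as in the text

def parcel_kinetic_energy(volume_m3, speed_m_per_s):
    """Kinetic energy (Joules) of an air parcel: 1/2 * density * volume * speed^2."""
    return 0.5 * AIR_DENSITY * volume_m3 * speed_m_per_s ** 2

# The examples from the text, with 50 mph ~ 22 m/s:
print(parcel_kinetic_energy(0.002, 22))  # 2-liter bottle: ~0.5 J
print(parcel_kinetic_energy(0.4, 22))    # small car trunk: ~100 J
print(parcel_kinetic_energy(2.8, 22))    # VW Beetle: ~700 J

def integrated_kinetic_energy(wind_speeds_m_per_s, cell_volume_m3, min_speed=18.0):
    """Sum parcel energies over all grid cells at or above a cutoff speed.
    The 18 m/s default is roughly tropical-storm force; NOAA's published IKE
    uses the same idea, with specific thresholds and much larger cells."""
    return sum(
        parcel_kinetic_energy(cell_volume_m3, v)
        for v in wind_speeds_m_per_s
        if v >= min_speed
    )

# A toy storm: five cells of one cubic kilometer each (1e9 cubic meters).
toy_wind_field = [12.0, 20.0, 25.0, 33.0, 41.0]
print(integrated_kinetic_energy(toy_wind_field, 1e9))  # ~1.9e12 J

The contrast with the category rating is the whole point: the classifier earlier in this piece looks at a single number, while this sum responds to how hard the wind blows everywhere in the storm.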
The Anatomy of the Elbow

The elbow is a hinged joint made up of three bones: the humerus, ulna, and radius. The ends of the bones are covered with cartilage. Cartilage has a rubbery consistency that allows the joints to slide easily against one another and absorb shock. The bones are held together with ligaments that form the joint capsule. The joint capsule is a fluid-filled sac that surrounds and lubricates the joint. The important ligaments of the elbow are the medial collateral ligament (on the inside of the elbow) and the lateral collateral ligament (on the outside of the elbow). Together these ligaments provide the main source of stability for the elbow, holding the humerus and the ulna tightly together. A third ligament, the annular ligament, holds the radial head tight against the ulna. There are tendons in your elbow that attach muscle to bone. The important tendons of the elbow are the biceps tendon, which attaches the biceps muscle on the front of your arm, and the triceps tendon, which attaches the triceps muscle on the back of your arm. The muscles in your forearm cross the elbow and attach to the humerus. The outside (lateral) bump just above the elbow is called the lateral epicondyle; most of the muscles that straighten the fingers and wrist come together and attach there. Most of the muscles that bend the fingers and wrist attach to the medial epicondyle, the bump on the inside of your arm just above the elbow. These two attachment points are important to understand because they are common locations of tendonitis. All of the nerves that travel down the arm pass across the elbow. Three main nerves begin together at the shoulder: the radial nerve, the ulnar nerve, and the median nerve. These nerves are responsible for signaling your muscles to work and also relay sensations such as touch, pain, and temperature.
What Would You Do? - Plotting and Planning the Strategies of the Civil War

- Grade Level: Eleventh Grade-Twelfth Grade
- Subject(s): Civil War, Geography, History, Military and Wartime History
- Duration: Two periods or one block
- Group Size: Up to 36
- National/State Standards: Social Studies for Georgia SSUSH9(f); Core Curriculum CCSS.ELA-Literacy.RH.11-12.1

Overview: Students learn about the difficult decisions made by the generals of the Atlanta Campaign by becoming the leaders of two fictional armies. By plotting their own strategies, they learn more about the choices made by generals and the results of those decisions.

Objective(s): To help students assess whether there was a way that the South could have won the American Civil War. By assessing the strategies of each side using both primary and secondary resources, students can answer the question of whether or not the historic outcome was the only possible one. This lesson can be used as either a pre-site or post-site activity in conjunction with a visit to Kennesaw Mountain National Battlefield Park. It may also be used as a stand-alone introductory lesson in the classroom during Civil War study.

The Battle of Kennesaw Mountain was part of the Atlanta Campaign, a coordinated offensive by General Ulysses S. Grant to destroy Confederate resistance and bring about an end to the war. A major focus was placed on northwestern Georgia, with Major General William T. Sherman in charge of the Georgia offensive. From May to September 1864, Federal and Confederate forces fought across north Georgia from Dalton to Atlanta, with the fall of Atlanta on September 2, 1864, as the campaign's high point.

Materials:
- Union and Confederate strategy materials
- Guiding questions for each group
- Vital statistics and map of Tangmania and Gagoola

Introduction: This simulation can provide a solid reference point to build upon while teaching the American Civil War. Used as an introductory lesson, Gagoola vs. Tangmania allows the students to explore the options available to Civil War leaders. Students will use the Gagoola and Tangmania information to conduct the simulation.

- Divide the class into two separate groups.
- Assign each group the role of either Tangmania or Gagoola. It is important that each group also have the opposing group's information.
- Each group will cooperatively read the packet and answer the guiding questions.
- After answering the guiding questions, each group will develop a battle plan and conditions for winning.
- Each group will elect two generals to present its battle plan to the class as a whole. It is useful to project the map onto a smartboard, dry erase board, or overhead projector.

Park Connections: The Atlanta Campaign was a major contributing factor towards ending the American Civil War. The Battle of Kennesaw Mountain precedes the Battle of Atlanta, and it was General William T. Sherman's defeat at what is now known as Cheatham Hill that caused him to resume flanking maneuvers, moving to the side of the opposing army. This flanking tactic caused Confederate commander Joseph E. Johnston to abandon his Kennesaw line, and Sherman was able to cross the Chattahoochee River toward Atlanta, leading to Johnston being replaced as commander, the evacuation of Atlanta, and the eventual fall of the city. A week later Sherman left Atlanta and began his infamous "March to the Sea," further crippling the Confederacy.
Have students research and answer the following question: How was Sherman's 1864 Atlanta Campaign and subsequent March to the Sea an extension and fulfillment of the Anaconda Plan?
Viruses do assembly work for solar cells

Researchers at MIT have enlisted viruses to help assemble solar cells and improve their performance. While it's been known that carbon nanotubes can improve the efficiency of electron collection from a solar cell's surface, there have been two big problems with their use. First, the manufacture of carbon nanotubes generally produces a mix of two types, some acting as semiconductors and some as metals. The new research shows that the effects of these two types differ: the semiconducting nanotubes can enhance the performance of solar cells, but the metallic ones have the opposite effect. Second, nanotubes tend to clump together, which reduces their effectiveness.

But the MIT team has managed to rope in viruses to do some of the dirty work. They've found that a genetically engineered version of a virus called M13, which normally infects bacteria, can be used to control the arrangement of the nanotubes on a surface. It keeps the tubes separate so they can't short out the circuits, and keeps them from clumping. The system the researchers tested used a type of solar cell known as a dye-sensitized solar cell, but they say the approach could also work with quantum-dot and organic solar cells. In tests, adding the virus-built structures enhanced the power conversion efficiency to 10.6 percent from 8 percent — almost a one-third improvement. This dramatic improvement takes place even though the viruses and the nanotubes make up only 0.1 percent by weight of the finished cell. With further work, the researchers think they can ramp up the efficiency even further.

Prashant Kamat, a professor of chemistry and biochemistry at Notre Dame University, says that while others have attempted to use carbon nanotubes to improve solar cell efficiency, the improvements observed were marginal. "It is likely that the virus template assembly has enabled the researchers to establish a better contact between the TiO2 nanoparticles and carbon nanotubes. Such close contact with TiO2 nanoparticles is essential to drive away the photo-generated electrons quickly and transport them efficiently to the collecting electrode surface," he says. "Dye-sensitized solar cells have already been commercialized in Japan, Korea and Taiwan. If the addition of carbon nanotubes via the virus process can improve their efficiency, the industry is likely to adopt such processes."

Because the process would add just one simple step to a standard solar-cell manufacturing process, it should be quite easy to adapt existing production facilities, and thus should be possible to implement relatively rapidly, the team says.
Sustainable development, as that term is commonly used and understood, means making continued economic and social development more resource efficient and less detrimental to the environment. But making development more sustainable, while highly desirable, is not the same thing as actually achieving sustainability. As we plan and carry out human development programs we must ensure that our aggregated demands upon the planet do not exceed the Earth’s capacity to supply them. The first step in achieving true sustainability is recognition of the total scale of human activity. The Global Footprint Network estimates that we are already using 150 percent of the Earth’s capacity to regenerate resources, and that does not take into account non-renewable resources, such as fossil fuels. By that measure, we are already operating unsustainably. The next step in achieving true sustainability is acknowledging the interconnectedness of all the various subsets of sustainability. Climate change, energy, food, water, and the environment are all inter-related, and efforts to address one challenge often exacerbate other global challenges. We cannot solve problems in isolation. We must be mindful of the inevitable trade-offs. The third and crucial step is recognizing that our failure to balance human demands with the capacity of the Earth has serious consequences for people today, not just future generations. Extreme weather patterns and soaring food prices are products of an over-heated, over-subscribed planet, and they are a sign that much worse is to come unless we reduce the total scale of human activity. We believe that the greatest challenge facing humanity today is that we are simply demanding too much from the planet. Humanity must collectively recognize and fully acknowledge that our species is already over-utilizing the finite resources of planet earth and reduce the total scale of the human endeavor in an urgent, orderly, and equitable manner. An overwhelming body of scientific evidence and repeated warnings from global studies (and reports from high level commissions) inform us that we have already exceeded the sustainable carrying capacity of the planet and that we are putting resource systems (and future human development) at great risk. We must wake up to this reality. The total quantity of natural resource goods and services that our species takes from the planet each year must be down-sized, and we must plan future withdrawals in a more responsible manner. Our future resource demands must be brought into balance with the capacity of the planet’s ecosystems. It is time to recognize that our species has simply grown too big for one planet and that we must adopt a new political and social paradigm: one that calls for reducing and adapting ourselves to fit within the means of nature rather than pursuing our constant struggle to adapt nature and exploit resources on an ever-increasing scale. We’ve gone beyond the safety regime for such activity. We must shift our development focus from ‘building more for humanity’ to ‘adapting and sizing ourselves to fit the planet.’ Human development and sustainable well-being will now be better served in the context of this latter paradigm: sizing our societal and economic activities to fit within the resource limitations of each local, regional, national, and global ecosystem. With the global reality of total resource overshoot already upon us, it’s the only reasonable policy option to pursue. 
This risk is often framed as something that may occur in the future (e.g. paragraph 42 of the Report to the Secretary-General by the UN System Task Team on the POST-2015 UN Development Agenda; “Realizing the Future We Want for All”). This future-tense framing is politically more palatable but factually absurd. It only serves to stall important policy efforts that are urgently needed for achieving future sustainability on a planet with finite resources.
I love using fill-in-the-blank stories to work on parts of speech. The problem I find with most of them is that they address too many different parts of speech in a single story. Friendship Soup is a fun winter story for practicing just naming singular and plural nouns. If you are just introducing nouns, discuss what they are and come up with a list of them together. Then have the students fill in the story with words from the list. You can also have the students come up with their own lists of singular and plural nouns if they are able. Read the story back to them with their answers filled in, and talk about which parts do and don't make sense. Have fun, and remember that in this case, the sillier the story is, the better!
Desiccators, sometimes called dry boxes, provide a low-humidity atmosphere for storage of items and materials that would otherwise be damaged by moisture. Desiccators are used for a wide range of applications across several scientific disciplines. In chemistry and biology, desiccators are often used to store hygroscopic reagents and chemicals. Keeping these compounds in a low-humidity environment drastically increases their shelf life. In semiconductor research and manufacturing, dry storage is often employed to prevent damaging oxidation of wafers and other components that can lead to immediate or latent failures. In order to achieve a low relative-humidity (RH) atmosphere for storage within a desiccator, two storage techniques are commonly used: desiccant-based and nitrogen-purged.

Benefits of Desiccant

A desiccant is a hygroscopic material, often silica gel. We've all seen "pillow packs" in pill bottles and electronic packaging.
Varying Views of America

Grades: 9–12
Lesson Plan Type: Standard Lesson
Estimated Time: Two 50-minute sessions
Narragansett, Rhode Island

After reviewing the literary elements of tone and point of view, students work in small groups to read and summarize Walt Whitman's "I Hear America Singing," Langston Hughes' "I, Too, Sing America," and Maya Angelou's "On the Pulse of Morning." They identify the tone and point of view of each poem, citing specific text references. Finally, students compare the three poems using a Venn diagram, synthesize the similarities and differences they identified, and then discuss their findings with the class.

Varying Views of America Student Interactive: Students can use this online Venn diagram tool to compare and contrast three poems.

Using poetry to explore an issue more typically explored through prose can provide many advantages. Because poems are typically short, students can easily be exposed to more than one perspective on the topic. Poetry can also help students to make connections between historical periods and events and the impacts those events have on individuals. Writing about using poetry in their classroom to explore World War II, Elizabeth E. G. Friese and Jenna Nixon note that students "reached beyond the facts on the pages of a textbook, into deeper connections and the emotions of a difficult time in history." This lesson takes advantage of these positive aspects of using poetry to address social studies issues by exploring what America meant to three different poets at three different times in history.

Friese, Elizabeth E. G., and Jenna Nixon. "Poetry and World War II: Creating Community through Content-Area Writing." Voices from the Middle 16.3 (March 2009): 23–30.
Buehl, D. (2013). Classroom Strategies for Interactive Learning, 4th ed. Newark, DE: International Reading Association.
Burke, Jim, Ron Klemp, and Wendell Schwartz. (2002). Reader's Handbook: A Student Guide for Reading and Learning. Wilmington, MA: Great Source Education Group.
Fisher, Douglas, Nancy Frey, and Douglas Williams. "Seven Literacy Strategies That Work." Educational Leadership 60.3 (November 2002): 70–73.
G.GCI.3 Constructing a Circumcenter 1) Use the POLYGON tool to construct a triangle. 2) Use the PERPENDICULAR BISECTOR (PB) tool to construct the PB's of this triangle's 3 sides. 3) Plot the intersection of these 3 perpendicular bisectors (circumcenter). Construct a circle centered at the circumcenter that passes through any vertex of this triangle. Why does this circle pass through the other 2 vertices of this triangle? When you're done (or if you're unsure of something), feel free to check by watching the quick silent screencast below the applet.
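If you want to check the construction numerically outside GeoGebra, here is a short Python sketch; the vertex coordinates are arbitrary examples. It solves the two perpendicular-bisector equations in closed form and confirms that the resulting point is equidistant from all three vertices, which is exactly why the circle through one vertex must pass through the other two.

```python
# Numeric check: the circumcenter is the intersection of the perpendicular
# bisectors, i.e. the point equidistant from all three triangle vertices.

def circumcenter(ax, ay, bx, by, cx, cy):
    # Closed-form solution of the two perpendicular-bisector equations.
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

x, y = circumcenter(0, 0, 4, 0, 1, 3)
for vx, vy in [(0, 0), (4, 0), (1, 3)]:
    # Prints the same radius three times, confirming equidistance.
    print(((x - vx) ** 2 + (y - vy) ** 2) ** 0.5)
```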
The skull is a structure of bones that protects the brain and helps to support the features of the face. The two parts of a human skull are called the neurocranium and the viscerocranium. The largest bone of the viscerocranium is the mandible. Use this PowerPoint template to explain the anatomy of a skull and find out about all the different bones.
Young-Loveridge, J., & Bicknell, B. (2014). Supporting the development of number fact knowledge in five- and six-year-olds. In J. Anderson, M. Cavanagh, & A. Prescott (Eds.), Curriculum in focus: Research guided practice. Proceedings of the 37th Annual Conference of the Mathematics Education Research Group of Australasia (pp. 669–676). Adelaide, SA, Australia: Mathematics Education Research Group of Australasia Inc.

Permanent Research Commons link: https://hdl.handle.net/10289/8871

This paper focuses on children's number fact knowledge from a study that explored the impact of using multiplication and division contexts for developing number understanding with 34 five- and six-year-old children from diverse cultural and linguistic backgrounds. After a series of focused lessons, children's knowledge of number facts, including single digit addition, subtraction, and doubles, had improved. However, they did not always apply this knowledge to relevant problem-solving situations. The magnitude of the numbers did not necessarily determine the difficulty level for achieving automaticity of number fact knowledge.

© Mathematics Education Research Group of Australasia Inc., 2014. Used with permission.
Why Use Crop Rotation?

Growing vegetables in the same spot every year can lead to a build-up of soil-borne pests and diseases, and to a depletion of minerals and nutrients. Under a crop rotation system, vegetables in the same family are grown in a different part of the plot each year. This can help improve soil structure and fertility, and help control the pests and diseases that affect a specific plant family. Some vegetables, such as brassicas, are heavy feeders and deplete a lot of the soil's minerals, while others, such as carrots, are light feeders and use up fewer nutrients. Plants such as beans and peas actually improve the soil by adding nutrients such as nitrogen. By alternating the planting of these types of crops, soil health can be maintained. There are many interpretations of crop rotation, but the one thing common to all is to never grow brassicas in the same spot two years running. This will help avoid the fungal infection clubroot, which can remain in the soil for up to twenty years! Any area not being used for an overwintering crop can be utilised by sowing a green manure, which will suppress weeds, protect from soil erosion and the leaching of nutrients by winter rain or snow, and add structure to the soil.

Keep it Simple

If your garden is small you can still use the principles behind crop rotation to improve your soil and yields. Dividing a small plot into three or four smaller beds will allow a simple crop rotation. The basic rule to remember is never plant the same crop in the same place two years in a row. To make the most of a small space, interplant slow-maturing crops with fast growers such as salads, oriental veg, radish and spinach; that way you can get two crops from the same amount of space. Some vegetables are not prone to soil-borne disease so don't need to be part of your rotation, so if you have empty spaces plant non-rotation crops such as lettuce, sweetcorn, spinach and courgette.

Crop Rotation at a Glance

- Plant brassicas after legumes - sow crops such as cabbage, cauliflower and kale in beds previously used for beans and peas, as these fix nitrogen in the soil and the brassicas benefit from the nutrient-rich conditions created.
- Avoid planting root vegetables in areas which have been heavily fertilised; this causes lush foliage at the expense of the roots, and parsnips and carrots will fork if grown in too rich a soil. Sow these in an area which has grown heavy feeders (such as brassicas) the previous season.
- Grow veg from different groups together if they require the same conditions.
- Alternate root vegetables and vegetables with shallow roots to help improve soil structure.
- Interplant slower growing varieties with salads and other quick crops to maximise production.

A toy version of this rotation rule is sketched below.
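Here is a minimal sketch of that rule in Python, purely as an illustration (the four family groups and the four beds are example choices): each family advances one bed per year, so no bed sees the same family twice in a row, and brassicas always land in last year's legume bed.

```python
# Rotate four example plant families through four beds, one step per year.
families = ["legumes", "brassicas", "roots", "others"]

def bed_plan(year, n_beds=4):
    # Bed i hosts the family one step further along the cycle each year,
    # so brassicas follow legumes, roots follow brassicas, and so on.
    return {f"bed {i + 1}": families[(i + year) % len(families)]
            for i in range(n_beds)}

for year in range(4):
    print(f"year {year + 1}:", bed_plan(year))
```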
April 22, 2004: Resembling a diamond-encrusted bracelet, a ring of brilliant blue star clusters wraps around the yellowish nucleus of what was once a normal spiral galaxy in this new image from NASA's Hubble Space Telescope (HST). The image is being released to commemorate the 14th anniversary of Hubble's launch on April 24, 1990, and its deployment from the space shuttle Discovery on April 25, 1990. The galaxy, cataloged as AM 0644-741, is a member of the class of so-called "ring galaxies." It lies 300 million light-years away in the direction of the southern constellation Volans.

The sparkling blue ring that encircles the nucleus of AM 0644-741 is 150,000 light-years in diameter, making it larger than our entire home galaxy, the Milky Way. Ring galaxies are an especially striking example of how collisions between galaxies can dramatically change their structure, while also triggering the formation of new stars. They arise from a particular type of collision, in which one galaxy (the "intruder") plunges directly through the disk of another one (the "target"). In the case of AM 0644-741, the galaxy that pierced through the ring galaxy is outside this image but visible in larger-field images. The gravitational shock imparted by the collision drastically changes the orbits of stars and gas in the target galaxy's disk, causing them to rush outward. As the ring plows outward into its surroundings, gas clouds collide and are compressed. The clouds can then contract under their own gravity, collapse, and form an abundance of new stars.

The rampant star formation explains why the ring is so blue: it is continuously forming massive, young, hot stars, which are blue in color. Another sign of robust star formation is the pink regions along the ring. These are rarefied clouds of glowing hydrogen gas, fluorescing because of the strong ultraviolet light from the newly formed massive stars. Anyone living on a planet embedded in the ring would be treated to a view of a brilliant band of blue stars arching across the heavens. The view would be relatively short-lived, because theoretical studies indicate that the blue ring will not continue to expand forever. After about 300 million years, it will reach a maximum radius and then begin to disintegrate.
Nature and technology have long had a complicated relationship. The factories of the Industrial Revolution set in motion the escalation of resource exploitation, pollution and greenhouse gas emissions that eventually led to the global climate change we face today. On the other hand, technological innovations such as hybrid cars and biodegradable packaging are making it possible for humanity to lessen its toll on the health of our planet. Although nature and technology are often pitted as opposites — the past versus the future — in truth we don’t have to choose one or the other. In fact, to sustain life on this planet, both are crucial. That’s what makes Conservation International’s (CI) new collaboration with the Massachusetts Institute of Technology (MIT) so exciting. CI and MIT are motivated by a shared sense of urgency to find solutions to the mounting pressures humans are exerting on nature. Human actions are the primary contributor to global climate change and environmental degradation. Reducing these human pressures on nature is essential to prevent conditions from worsening — and surmount the climate impacts already set in motion. Through this partnership, CI and MIT will explore how to fight climate change by marrying technical, engineering and science-based solutions with nature-based approaches. From forests to soils to coastal and ocean ecosystems, nature holds great potential for sequestering carbon, keeping it safely locked away and out of the atmosphere. In fact, protecting nature could be 30 percent of the solution to limit warming of the planet. Nature also plays a critical role in helping society adapt to climate change. From protecting coastal cities from rising sea levels and storm surges to ensuring food and freshwater security, natural systems offer solutions that can be more effective and less costly than technological alternatives — particularly in regions of the developing world. But in order for nature-based solutions to yield their full potential, we need science and engineering, too. For example, take the devastation inflicted on the Philippines by Typhoon Haiyan in 2013. With winds of 314 kilometers (195 miles) per hour, waves up to six meters high and massive flooding in coastal areas, the storm caused more than 6,000 deaths, displaced more than 4 million and destroyed half a million homes. The most severe loss of life and property damage from Haiyan occurred in the Visayas region of the central Philippines. While a number of factors contributed to the destruction, the degradation and loss of coastal ecosystems — particularly mangroves and coastal forests which could have acted as an absorbent barrier between the communities and the storm surge — left the coasts completely exposed and the region highly vulnerable. More than 80 percent of the mangrove forests along the coasts of the Visayas had been cleared, mainly to install fish and shrimp ponds. (In the video below, a scale model produced by the Dutch research institute Deltares shows how mangrove forests reduce wave strength.) The scientific community has long recognized that mangroves are effective at reducing the impacts of waves, moderating flooding and controlling erosion. A recent World Bank study of developing countries subject to storms estimated that 3.5 million people in developing countries are currently protected from storm impacts by mangroves — a number expected to more than double to 7.2 million people when you consider predicted impacts of climate change. 
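To get a feel for why forest width matters, here is a minimal sketch of a first-order attenuation model, assuming wave height decays exponentially with distance travelled through the forest. The decay coefficient here is an invented placeholder; pinning down realistic values is precisely the kind of quantified data discussed next.

```python
import math

# Toy wave attenuation: H(x) = H0 * exp(-k * x), with x the distance (m)
# travelled through the mangrove forest. Both parameters are assumed.
H0 = 3.0   # incident wave height, in meters (example)
k = 0.01   # attenuation per meter of forest (placeholder value)

for x in [0, 50, 100, 200, 500]:
    print(f"{x:3d} m of mangrove: wave height {H0 * math.exp(-k * x):.2f} m")
```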
Despite these findings, however, governments, engineers and coastal developers have remained highly resistant to integrating mangroves into coastal planning or regularly using them as "green infrastructure" to protect coasts. One major reason for this is the lack of quantified data describing the specific value of these ecosystems for reducing wave height, storm surges, wind strength, erosion and other storm impacts. Without more data on specific values and design parameters, the amount of coastal protection provided by mangroves is hard to quantify — and given the current condition of most mangrove forests, it won't be enough on its own.

MIT, a leader in science and engineering, is known for developing novel technological solutions that address real-world problems. In order to increase our scientific knowledge of what is actually going on in the mangroves, CI and MIT will build on MIT's laboratory studies measuring the water drag forces on mangroves to develop physically realistic models of the drag and turbulence generated by mangroves, and to predict the reduction of storm surge. We aim to identify the minimum mangrove area needed to provide beneficial defense, and to explore the impact of restored mangrove areas on adjacent coastlines, to understand when the redirection of surge by the forest intensifies the surge at adjacent sites. In addition to the Philippines, this model will be tested in other places where CI currently works, such as China, Suriname, Costa Rica and Ecuador.

Other initial joint CI-MIT research projects will focus on topics such as cities and nature-based solutions, and how to use technology for ecological monitoring. Together, we plan to develop and deploy educational opportunities for MIT students, CI staff and the broader public, and to engage diverse stakeholders in the complex scientific, technical and socioeconomic work of enlisting natural systems to contribute to climate solutions.

Around the globe, everyone from national governments to global companies to academics to small towns is working on solutions to mitigate climate change and adapt to its impacts. Protecting nature may be crucial, but it's only one part of the puzzle — and for a problem this large and complex, we need to share knowledge and work together to fight it every way we know how.

Daniela Raik is the senior vice president of CI's Moore Center for Science.
1 MARK QUESTIONS

Q. 1. Which component of sunlight facilitates drying of wheat after harvesting?
Q. 2. Which part of solar radiation stimulates the formation of vitamin D in our bodies?
Q. 3. Name two semiconductor materials or elements used in fabricating solar cells.
Q. 4. Name any one element used in making solar cells. On what property of the element is this use based?
Q. 5. Name the largest component of biogas.
Q. 6. How is the slurry left over after generation of biogas in a biogas plant used?
Q. 7. What is the greenhouse effect?
Q. 8. What is bagasse? What is its use?
Q. 9. What is the value of the solar constant?
Q. 10. Name the gas that is added to LPG to detect its leakage.
Q. 11. How is biogas produced?
Q. 12. What is gasohol?
Q. 13. Which hydrocarbon has the highest calorific value?
Q. 14. What is a wind energy farm?
Q. 15. What is geothermal energy?
Q. 16. What is visible light? What is its approximate wavelength range?
Q. 17. What is common between the three sites (i) Gulf of Kutch in Gujarat

2 MARKS QUESTIONS

Q. 18. "Electricity generated by the water stored in a dam can be considered to be another form of solar energy." Explain, describing in sequence the series of energy transformations taking place during the process.
Q. 19. Explain how harmful components of sunlight are prevented from reaching the earth's surface. Name the group of chemical compounds which may destroy this natural process.
Q. 20. State two disadvantages of using hydrogen gas as a fuel.
Q. 21. Mention any two ways by which water can be used to produce hydroelectricity.
Q. 22. Electricity generated with a windmill is another form of solar energy. Explain.
Q. 23. People living on hills often get sunburns on their skin. Which component of sunlight is responsible for this effect? Why is this effect generally not observed near sea level?
Q. 24. In which form is solar energy stored in oceans? Mention any two forms that could be harnessed to obtain energy in usable form.
Q. 25. For producing electricity, the energy from flowing water is preferred to energy obtained by burning coke. State two reasons for it.
Q. 26. What do you mean by the destructive distillation of wood? What are the substances obtained during the process?
Q. 27. Draw a labelled diagram of an experimental set-up for obtaining charcoal from wood. Name the process involved.
Q. 28. State one important advantage and one important limitation of water energy.

3 MARKS QUESTIONS

Q. 29. Draw the diagram of the floating gas holder type biogas plant and mark on it the gas outlet.
Q. 30. Name the three forms in which energy from oceans is made available for use. What are OTEC power plants? How do they operate?
Q. 31. Write two advantages of using geothermal technologies for power generation purposes. Name at least two places where geothermal energy can be used for commercial purposes.
Q. 32. Why is biogas considered superior to animal dung as a fuel? Draw a neat labelled diagram of a biogas plant.
Q. 33. Name a possible fuel of the future that is being produced by the fermentation of sugars. To what use is a mixture of this fuel and petrol being put in some countries? Why is this fuel not being used as a commercial fuel at present?
Q. 34. Name some forms of biomass that are suitable for making biogas. Give two advantages of using biowastes to produce biogas.

5 MARKS QUESTIONS

Q. 35. Draw a labelled diagram of a solar cooker. What purposes are served by the blackened surface, glass cover plate and the mirror in a solar cooker?
What would happen if the plane glass mirror of a solar cooker is replaced by a concave glass mirror?
Q. 36. Describe the construction of a box-type solar cooker and show it with the help of a diagram. How is the rise in temperature obtained in this set-up? Mention two advantages and two limitations of a solar cooker.
Q. 37. On what principle does a solar water heater operate? Draw a labelled schematic diagram for a solar water heater. The solar constant at a place is 1.4 kW/m². How much solar energy will be received at this place per second over an area of 5 m²?
Q. 38. What is the basic cause for winds to blow? Name a part of India where wind energy is commercially harnessed. Compare wind power and power of water flow in respect of generating mechanical and electrical energies. What is the hindrance in developing them?
Q. 39. Name the major fuel component of biogas. What are its other combustible components? Draw a simple labelled diagram of a fixed dome-type biogas plant. What is the use of the residual slurry and why?
Q. 40. Why is the sun called the ultimate source of practically "all the energy" on the earth? Which form of energy, if any, can be viewed as an exception to this statement?
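A quick check of the arithmetic in Q. 37, as a worked sketch (the answer itself is not part of the original question bank): the power received is simply the solar constant times the area.

```python
# Q. 37: energy received per second = solar constant x area
solar_constant = 1.4e3  # 1.4 kW per square meter, in watts per square meter
area = 5.0              # square meters
print(solar_constant * area)  # 7000.0, i.e. 7,000 J every second (7 kW)
```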
Just in from the field, artifacts are carefully cleaned to remove dirt. Depending on the kind of artifact, a delicate paintbrush or toothbrush might be used to clean it. Sometimes materials are too delicate, and are left as they were found, to prevent further damage.

Cataloguing and labeling

Once cleaned, the artifacts are sorted into basic groups such as pottery, lithics (stone), bone, and plants. Then they are catalogued. The total number of artifacts in each category is counted. Important artifacts are described individually, and sometimes drawn. Each artifact is assigned a unique number that records the site, feature, and level where it was found. These numbers are used in a database to keep track of all the artifacts. The individual artifacts that are to be separately analyzed have their unique number and the site number written right on them to ensure that they can be tracked.

Artifacts may start to break down when they are removed from the earth. Careful cleaning and treatment is necessary to preserve the materials. For example, copper artifacts are wrapped in acid-free tissue paper to prevent further corrosion. Some materials are so delicate that exposure to air causes them to start decaying. Special treatments, usually conducted by museum conservators, may be required.

Whole pots are rarely found. If they are lucky, archaeologists find enough pieces to put together a large portion of the decorated rim area of the pot. This process is much like a three-dimensional jigsaw puzzle, often with several puzzle parts mixed together, and many missing pieces.
Arthropods from the Triassic period have been discovered preserved in amber, 100 million years older than any previous amber inclusions. The two mites and one fly were found in millimeter-scale droplets of amber from northeastern Italy. Researchers published their findings in the journal Proceedings of the National Academy of Sciences. Arthropods are invertebrate animals including insects, arachnids and crustaceans. The specimens were preserved with microscopic fidelity, allowing an estimate of the amount of evolutionary change over millions of years. Arthropods are more than 400 million years old, but before now the oldest record of these animals in amber dated to about 130 million years ago, in the Cretaceous period. The amber droplets, which range from 2 to 6 mm in length, were buried in the Dolomite Alps of northeastern Italy and excavated by Eugenio Ragazzi and Guido Roghi of the University of Padova. Over 70,000 droplets were screened for inclusions by a team of German scientists led by Alexander Schmidt, of the Georg-August University in Göttingen.

Two of the specimens are new species of mites, Triasacarus fedelei and Ampezzoa triassica. They are the oldest fossils of a group called Eriophyoidea, which has about 3,500 living species, all of which feed on plants and sometimes induce abnormal growths called galls on their host plants. T. fedelei and A. triassica fed on a now extinct conifer, but 97% of today's gall mites feed on flowering plants. These mites existed before the appearance of flowering plants, and they seem to have evolved and endured even when flowering plants entered the environment. The fly couldn't be identified because some of its body parts weren't well preserved.

There was a huge change in flora and fauna in the Triassic, because it came right after one of the most acute extinction pulses in the geological record, the Permian-Triassic extinction event, which occurred about 252 million years ago. These finds could help elucidate how life continued to evolve.
Zone 3 on the USDA hardiness scale refers to regions where winters are harsh, bringing low temperatures between -30 and -40 degrees Fahrenheit. In the United States, few areas are designated as zone 3: states with substantial zone 3 coverage include Montana, North Dakota, Minnesota and Wisconsin. Annual flowers are not designed to endure the winter, but lots of species will thrive in zone 3 during the summer. Morning glory is a general botanical term used to refer to more than 1,000 flowering vine species within the Convolvulaceae family. Morning glory is low-maintenance, but does require full sunlight and light watering, and some vines may need a little support in order to grow and spread properly. Its trumpet-shaped blossoms open in the early morning sunlight, lending the flower category its name. Blossoms come in blue, purple, red, white and yellow, and a single vine can grow more than 10 feet in a season. Morning glory typically dies upon first frost. Begonia is a common flower genus with more than 1,500 distinct species. They make popular garden additions because their flowers tend to be big and bright, in showy hues of red, yellow, pink and white; leaves are usually asymmetrical and marked with intricate fractal lines. Native to tropical climates, begonias can be grown year-round, but they can still thrive as annuals during the warm weather season of zone 3. In cooler temperate climates, they require partial shade and well-drained soil. Daisies are among the most common flowers in temperate regions, and the plant family includes nearly 25,000 distinct documented species. While many of these species are annuals and a variety will grow during zone 3's growing season, the extreme durability of the Arctotis genus, more commonly known as the African daisy, is well-suited to the zone's short summers. Well-defined blooms with sharp petals grow in red, orange, yellow, pink and white. African daisies do well in full sun and can thrive even under drought conditions. The petunia genus is a popular choice among ornamental flowers because petunias grow quickly, require little maintenance and tend to grow thick clusters of attractive, brightly colored blooms. Common flower colors include blue, purple, red, white and yellow, though many varieties feature multi-colored blooms and some produce blooms of different colors within a single plant. Most varieties grow best in raised planters or hanging baskets, but some petunias are designed specifically to act as groundcover. Petunias like full sunlight and well-draining soil, but require only light watering and can do well in the absence of fertilizer.
What colors do butterflies see?

Great chefs know that enjoying a meal involves more than taste, and they go to great lengths to give food visual appeal. Likewise, flowering plants use unique visual cues to attract butterflies for a tasty meal that will also help with pollination. The symbiotic relationship between flowers and butterflies has evolved so that flowers encourage butterflies and other pollinators to feed on their nectar. Plants attract potential pollinators in many ways, including by their color, scent, reflectance, size, outline, surface texture, temperature, and motion. In contrast, plants that do not depend on insect or bird pollination are unlikely to have showy or scented flowers.

To attract a potential pollinator to a particular blossom, the availability of nectar has to be advertised. These nectar guides, which are also known as "pollen guides" or "honey guides," present a visual contrast, either in form or coloring. Sometimes we can see these patterns, and sometimes they are in the ultraviolet range. Butterflies respond to the color of the petals. The color of the nectar guide of the horse chestnut tree (Aesculus hippocastanum) changes from yellow to red when nectar is no longer in production. One flower that shows a bulls-eye effect in the ultraviolet range is the black-eyed Susan (Rudbeckia hirta), which contains compounds that absorb light strongly between wavelengths of 340 nm and 380 nm. The petals of the black-eyed Susan, a large daisy-like flower, appear plain yellow to humans, but insects see a very dark center.

Butterflies vary widely in their sensitivity to light, and are considered to have the widest visual range of any form of wildlife. The Chinese yellow swallowtail butterfly (Papilio xuthus) has a pentachromatic visual system (i.e., its eyes contain five different types of photoreceptor cells, each reacting to a different band of light). It uses color vision when searching for food, and is sensitive to UV, violet, blue, green, and red wavelength peaks, suggesting color constancy. In nature, these butterflies feed on nectar provided by flowers of various colors, not only in direct sunlight but also in shaded places and on cloudy days. The windmill butterfly (Atrophaneura alcinous) has a visual range from at least 400 nm to 700 nm, while the Sara longwing butterfly (Heliconius sara) has a range from 310 nm to 650 nm.

To the human eye, many butterflies appear the same, but the butterflies themselves can often identify each other quite easily using ultraviolet markings. Shown above is the yellow Cleopatra butterfly (Gonepteryx cleopatra). The male and female little sulphur butterflies (Eurema lisa) differ only in the ultraviolet region, the males being strongly ultraviolet reflective and the females non-reflective in ultraviolet. In contrast, butterflies that are palatable to birds display significant differences in appearance. The ultraviolet patches on some butterflies are directionally iridescent, so that they appear to flicker in flight. This flickering is thought to have an important role in butterfly behavior and communication.

Butterflies tend to avoid the color green when feeding, but are attracted to it during egg laying. The next generation needs to hatch near a good source of food, as caterpillars have a voracious appetite. The green photoreceptors are used for the detection of movement, rather than when foraging.
The American Cyclopædia (1879)/Palestine

PALESTINE (Gr. Παλαιστίνη, derived from the Heb. Pelesheth, Philistia), a country of western Asia, now forming a part of the Turkish empire, bounded N. by the Lebanon mountains, which separate it from Cœle-Syria, E. and S. by the desert which separates it from Arabia and Egypt, and W. by the Mediterranean. It lies between lat. 30° 40' and 33° 15' N., and lon. 33° 45' and 36° 30' E.; length about 200 m., average breadth 60 m.; area, 12,000 sq. m.; pop. estimated at 300,000. The name Palestine was never applied by the ancient Hebrews to anything more than the southern portion of the coast region, as synonymous with Philistia; and when it occurs in the English translation of the Bible it has this sense. The earlier Greek usage was the same; but under the Romans it became the general name for the whole country of the Jews, and Josephus uses it in both the early and the later application. Modern Palestine is included in the vilayet of Syria, and contains the two subpashalics of Acre and Jerusalem. It is a “land of hills and valleys.” It is remarkably separated by mountain and desert from other countries, and its seashore is without any good harbor. The ancient harbor of Cæsarea, the principal port during the Roman dominion, was entirely artificial, and the ruins of its breakwater are now only a dangerous reef. From Tyre, which is N. of Palestine proper, to the borders of Egypt, there is now but one port, Jaffa, and this only allows landing by boats under favorable circumstances. From the coast on the west the land rises rapidly to a mountainous height in the centre, and declines on the other side to the low level of the desert, being cleft through the centre N. and S. by the deep valley of the Jordan. This depression, called by the Arabs el-Ghor, is the most characteristic feature of the physical geography of Palestine, and corresponds with the valley of the Orontes and Leontes in Cœle-Syria, and with the wady Arabah in Arabia Petræa. The coast level varies much in breadth, being in some places only a narrow pass between the mountains and the sea, and in others expanding into plains of considerable width. The southern portion of the coast level is termed in the Scriptures the plain or low country (Heb. Shefelah), and the western part of it was the abode of the Philistines. This plain is very fertile, and is covered with corn fields. N. of it is a plain less level and fertile, the Sharon of the Scriptures, a land of fine pastures, which under the Roman empire contained Cæsarea, the Roman capital of Palestine. Beyond Cæsarea the plain grows narrower, until it is terminated by Mt. Carmel, N. of which lies the plain of Acre, about 15 m. long from N. to S., and about 5 m. in average breadth from the seashore to the hills on the east. Mt. Carmel is a ridge about 10 m. long and 1,500 ft. high, stretching N. by W., and terminating at the sea in a high promontory which encloses on the south the bay of Acre. North of Mt. Carmel are the Lebanon mountains (in the wider sense), which consist of two parallel ranges running N. into Syria, and enclosing between them a beautiful and fertile plain, called in Scripture the valley of Lebanon, and by the classic writers Cœle-Syria, the “hollow or enclosed Syria.” This plain, only the extreme southern portion of which is in Palestine, is 90 m. long and from 10 to 20 m. broad, except at the S. end, where it is narrower.
The western range of these mountains runs nearly parallel to the sea, into which it projects several promontories; and its average elevation is about 7,000 ft., while its loftiest summits, including Jebel Timarun (10,533 ft. according to Burton) and Jebel Makmel (9,998 ft.), are covered with perpetual snow. These summits are outside of Palestine, as is the natural amphitheatre in which grow the finest specimens that remain of the famous cedars that once covered all the mountains of Lebanon. This great western range was called Libanus by the classic writers, and to the eastern range they gave the name of Anti-Libanus. In the Scriptures both ranges are called Lebanon. They are composed of masses of limestone rock. The general elevation of Anti-Libanus is less than that of Libanus, but at its southern extremity rises the conical snow-clad peak of Hermon, called by the Arabs Jebel esh-Sheikh (the chief), or eth-Thelj (the snowy), to the height of about 10,000 ft., rivalling the highest peaks of Libanus, and overlooking all Palestine. S. of Hermon the Anti-Libanus sinks into the hills of Galilee, which rise from a table land elevated about 1,000 ft. above the sea, and sloping on the east to the Jordan, on the west to the plain of Acre, and on the south to the plain of Esdraelon. The last named plain, extending from the sea to the Jordan, is often mentioned in the Scriptures under the names of Megiddo, Jezreel, and others, and was the great battle field of Jewish history. It is traversed by ridges known as the mountains of Gilboa and Little Hermon. On its N. E. border stands Mt. Tabor, now known as Jebel et-Tur, the traditional scene of the transfiguration. Though only 1,800 ft. high, it is one of the most remarkable and interesting of the mountains of Palestine. It is sometimes called the southern termination of the Lebanon range, but rises abruptly from the plain, and is entirely insulated except on the west, where a narrow ridge joins it to the rocky hills about Nazareth. It is densely covered with trees and shrubs, except a small tract on the top. Its isolated summit commands a panoramic view of the principal places of Samaria and Galilee, and was the rendezvous of Barak from which he rushed down to the defeat of Sisera. In the middle ages it was the resort of many hermits. It is now covered with ruins of a fortress of Saracenic architecture, while there are also remains of a far earlier period. S. of the plain of Esdraelon stretches an unbroken tract of mountains, about 30 m. in breadth, and rising in height toward the south till near Hebron it attains an elevation of 3,000 ft. above the sea. The northern part of this region comprised Samaria, and the southern Judea. The principal mountains of Samaria are Ebal and Gerizim, which rise to the height of about 2,700 and 2,600 ft. respectively above the sea, the former N. and the latter S. of a narrow valley in which stands the town of Nablus, the ancient Shechem, the capital of the ten tribes after their secession from the rest of Israel. — The hills of Judea are masses of barren rock, for the most part of moderate apparent elevation, though their general height above the sea is 2,000 or 3,000 ft. On their E. face these mountains descend abruptly to the great valley of the Jordan, their general slope being furrowed by steep and rugged gorges, which form the beds of winter torrents. 
The precipitous descent from Jerusalem to Jericho is famous for difficulty and danger, and is an example of the valleys descending to the Jordan through all its length. The W. slope of the hills is more gradual and gentle, but still difficult of passage, and the central heights of Palestine are a series of natural fastnesses of great strength; and both in ancient and modern times armies have traversed the western plains from Egypt to Phœnicia without disturbing the inhabitants of the hill country. The Jordan is the only important river of Palestine. Its sources are mainly on the southern and western declivity of Mt. Hermon, and after a short course its head streams unite and flow into Lake Merom, now called Lake Huleh. After quitting this the river is sluggish and turbid for a short distance, till it passes over a rocky bed where its mud is deposited, and then rushes on through a narrow volcanic valley. About 13 m. below Lake Huleh it enters the lake of Gennesaret or Tiberias, or sea of Galilee, which is between 600 and 700 ft. lower than the level of the Mediterranean. On issuing from the S. end of this lake the river enters a valley from 5 to 10 m. wide, through which its course is so winding that within a space of 60 m. in length the river traverses 200 m. and descends 27 rapids through the ever deepening valley, until it finally enters the Dead sea at a depression of a little over 1,300 ft. below the level of the Mediterranean, after a total direct course from N. to S. of 120 m. At the mouth the river is 180 yards wide. Except the Jordan, Palestine has no streams considerable enough to be called rivers; those so called in its history are mere brooks or torrents which become dry in summer. The Kishon, now Nahr el-Mukutta, which enters the bay of Acre near Mt. Carmel, flows from Mt. Tabor, and in winter and spring is a large stream, while during the rest of the year it has water only in the last 7 m. of its course. The Kanah enters the Mediterranean between Cæsarea and Jaffa. The Arnon, often mentioned in Scripture, is now called the wady Modjeb; it rises near the S. E. border of the country, and flows circuitously to the Dead sea. The Jabbok, now the wady Zurka, N. of the Arnon, flows a parallel course into the Jordan. The brook Kedron flows through the valley of Jehoshaphat, on the E. side of Jerusalem, to the Dead sea, but is merely a torrent and not a constant stream. Springs and fountains of remarkable size, however, are found in different parts of the country. The principal lakes are the Dead sea in the south and the lake of Gennesaret in the north. — In many parts of the country, and especially in the valley of the Jordan and the vicinity of the Dead sea, there are indications of volcanic origin, and earthquakes are often felt. The mountains are mostly of oolitic limestone of a light gray color. Black basalt is very common. The general character of the scenery is stern and sombre. “Above all other countries in the world,” says Dean Stanley, “it is a land of ruins. In Judea it is hardly an exaggeration to say that, while for miles and miles there is no appearance of present life or habitation, except the occasional goatherd on the hillside or gathering of women at the wells, there is hardly a hilltop of the many within sight which is not covered with the vestiges of some fortress or city of former ages. 
The ruins we now see are of the most distant ages: Saracenic, crusading, Roman, Grecian, Jewish, extending perhaps even to the old Canaanitish remains before the arrival of Joshua.” (See Bashan.) — Palestine has a mild and steady climate, with a rainy season in the latter part of autumn and in winter, and a dry and almost rainless season constituting the rest of the year. The heat of summer is oppressive in the low lands, especially in the deep depression of the Jordan valley, but not among the hills; and the cold of winter is not sufficient to freeze the ground, though snow sometimes falls to the depth of a foot at Jerusalem. Though the mountains have an exceedingly barren appearance, the plains and valleys are remarkably fertile. The valley S. of Bethlehem is irrigated and cultivated with care, and has a rich and beautiful appearance. The hill country of the south is dryer and less productive than that of the north. In ancient times even the mountains were cultivated by means of terraces; but in consequence of wars and the depopulation of the country, the terraces have been neglected and broken down, and the soil of the mountains swept by rains and torrents into the valleys. On some of the hills, however, the terraces have been rebuilt, and planted with olives, figs, and the vine; but the greater part are either bare or covered with a rough growth of stunted oak. There are now no forests, and most of the trees of the country are small. The olive, fig, and pomegranate are largely cultivated, and are the most common trees. Besides these are the terebinth or turpentine tree, the oak, sycamore, mulberry, pine, pistachio, laurel, cypress, myrtle, almond, apricot, walnut, apple, pear, orange, and lemon. The number of shrubs and wild flowers is very great, and always attracts the attention of travellers; and there is such a prevalence of anemones, wild tulips, poppies, and other red flowers, as to give a scarlet color to the landscape. Palestine has always been famous for its grapes, which are remarkable alike for size and flavor. The chief agricultural productions are wheat, barley, maize, and rye. Rice is grown on the marshy borders of the Jordan and some of the lakes. Peas, beans, and potatoes are cultivated, and also tobacco, cotton, and sugar cane. The agriculture is of a rude and negligent character; the fields are seldom fenced, the few divisions being by dilapidated stone walls, or by irregular hedges of the prickly pear. More attention is paid to pastoral pursuits, and flocks of sheep and goats are very numerous. Cattle are few and poor. The roads being impracticable for wheeled vehicles, camels are the principal beasts of burden. Asses and mules are much used for riding, and fine Arabian horses are sometimes met with. The chief wild animals are bears, wild boars, panthers, hyaenas, jackals, wolves, foxes, and gazelles. Lions, which were found here in ancient times, are now extinct. Birds are few in number, though there are many distinct species, among which may be mentioned the eagle, vulture, osprey, kite, hawk, crow, owl, cuckoo, kingfisher, woodpecker, woodcock, partridge, quail, stork, heron, pelican, swan, goose, and duck. Venomous serpents are unknown, and the most noxious animals are scorpions. Mosquitoes are very common, and bees are extremely plentiful, depositing their honey in hollow trees and holes in the rocks. Locusts occasionally appear in vast swarms and devour every species of vegetation. — The present inhabitants of Palestine are a mixed race of very varied origin.
The Mohammedans are the dominant and most numerous sect, and are composed of a few Turks who occupy the higher government situations, and of the great body of the common people, who are descended from mixed Arab, Greek, and ancient Syrian ancestors, the last element greatly preponderating. They are noble-looking, graceful, and courteous, but illiterate, fanatical, and indolent. The Christians are almost entirely of Syrian race, descendants of those who occupied the country when it was conquered by the Saracens. They belong mostly to the Greek church, of which there is a patriarch at Jerusalem, who has ecclesiastical jurisdiction over the whole of Syria. Under him are eight bishops, whose sees are Nazareth, Acre, Lydda, Gaza, Sebaste, Nablus, Philadelphia, and Petra. There are also a few Maronites and Roman Catholics in the large towns, and in Jerusalem about 200 Armenians under a patriarch of their own faith. The Jews, mostly from Spain, with a few from Poland and Germany, are about 10,000 in number, and live almost exclusively in the towns of Jerusalem, Hebron, Tiberias, and Safet. The population is less than one tenth of what it was in ancient times. — Palestine was first known as Canaan. But this name was confined to the country between the Mediterranean and the Jordan, the principal region E. of that river being called the land of Gilead. Palestine was subsequently called the land of promise, the land of Israel, Judah, Judea, and the Holy Land. The term Judea, though in later periods of Jewish history frequently applied to the whole country, belonged, strictly speaking, only to the southern portion of it. In the earliest times in which Palestine or Canaan becomes known to us, it was divided among various tribes, whom the Jews called collectively Canaanites. The precise locality of these nations is not in every case distinctly known. The Kenites, the Kenizzites, the Kadmonites, and a part of the Amorites lived E. of the Jordan; while W. of that river dwelt the Hittites, the Perizzites, the Jebusites, and most of the Amorites, in the hill country of the south; the Canaanites proper, in the middle; the Girgashites, along the E. border of the lake of Gennesaret; and the Hivites, mostly in the north among the mountains of Lebanon. The southern part of the coast was occupied by the Philistines and the northern by the Phœnicians. After the conquest of Canaan by the Israelites under Moses and Joshua, the land was distributed among the tribes. Judah, Simeon, Benjamin, and Dan occupied the south; Ephraim, half of Manasseh, and Issachar, the middle; and Zebulon, Naphtali, and Asher, the north. Reuben, Gad, and the other half of Manasseh were settled beyond the Jordan. After the division into two kingdoms by the secession of the ten tribes (about 975 B. C.), the boundary line between them was the northern limit of the tribe of Benjamin. In the time of Christ Palestine was subject to the Romans, and the country W. of the Jordan was divided into the provinces of Galilee, Samaria, and Judea. Galilee was that part of Palestine N. of the plain of Esdraelon, and was divided into lower or southern and upper or northern Galilee. Samaria occupied nearly the middle of Palestine. Judea as a province corresponded to the N. and W. parts of the ancient kingdom of Judah; but the S. E. portion formed a part of the territory of Idumæa. 
On the other side of the Jordan the country was called Peræa, and was divided into eight districts, viz.: 1, Peræa in a limited sense, which was the southernmost district, extending from the river or brook Arnon to the river Jabbok; 2, Gilead, N. of the Jabbok; 3, Decapolis, or the district of ten cities, which, as nearly as can be ascertained, were Scythopolis or Bethshan (which however was on the W. side of the Jordan), Hippos, Gadara, Pella, Philadelphia or Rabbah, Dion, Canatha, Galasa or Gerasa, Raphana, and perhaps Damascus; 4, Gaulonitis, extending N. E. of the upper Jordan and of the lake of Gennesaret; 5, Batanea, E. and S. E. of Gaulonitis; 6, Auranitis, with Ituræa, N. E. of Batanea, now known as the desert of Hauran; 7, Trachonitis, N. of Auranitis; 8, Abilene, in the extreme north, among the mountains of Anti-Libanus. — The earlier part of the history of Palestine is treated in the article Hebrews. The country remained subject to the Roman and Byzantine emperors for more than six centuries after Christ. The Jews, after frequent rebellions, in one of which, A. D. 70, Jerusalem was destroyed by Titus, were mostly driven from the country and scattered as slaves or exiles over the world. With the spread of Christianity, Palestine became the resort of vast numbers of pilgrims, and Jerusalem was made the seat of a patriarch. The emperor Constantine and his mother Helena erected throughout the land costly memorials of Christian faith, marking with churches, chapels, or altars every spot supposed to have been the scene of the acts of the Saviour. In 614 the Persians under Chosroes II. invaded Palestine, and, assisted by the Jews to the number of 26,000, captured Jerusalem. It was regained by Heraclius, but was conquered by the Mohammedan Arabs in 637. For the next two centuries the country was the scene of civil war between the rival factions of the Ommiyade, the Abbasside, and the Fatimite caliphs. From the middle of the 8th century it was a province of the Abbasside caliphs of Bagdad till 969, when it fell under the power of the Fatimite rulers of Egypt. In 1076-'7 it was conquered by the Seljuk Turks, but in 1096 it was regained by the Egyptian sultans, in whose possession it was when invaded by the crusaders in the following year. The crusaders made Godfrey of Bouillon ruler of Jerusalem, and he and his successors reigned in Palestine till Jerusalem was retaken by Sultan Saladin in 1187, and the Christian kingdom overthrown. Two years afterward another crusade was undertaken under Philip, king of France, Richard I. of England, and the emperor Frederick Barbarossa of Germany. It did not regain Jerusalem, but partially restored the Christian rule upon the coast. Another crusade in 1216, chiefly of Hungarians and Germans, met with little more success. Still another, undertaken by the emperor Frederick II. in 1228, resulted in the recovery of Jerusalem, and the Christian dominion was reëstablished over a considerable extent of territory; but after various vicissitudes of fortune, and in spite of repeated succors from Europe, it finally yielded to the arms of the Egyptian Mamelukes in 1291. The sultans of Egypt held it till 1517, when it was conquered by the Turks, in whose possession it has remained till the present time, with the exception of a brief occupation in 1839-'41 by the forces of the rebellious pasha of Egypt, Mehemet Ali. 
— Much attention has been given in recent times to the careful exploration of Palestine, with important results in the identification of places named in Scripture. This began with the work of Dr. Edward Robinson, the results of which were published in his “Biblical Researches” (3 vols. 8vo, Boston, 1841) and “Later Researches” (1856). Among the most recent explorations have been those of the British society organized in 1865 under the name of the “Palestine Exploration Fund,” the reports of which appear in the work of Captains Wilson and Warren, entitled “The Recovery of Jerusalem” (8vo, London, 1871), and in quarterly statements issued since that work. Among the results of the English explorations have been the trigonometrical survey of a great part of Samaria and Judea, the discovery of some remarkable Greek inscriptions of Christian origin within the Haram enclosure at Jerusalem, and the identification of a great number of Biblical and classical sites, among which are the rock Etam, Alexandrium, Chozeba, Maarath, the cliff of Ziz, Hareth, Ziph, Maon, the hill of Hachilah, the Levitical city of Debir, Ecbatana (a Roman city on Mt. Carmel), Archelais, Sycaminum, Eshtaol, Seneh (the scene of Jonathan's victory and the site of the Philistine camp), the rock Oreb, the wine press of Zeeb, the altar of Ed, the high place of Gibeon, the city of Nob, and the cave of Adullam. Among the latest identifications is Bethabara, the scene of the baptizing by John, which Lieut. C. E. Conder in 1875 fixed at the ford known as Makhadet Abara, holding that it is a different place from the Bethabara of the book of Judges. The American “Palestine Exploration Society,” organized in 1871, sent out expeditions in 1872 under command of Lieut. Edgar L. Steever, jr., and in 1874 under Prof. H. M. Paine. This society has left the region about Jerusalem to the British organization already in the field, and has undertaken to survey the region E. of the Jordan. It has published the results of its work in three “Statements,” issued in 1871, 1873, and 1875. The report of 1875 states that Mt. Pisgah has been identified with the S. W. summit of a triple mountain called by the Arabs Jebel Siaghah, about 10 m. E. of the N. end of the Dead sea. (See Pisgah.) — Among the most important works on Palestine, besides those already named, are those of Kitto, “Palestine” (London, 1841); Munk, Palestine: description géographique, historique et archéologique (Paris, 1845; German ed. by M. A. Levy, Breslau, 1871); Lynch, “Official Report of the Expedition to the Dead Sea” (8vo, Philadelphia, 1849); Churchill, “Mount Lebanon” (4 vols. 8vo, London, 1853-'62); Stanley, “Sinai and Palestine” (8vo, 1856); Prime, “Tent Life in the Holy Land” (12mo, New York, 1857); Porter, “Handbook for Travellers in Syria and Palestine” (2 vols., London, 1858; 2d ed., 1868); Thomson, “The Land and the Book” (2 vols. 8vo, New York, 1859); Tristram, “Topography of the Holy Land” (8vo, 1872); and Ritter, Die Erdkunde, vols. xiv.-xvii., translated into English under the title of “Comparative Geography of Palestine and the Sinaitic Peninsula” (4 vols. 8vo, Edinburgh, 1866).
Evolution is often thought of as a slow process, taking thousands, if not millions, of years. However, a new study in The American Naturalist found that Trinidadian guppies underwent evolution in just eight years, or thirty generations. Less than a decade ago Swanne Gordon, a graduate student at UC Riverside, and her team introduced Trinidadian guppies into the Damier River in the Caribbean island of Trinidad. They placed the guppies above a waterfall to allow them to flourish in a largely predator-free environment. In eight years the guppies had undergone noticeable evolution: they produced larger and fewer offspring. [Photo: Trinidadian guppies are small freshwater fish. Photo by Paul Bentzen.] “High-predation females invest more resources into current reproduction because a high rate of mortality, driven by predators, means these females may not get another chance to reproduce,” explained Gordon. “Low-predation females, on the other hand, produce larger embryos because the larger babies are more competitive in the resource-limited environments typical of low-predation sites. Moreover, low-predation females produce fewer embryos not only because they have larger embryos but also because they invest fewer resources in current reproduction.” To test just how well-adapted to their environment the guppies in the test site had become, Gordon and her team introduced a new population of guppies, who lived under heavy predation, into the site. The team studied the competing populations for four weeks, and found that the locally adapted guppies fared far better, especially the juveniles. Local juvenile survival was over 50 percent higher than that of the introduced juveniles from the predated population. “This shows that adaptive change can improve survival rates after fewer than ten years in a new environment,” Gordon said. “It shows, too, that evolution might sometimes influence population dynamics in the face of environmental change.”
During upheaval in Libya in 2013, a window of opportunity opened for scientists from the University of Kansas to perform research at the Zallah Oasis, a promising site for unearthing fossils from the Oligocene period, roughly 30 million years ago. From that work, the KU-led team last week published a description of a previously unknown anthropoid primate — a forerunner of today’s monkeys, apes and humans — in the Journal of Human Evolution. They’ve dubbed their new find Apidium zuetina. Significantly, it’s the first example of Apidium to be found outside of Egypt. “Apidium is interesting because it was the first early anthropoid primate ever to be found and described, in 1908,” said K. Christopher Beard, Distinguished Foundation Professor of Ecology and Evolutionary Biology and senior curator with KU’s Biodiversity Institute, who headed the research. “The oldest known Apidium fossils are about 31 million years old, while the youngest are 29 million. Before our discovery in Libya, only three species of Apidium were ever recovered in Egypt. People had come up with the idea that these primates had evolved locally in Egypt.” Beard said evidence that Apidium had dispersed across North Africa was the key facet of the find. He believes shifting climatic and environmental conditions shaped the distribution of species of Apidium, which affected their evolution. “We’ve found evidence that climate change — not warming, but cooling and drying — across the Eocene-Oligocene boundary probably is the root cause in kicking anthropoid evolution into overdrive,” he said. “All of these anthropoids, which were our distant relatives, were living up in the trees — none of them were coming down. When the world became cooler and dryer in this period, what was previously a continuous belt of forest became more fragmented. This created barriers to gene flow and movement of animals from one part of forest to what used to be adjacent forest.” With a forest broken up, there was an inhibition of gene flow that through time resulted in speciation, or the creation of new species, according to the KU researcher. “Animals that are sequestered become different species over millions of years,” Beard said. “As the climate oscillates again, you’ve got different species of Apidium. As forests expand and contract, now you’ve got competition between species of Apidium that have never seen each other before. One species outcompetes the other, the other goes extinct, and we think that’s what we’re picking up with this Libyan Apidium, which is related to the youngest and largest species of Apidium known from Egypt.” Beard said that Apidium zuetina would have been physically similar to modern-day squirrel monkeys from South America, but with smaller brains, and would have dined on fruits, nuts and seeds. “We know that Apidium was a very active arboreal monkey, a really good leaper,” he said. “We know they actually had fused lower-leg bones just above the ankle joint. That’s really unusual for anthropoid primates, and the only reason for it to happen is because you like to jump a lot, as it stabilized the join between those bones and the ankle.” The team identified Apidium zuetina through detailed analysis of its teeth. “All of the fossils we have so far are just teeth, not even jaw bones — but fortunately, the teeth of these anthropoids are so distinct and diagnostic that they function like fingerprints at a crime scene,” Beard said. “Studying details of cusps and crests on teeth, we can determine evolutionary relationships. 
It might sound like thin evidence, but I suspect even with whole skeletons we’d still be focused on teeth to determine relationships. This is because teeth evolve rapidly in response to shifting diets, while an animal’s skull and skeleton typically evolve more slowly. Fortunately for paleontologists, teeth are well-documented in the fossil record because tooth enamel is the hardest part of a mammal body, durable and easy to fossilize.” Yet, the researchers chose to name Apidium zuetina not after any of its physical characteristics, but after the Zuetina Oil Company that made the dangerous Libyan fieldwork possible. “Without their logistical support, we couldn’t have done this work at all,” Beard said. “We did this just after the end of the Libyan civil war that led to the overthrow of Gadhafi.” Beard said the discovery took place during a brief lull in violence in Libya. But the trip to the Zallah Oasis was precarious nonetheless. “We knew it was risky, but we thought we could go because of our local collaborator, Mustafa Salem, a geology professor at Tripoli University,” he said. “He’s revered as a father figure among Libyan geologists. An oil facility was close to some interesting sites, and after Mustafa contacted a former student who was working there, they provided our team with charter flights to an airstrip near the oil facility. Without that alone, we couldn’t have done our fieldwork — the roads are too dangerous with bandits and the like. They also gave us lodging, food, water and security.” Beard said armed guards accompanied the team everywhere, manning trucks mounted with antiaircraft guns. “They never asked for a nickel from us in return,” said the KU researcher. “There was an Islamist attack on a gas facility at the same time near the Algerian-Libyan border, and they killed 30-40 workers. So the security protected us and potentially saved our lives.”
Reference: K. Christopher Beard, Pauline M.C. Coster, Mustafa J. Salem, Yaowalak Chaimanee, Jean-Jacques Jaeger. A new species of Apidium (Anthropoidea, Parapithecidae) from the Sirt Basin, central Libya: First record of Oligocene primates from Libya. Journal of Human Evolution, 2016; 90: 29. DOI: 10.1016/j.jhevol.2015.08.010
Toe amputation is the surgical removal of all or part of a toe. Gangrene, frostbite, and atherosclerosis are the most common conditions which might require toe amputation. Amputation surgery on the toe is fairly simple, usually requires little time in the hospital and, once healed, leaves the patient with few side effects or walking disabilities. Gangrene occurs when tissue dies due to infection or a lack of blood flow to an area, and is a common occurrence in extremities such as fingers and toes. Severe, and often poorly treated, injuries such as burns or severe trauma resulting in crushed bone and broken skin are common causes of gangrene. Inadequately attended circulatory diseases such as diabetes and arteriosclerosis can also cause gangrene. When gangrene occurs, it is essential to treat it immediately to prevent more tissue from dying and any spread of infection. In extremities such as the toe, the treatment is typically amputation followed by an aggressive course of antibiotics. Frostbite, or tissue death due to cold exposure, can sometimes lead to toe amputation. When the body is exposed to extreme cold for a long period of time, it sacrifices the extremities to save the vital organs. This is done by constricting the blood vessels in the arms and legs, reserving blood for the vital organs instead and keeping the core body warmer. The lack of blood circulation in the extremities deprives the tissue of oxygen and nutrients, and causes cell death. Toes and other distal extremities are the body parts most often affected by frostbite, and may require amputation. Atherosclerosis is a vascular disease in which the walls of the arteries thicken, reducing the circulation of blood. Extremities are often the first parts of the body to experience the severe effects of long-term and poorly treated atherosclerosis. In the toes, tissue death sets in after prolonged starvation of blood. Diabetes, smoking, and hypertension increase the risks of atherosclerosis. Toe amputation is fairly simple surgery, lasting only about one hour. Just before surgery, the patient is given intravenous antibiotics and general anesthesia, and the foot is thoroughly cleaned and disinfected. The skin at the base of the toe is opened, and the blood vessels are closed off. Bones and muscles in the toe are then removed, and the skin is stitched closed. If the area is severely infected, a drain may be put in place to prevent the spread of infection, or the area may be left open and packed with special wound dressings which can be changed and monitored. The hospital stay following toe amputation is usually from one to seven days, depending on the presence and degree of infection. Following surgery, the patient will receive physical therapy to learn to balance without the amputated toe while walking and running. A special shoe may be needed for a few weeks as the wound heals.
There is more to an outline than Roman numerals! Think of it as an organizing tool — most of the work is done in the intellect instead of on the page. Outlining can be required before a work is written as a way to organize an essay, short story, book report, or other form of writing. But it can also be used to summarize another’s work. Outlining while reading a book, for example, is one way to get more out of what we are reading. Ruth Beechick frequently mentions asking a student to make an outline of an article or essay (preferably a well-written article or essay), and on a subsequent day, writing his own article or essay from the outline. As an example,

Take notes [on a paragraph or two]; outline them if you can. Wait one or two days, then write two paragraphs from your notes. Compare your paragraphs with [the author’s], and find ways to improve your writing. Wait again and write new paragraphs from your notes. Did you improve?
Dr. Ruth Beechick, taken from the Parent-Teacher Guide for the Original McGuffey Readers

If you are looking at another’s work, you will need to be able to find the main idea, and then list the details.

After reading something full of information that you want to remember, develop an outline. You don’t have to do the outline in order. First you could get down the main points, then back up and put information under each main point. Your second or third time through you may change your mind about what points are the main ones, but that’s all right. When your information is all listed, ask your teacher to show you how people number and label the parts of an outline, and how the subpoints should be indented under the main ones.
Dr. Ruth Beechick, You Can Teach Your Child Successfully

Outlines typically take the following example format:

I. Main Point 1
   A. Subpoint 1
   B. Subpoint 2
II. Main Point 2
   A. Subpoint 1
   B. Subpoint 2
   C. Subpoint 3
III. Main Point 3

There is no requirement to have a certain number of points or subpoints — although some would like you to believe you must always have at least two subpoints. That would be fine if writers always followed such rules, but they don’t. You may not always find two subpoints. For this reason, and others, always try to fit the tool to the form, rather than the other way around. Some outlines may end up being a simple list.
- Practice finding the main point of a paragraph.
- Once finding the main point is easy, begin to identify the subpoints.
- When subpoints are easily identified, begin looking for examples and flesh out the outline.
- Once a student is competent at outlining another’s work, he can begin to write an outline that he will follow for his own work.
This interactive from ReadWriteThink walks you through the process.
Many researchers have investigated how human language develops, with psycholinguists, sociolinguists and educational linguists searching for evidence of how natural language intersects with learning in different domains, specifically in the learning of mathematics. Children are instructed in the symbolic and abstract world of maths as they engage with their world experientially. They learn by example, from people who engage with mathematics themselves in a systematic way. An example of such engagement is the imitative saying of words in the count list (one, two, three…), preparing them for maths learning and serving as a place-holder for later conceptual understanding. Children pick up on a (rather long) ‘meme’ – the list of sequenced words that they hear and repeat. They learn the ‘count meme’ and say it, but their language is not numerically meaningful at first, although it is a source of great enjoyment – especially if they note how their parents react because the two-year-old is not only learning to speak, but also to ‘count’! Well, this is not yet real counting. Most children learn to count (for real) only when they can see the one-to-one correspondence between objects that they count out. It takes a long time to cement this knowledge. It also takes a long time to get the sequence right and to establish an understanding of which number comes ‘before’ or ‘after’ another number. Children develop a personal mental line of numbers, but its neurological representation is not the same for each child. It is not as if there is a straight ‘numberline’ representation in each person’s mind. I still have the same representation that I had as a grade one child. My mental image is more like a wobbly line graph than a horizontal line. In this type of cognitive development, as in the learning of the count list, the language used by a child features strongly. Later on, it can become one of the obstacles in solving word problems. Why language of learning matters While learning to make their world mathematical, children rely on the language in which they encounter facts, procedures and concepts of maths – communicated in a specific language. But the structures of languages differ. Spoken languages sound very, very different. When I go into a grade R class where everyone speaks Sesotho, and then go to a class next to it where everyone speaks isiZulu, I hear different intonations, accents, sounds, pauses and also tempo. The big picture of language as a powerful tool in early education is very much like digging and filling the foundations of a house one wishes to build. So, if children come to school they learn new words and sentences in their home language (or not). They also learn to identify written symbols (1, 2, 3…) which, at the same time, have a linguistic name too. On top of that they have to learn the procedures of how to use all of this data, and then try to build a concept. They do all of this at quite a speed – in one language, with some South African code-switching to English here and there when the instruction is not in English (or Afrikaans). Most children in our country move on to grade 4 and they learn via a dense school curriculum and, on top of that, they encounter all of this in English – a very challenging language to learn to read and write and to get to know the logic of its grammar and syntax.
Considering all of this, I would say that one way to prepare young children to learn mathematics and science is to insert English as a language of reading in the areas of Science, Technology, Engineering and Mathematics (STEM) as soon as possible. In the SARChI Chair that I hold on the University of Johannesburg Soweto campus, this is our main project. We are investigating ways to bring the English terminology and syntax into English readers (books for learning to read) and we are developing reading tests that use the content knowledge of STEM in the SA curriculum. One of the assets of a multilingual country is that we can translate back and forth and look for meaning by accessing other languages. Ensuring that each child can read and communicate in their home language is the first component of the building of their house of knowledge. But we cannot fully utilise this asset when English is not firmly established by the middle grades of primary school.

Author: Professor Elizabeth Henning, SARChI Chair in Integrated Studies of Learning Language, Mathematics and Science in the Primary School, University of Johannesburg

Notes:
1. To this day when I imagine numbers in a sequence, not using symbols, the mental picture is this line graph. The horizontal numberline is a good heuristic too, but not always, because it is nearly always represented as a straight line. It is one version.
2. Code-switching: switching from one language of instruction to another language of instruction during teaching and learning.
Introduction

Totalitarianism refers to an authoritarian political system or state that regulates and controls nearly every aspect of the public and private sectors. Totalitarian regimes establish complete political, social, and cultural control over their subjects, and are usually headed by a charismatic leader. In general, Totalitarianism involves a single mass party, typically led by a dictator; an attempt to mobilize the entire population in support of the official state ideology; and an intolerance of activities which are not directed towards the goals of the state, usually entailing repression and state control of business, labor unions, churches and political parties. A totalitarian regime is essentially a modern form of authoritarian state, requiring as it does an advanced technology of social control. Totalitarian regimes or movements tend to offer the prospect of a glorious, yet imaginary, future to a frustrated population, and to portray Western democracies and their values as decadent, with people too soft, too pleasure-loving and too selfish to sacrifice for a higher cause. They maintain themselves in political power by various means, including secret police, propaganda disseminated through the state-controlled mass media, personality cults, the regulation and restriction of free speech, single-party states, the use of mass surveillance and the widespread use of intimidation and terror tactics. Totalitarianism is not necessarily the same as a dictatorship or autocracy, which are primarily interested in their own survival and, as such, may allow for varying degrees of autonomy within civil society, religious institutions, the courts and the press. A totalitarian regime, on the other hand, requires that no individual or institution is autonomous from the state's all-encompassing ideology. However, in practice, Totalitarianism and dictatorship often go hand in hand. The term "Totalitarismo" was first employed by "the philosopher of Fascism" Giovanni Gentile (1875 - 1944) and Benito Mussolini (1883 - 1945) in early 20th-century Fascist Italy. It was originally intended to convey the comforting sense of an "all-embracing, total state", but it soon attracted critical connotations and unflattering comparisons with Liberalism and democracy. Totalitarianism does not necessarily align itself politically with either the right or the left. Although most recognized totalitarian regimes have been Fascist and ultra-Nationalist, the degraded Communism of Stalin's Soviet Union and Mao Zedong's People's Republic of China were equally totalitarian in nature, and the phrase "Totalitarian Twins" has been used to link Communism and Fascism in this respect.

History of Totalitarianism

It can be argued that Totalitarianism existed millennia ago in ancient China under the political leadership of Prime Minister Li Si (280 - 208 B.C.), who helped the Qin Dynasty unify China. Under the ruling Legalism philosophy, political activities were severely restricted, all literature destroyed, and scholars who did not support Legalism were summarily put to death. Something very similar to Totalitarianism was also in force in Sparta, a warlike state in Ancient Greece, for several centuries before the rise of Alexander the Great in 336 B.C. Its “educational system” was part of the totalitarian military society and the state machine dictated every aspect of life, down to the rearing of children.
The rigid caste-based society which Plato described in his "Republic" had many totalitarian traits, despite Plato's stated goal (the search for justice), and it was clear that the citizens served the state and not vice versa. In his "Leviathan" of 1651, Thomas Hobbes envisioned an absolute monarchy exercising both civil and religious power, in which the citizens are willing to cede most of their rights to the state in exchange for security and safety. Niccolò Machiavelli's "The Prince" touched on totalitarian themes, arguing that the state is merely an instrument for the benefit of the ruler, who should have no qualms at using whatever means are at his disposal to keep the citizenry suppressed. Most commentators consider the first real totalitarian regimes to have been formed in the first half of the 20th century, in the chaos following World War I, at which point the sophistication of modern weapons and communications enabled totalitarian movements to consolidate power in:
- the Soviet Union under Joseph Stalin (1878 - 1953), from 1928 to 1953.
- Italy under Benito Mussolini (1883 - 1945), from 1922 to 1943.
- Nazi Germany under Adolf Hitler (1889 - 1945), from 1933 to 1945.
- Spain under Francisco Franco (1892 - 1975), from 1936 to 1975.
- Portugal under António de Oliveira Salazar (1889 - 1970), from 1932 to 1974.
Other more recent examples, to greater or lesser degrees, include: the People's Republic of China under Mao Zedong, North Korea under Kim Il Sung, Cuba under Fidel Castro, Cambodia under Pol Pot, Romania under Nicolae Ceausescu, Syria under Hafez al-Assad, Iran under Ayatollah Khomeini and Iraq under Saddam Hussein.
With dwindling gasoline reserves, the search for more renewable energy sources has been ongoing for quite some time, with scientists looking at the possibility of heavy alcohols like isobutanol as alternative sources of energy. Apart from being more compatible with current gasoline-based infrastructure, they also provide more energy when compared to ethanol. However, before the use of isobutanol becomes practical, scientists need to figure out a way of producing it from renewable resources on a large scale. Biologists and chemical engineers from MIT have come up with a way to drastically increase isobutanol production in yeast. The scientists engineered the yeast such that isobutanol synthesis occurs entirely in the mitochondria (the cell structures which produce energy and host numerous biosynthetic pathways). Through the use of this approach, the scientists achieved a 260% boost in isobutanol production.
This chapter describes the Emacs commands that add, remove, or adjust indentation.

TAB
     Indent current line "appropriately" in a mode-dependent fashion.
LFD
     Perform RET followed by TAB (newline-and-indent).
M-^
     Merge two lines (delete-indentation). This would cancel out the effect of LFD.
C-M-o
     Split line at point; text on the line after point becomes a new line indented to the same column that it now starts in (split-line).
M-m
     Move (forward or back) to the first nonblank character on the current line (back-to-indentation).
C-M-\
     Indent several lines to the same column (indent-region).
C-x TAB
     Shift a block of lines rigidly right or left (indent-rigidly).
M-i
     Indent from point to the next prespecified tab stop column (tab-to-tab-stop).
M-x indent-relative
     Indent from point to under an indentation point in the previous line.

Most programming languages have some indentation convention. For Lisp code, lines are indented according to their nesting in parentheses. The same general idea is used for C code, though many details are different. Whatever the language, to indent a line, use the TAB command. Each major mode defines this command to perform the sort of indentation appropriate for the particular language. In Lisp mode, TAB aligns the line according to its depth in parentheses. No matter where in the line you are when you type TAB, it aligns the line as a whole. In C mode, TAB implements a subtle and sophisticated indentation style that knows about many aspects of C syntax. In Text mode, TAB runs the command tab-to-tab-stop, which indents to the next tab stop column. You can set the tab stops with M-x edit-tab-stops. To move over the indentation on a line, do M-m (back-to-indentation). This command, given anywhere on a line, positions point at the first nonblank character on the line. To insert an indented line before the current line, do C-a C-o TAB. To make an indented line after the current line, use C-e LFD. If you just want to insert a tab character in the buffer, you can type C-q TAB. C-M-o (split-line) moves the text from point to the end of the line vertically down, so that the current line becomes two lines. C-M-o first moves point forward over any spaces and tabs. Then it inserts after point a newline and enough indentation to reach the same column point is on. Point remains before the inserted newline; in this regard, C-M-o resembles C-o. To join two lines cleanly, use the M-^ (delete-indentation) command. It deletes the indentation at the front of the current line, and the line boundary as well, replacing them with a single space. As a special case (useful for Lisp code) the single space is omitted if the characters to be joined are consecutive open parentheses or closing parentheses, or if the junction follows another newline. To delete just the indentation of a line, go to the beginning of the line and use M-\ (delete-horizontal-space), which deletes all spaces and tabs around the cursor. If you have a fill prefix, M-^ deletes the fill prefix if it appears after the newline that is deleted. See section The Fill Prefix. There are also commands for changing the indentation of several lines at once. C-M-\ (indent-region) applies to all the lines that begin in the region; it indents each line in the "usual" way, as if you had typed TAB at the beginning of the line. A numeric argument specifies the column to indent to, and each line is shifted left or right so that its first nonblank character appears in that column. C-x TAB (indent-rigidly) moves all of the lines in the region right by its argument (left, for negative arguments). The whole group of lines moves rigidly sideways, which is how the command gets its name. M-x indent-relative indents at point based on the previous line (actually, the last nonempty line). It inserts whitespace at point, moving point, until it is underneath an indentation point in the previous line. An indentation point is the end of a sequence of whitespace or the end of the line. If point is farther right than any indentation point in the previous line, the whitespace before point is deleted and the first indentation point then applicable is used. If no indentation point is applicable even then, indent-relative runs tab-to-tab-stop (see next section).
indent-relative is the definition of TAB in Indented Text mode. See section Commands for Human Languages. See section Indentation in Formatted Text, for another way of specifying the indentation for part of your text. For typing in tables, you can use Text mode's definition of TAB, tab-to-tab-stop. This command inserts indentation before point, enough to reach the next tab stop column. If you are not in Text mode, this command can be found on the key M-i. You can specify the tab stops used by M-i. They are stored in the variable tab-stop-list, as a list of column numbers in increasing order. The convenient way to set the tab stops is with M-x edit-tab-stops, which creates and selects a buffer containing a description of the tab stop settings. You can edit this buffer to specify different tab stops, and then type C-c C-c to make those new tab stops take effect. In the tab stop buffer, C-c C-c runs the function edit-tab-stops-note-changes rather than its usual definition. edit-tab-stops records which buffer was current when you invoked it, and stores the tab stops back in that buffer; normally all buffers share the same tab stops and changing them in one buffer affects all, but if you happen to make tab-stop-list local in one buffer, then edit-tab-stops in that buffer will edit the local settings. Here is what the text representing the tab stops looks like for ordinary tab stops every eight columns.

        :       :       :       :       :       :
0         1         2         3         4
0123456789012345678901234567890123456789012345678
To install changes, type C-c C-c

The first line contains a colon at each tab stop. The remaining lines are present just to help you see where the colons are and know what to do. Note that the tab stops that control tab-to-tab-stop have nothing to do with displaying tab characters in the buffer. See section Variables Controlling Display, for more information on that. Emacs normally uses both tabs and spaces to indent lines. If you prefer, all indentation can be made from spaces only. To request this, set the variable indent-tabs-mode to nil. This is a per-buffer variable; altering the variable affects only the current buffer, but there is a default value which you can change as well. See section Local Variables. There are also commands to convert tabs to spaces or vice versa, always preserving the columns of all nonblank text. M-x tabify scans the region for sequences of spaces, and converts sequences of at least three spaces to tabs if that can be done without changing indentation. M-x untabify changes all tabs in the region to appropriate numbers of spaces.
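To make the preceding descriptions concrete, here is a short Emacs Lisp sketch of how the variables above might be set from an init file. This example is illustrative only and not part of the original manual text; the particular column values are arbitrary.

    ;; Illustrative settings for the tab stop and indentation
    ;; variables described above.  The column values are arbitrary
    ;; examples, not recommendations.

    ;; Columns used by M-i (tab-to-tab-stop): a stop every 4 columns
    ;; instead of the default 8.
    (setq tab-stop-list '(4 8 12 16 20 24 28 32 36 40))

    ;; Make indentation use spaces only.  Because indent-tabs-mode is
    ;; a per-buffer variable, setq affects only the current buffer;
    ;; setq-default changes the default used by all other buffers.
    (setq indent-tabs-mode nil)
    (setq-default indent-tabs-mode nil)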
Software developers are the creative minds behind computer programs. Some develop the applications that allow people to do specific tasks on a computer or other device. Others develop the underlying systems that run the devices or control networks. Software developers typically do the following:
- Analyze users’ needs, then design, test, and develop software to meet those needs
- Recommend software upgrades for customers’ existing programs and systems
- Design each piece of the application or system and plan how the pieces will work together
- Create a variety of models and diagrams (such as flowcharts) that instruct programmers how to write the software code
- Ensure that the software continues to function normally through software maintenance and testing
- Document every aspect of the application or system as a reference for future maintenance and upgrades
- Collaborate with other computer specialists to create optimum software
Software developers are in charge of the entire development process for a software program. They begin by asking how the customer plans to use the software. They design the program and then give instructions to programmers, who write computer code and test it. If the program does not work as expected or people find it too difficult to use, software developers go back to the design process to fix the problems or improve the program. After the program is released to the customer, a developer may perform upgrades and maintenance. Developers usually work closely with computer programmers. However, in some companies, developers write code themselves instead of giving instructions to computer programmers. Developers who supervise a software project from the planning stages through implementation sometimes are called information technology (IT) project managers. These workers monitor the project’s progress to ensure that it meets deadlines, standards, and cost targets. IT project managers who plan and direct an organization’s IT department or IT policies are included in the profile on computer and information systems managers. The following are types of software developers:
- Applications software developers design computer applications, such as word processors and games, for consumers. They may create custom software for a specific customer or commercial software to be sold to the general public. Some applications software developers create complex databases for organizations. They also create programs that people use over the Internet and within a company’s intranet.
- Systems software developers create the systems that keep computers functioning properly. These could be operating systems that are part of computers the general public buys or systems built specifically for an organization. Often, systems software developers also build the system’s interface, which is what allows users to interact with the computer. Systems software developers create the operating systems that control most of the consumer electronics in use today, including those in phones or cars.
Software developers held about 1 million jobs in 2012. Many software developers work for computer systems design and related services firms or software publishers. Some systems developers work in computer and electronic product manufacturing industries. Applications developers work in office environments, such as for insurance carriers or corporate headquarters.
In general, software development is a collaborative process and developers work on teams with others, who contribute to designing, developing, and programming successful software. However, some developers telecommute (work away from the office). Software developers usually have a bachelor’s degree, typically in computer science, software engineering, or a related field. A degree in mathematics is also acceptable. Computer science degree programs are the most common, because they tend to cover a broad range of topics. Students should focus on classes related to building software in order to better prepare themselves for work in the occupation. For some positions, employers may prefer a master’s degree. Although writing code is not their first priority, developers must have a strong background in computer programming. They usually gain this experience in school. Throughout their career, developers must keep up to date on new tools and computer languages. Software developers also need skills related to the industry in which they work. Developers working in a bank, for example, should have knowledge of finance so that they can understand a bank’s computing needs. The median annual wage for applications software developers was $90,060 in May 2012. The median wage is the wage at which half the workers in an occupation earned more than that amount and half earned less. The lowest 10 percent earned less than $55,190, and the top 10 percent earned more than $138,880. The median annual wage for systems software developers was $99,000 in May 2012. The lowest 10 percent earned less than $62,800, and the top 10 percent earned more than $148,850. In May 2012, the median annual wages for applications software developers in the top four industries in which these developers worked were as follows:
- Computer and electronic product manufacturing: $97,960
- Software publishers: $96,920
- Finance and insurance: $91,970
- Computer systems design and related services: $88,500
In May 2012, the median annual wages for systems software developers in the top four industries in which these developers worked were as follows:
- Computer and electronic product manufacturing: $105,030
- Finance and insurance: $99,940
- Software publishers: $99,750
- Computer systems design and related services: $98,500
Employment of software developers is projected to grow 22 percent from 2012 to 2022, much faster than the average for all occupations. Employment of applications developers is projected to grow 23 percent, and employment of systems developers is projected to grow 20 percent. The main reason for the rapid growth is a large increase in the demand for computer software. Mobile technology requires new applications. The healthcare industry is greatly increasing its use of computer systems and applications. Also, concerns over threats to computer security could result in more investment in security software to protect computer networks and electronic infrastructure. Systems developers are likely to see new opportunities because of an increase in the number of products that use software. For example, computer systems are built into consumer electronics, such as cell phones, and into other products that are becoming computerized, such as appliances. In addition, an increase in software offered over the Internet should lower costs and allow more customization for businesses, also increasing demand for software developers. Some outsourcing to foreign countries with lower wages may occur.
However, because software developers should be close to their customers, the offshoring of this occupation is expected to be limited.
A recent report in National Geographic has confirmed that the Earth doesn’t have just one moon, but three of them. A team of Hungarian astronomers and physicists has submitted a report with conclusive proof of the presence of the other two moons. They further added that these moons are made entirely of dust particles held in place by gravitational forces in space. The first sighting of these moons was made by the Polish astronomer Kazimierz Kordylewski in 1961. The dust clouds were then named after him because of his immense contribution towards their discovery. The new findings suggest that the Kordylewski clouds are about 65,000 by 45,000 miles in actual size and are about 15 by 10 degrees wide. The dust clouds, as the name suggests, are made of microscopic dust particles and are spread over an area of space approximately nine times the width of the Earth. The Lagrange points, the gravitational sweet spots in the planetary orbit, were of great help to the physicists in identifying the presence of these natural bodies. Photographs were released by the team of researchers to confirm the presence of the Kordylewski moons at a distance of around 250,000 miles. The reports have further shown that, even though the dust moons are stable in orbit, their ingredients change with time and get swapped amongst each other.
Groundwater is any water found underground, including aquifers, subterranean rivers and streams, permafrost, and soil moisture. Groundwater flows to the surface naturally at springs and oases. It may also be tapped artificially by the digging of wells. The upper limit of abundant groundwater is called the water table. Groundwater is naturally replenished from above, as surface water from rain, rivers or lakes sinks into the ground. Some groundwater also comes from below, as water from the mantle enters the lithosphere.

Problems with groundwater
Groundwater is a highly useful and abundant resource, but it does not renew itself rapidly. If groundwater is extracted intensively, as for irrigation in arid regions, it may become depleted. The most evident problem that may result from this is a lowering of the water table beyond the reach of existing wells. Wells must consequently be deepened to reach the groundwater; in places like India, the water table has dropped hundreds of feet due to over-extraction. A lowered water table may, in turn, cause other problems. In coastal areas, a lowered water table may induce seawater to flow into the ground and mix with the groundwater. This is called a saltwater intrusion. Alternatively, salt from minerals may leach into the groundwater of its own accord. In India, a drop in the water table has been associated with arsenic contamination. It is thought that irrigation for rice production since the late 1970s resulted in the withdrawal of large quantities of groundwater, which caused the local water table to drop, allowing oxygen to enter the ground and touching off a reaction that leaches out arsenic from pyrite in the soil. The actual mechanism, however, is yet to be identified with certainty. Not all groundwater problems are caused by over-extraction. Pollutants dumped on the ground may leach into the soil, and work their way down into aquifers. Movement of water within the aquifer is then likely to spread the pollutant over a wide area, making the groundwater unusable.
The bobcat (Lynx rufus) is a species that is native to Ohio, and one of seven wild cat species found in North America. Domestic cats belong to the same family, Felidae, as the bobcat. Prior to settlement, bobcats were common throughout Ohio, but were extirpated from the state in 1850. They began to repopulate Ohio in the mid-1900s. Since then, this cat has been sighted more often every year and is returning "home" to Ohio. The bobcat has short, dense, soft fur. Their coat color varies to include light gray, yellowish brown, buff, brown, and reddish brown on the upper parts of the body. The fur on the middle of the back is frequently darker than that on the sides. Under parts and the inside of the legs are generally whitish colors with dark spots or bars. The backs of the bobcat's ears are black with white spots. The top of the tip of the ears is black; on the lynx, a cousin of the bobcat, the entire tip of the ear is black. The bobcat's tail is also black. Breeding may occur at any time throughout the year; mostly it occurs from December through May. The gestation period lasts about 63 days. When available, the female will use an area of rock outcroppings as a natal den. The young are born helpless and are dependent on the mother. At birth, the bobcat is completely furred with its eyes closed. Young bobcats' eyes will open in 3 to 11 days; 10 days is typical. Litters range from 1 to 6 kittens; 2+ is average. Bobcats typically have one litter per year, but will produce a second if the first is lost. The young are fully weaned at eight weeks and they will disperse and begin life on their own in the fall and late winter.

Habitat & Behavior
Generally, the bobcat is a solitary animal, territorial and elusive by nature. Adult females have an extremely low tolerance for other adult females in their home range. The males of this species are more tolerant of another male within the home range. Bobcats generally lie in wait for their prey, pouncing when an animal comes near. Prey pursuit rarely extends more than 60 feet. Bobcats are carnivores and will consume a wide variety of insects, reptiles, amphibians, fish, birds, and mammals. Rabbits and, in northern latitudes, white-tailed deer are important components of the bobcat's diet.

Research & Surveys
Bobcat Population Status Report [pdf]
This species occurs in the forests of eastern and southern Ohio. The Division of Wildlife received 521 confirmed sightings of bobcats in 2020. Sightings are most often confirmed through trail camera images or road-killed bobcats. While bobcats are now common in portions of southeast and southern Ohio, large amounts of unoccupied, suitable forested habitat remain, in particular in Northeast Ohio. Bobcat sightings are expected to continue to increase in future years as the population increases in distribution. Bobcat sightings can be reported through our online wildlife reporting system.

Frequently Asked Questions
Q: How likely are you to see a bobcat in the wild?
A: It is very unlikely to see a bobcat in the wild. They are very elusive and they are also nocturnal or crepuscular, meaning active at dusk and dawn. Those are the best times to try to catch a glimpse of one.
Q: Will the range of one male bobcat overlap with another?
A: Male bobcats establish large home ranges that overlap with multiple females and occasionally overlap with the ranges of other males.
Q: Do bobcats eat other cats?
A: They can, but it is unusual. They prefer easier prey.
Q: Do bobcats purr?
A: Yes, they do!
Q: How is the wild turkey population in Ohio impacted by bobcats?
A: At present we have no evidence which suggests bobcats have much of an impact (positive or negative) on turkey populations in Ohio. Bobcats rarely prey on turkeys or other birds. In fact, a study in Ohio found the remains of birds, in general, were present in the stomachs of only 5% of road-killed bobcats. More commonly, bobcats will consume other turkey predators like opossums; however, this is also unlikely to occur at high enough rates to have a large-scale impact on turkey populations.
Q: How defensive are bobcats toward each other?
A: Bobcats are solitary animals and will establish individual home ranges; however, those home ranges may overlap with ranges of other bobcats. Mostly, bobcats will avoid each other by marking their territories with scrapes, urine, and feces. However, aggressive physical encounters may occur, most commonly between 2 males around the breeding season.
Q: How big is a bobcat footprint?
A: Between 1.5 and 3 inches wide and long.
Q: What controls the bobcat population?
A: The density of a bobcat population will vary based on the number of resources (prey and habitat) available in a given area, and the amount of mortality that occurs. Potential causes of mortality for bobcats include disease, predation, starvation, injury, and human-induced mortality such as roadkill, poisoning, poaching, and legal harvest (where permitted).
Q: When do bobcats have their kittens?
A: Breeding may occur at any time throughout the year; however, it mainly occurs in February and March. The gestation period lasts about 63 days and the majority of kittens are born in April or May.
Q: What can we do to help bobcats? What are groups like the ODNR doing?
A: We're working to improve habitat across the state and to educate people about our native wildlife. You can help improve habitat by planting and preserving native plant species. You can also help our biologists track bobcats and other species by reporting your sightings online.
Q: Do bobcats like water?
A: Bobcats are good swimmers and will enter the water to cross rivers and streams.
Q: How many litters do bobcats have per year?
A: Bobcats usually have just one litter. Humans should not interact with bobcat kittens; leave wildlife in the wild.
Q: Do bobcats migrate?
A: Bobcats do not migrate, but as young adults, they may disperse a long distance from the area where they were born in order to establish their own home range.
Q: Can bobcats be hunted or trapped in Ohio?
A: Bobcats cannot be hunted or trapped in Ohio.
Q: Are bobcats likely to attack humans?
A: No, they are not.
Q: How do you determine the sex of a bobcat?
A: It is difficult to determine the sex of a bobcat from afar, but generally male bobcats are larger and heavier than females.
Rugby union is an intermittent high-intensity sport, in which activities that call for maximal strength and power are interspersed with periods of lower intensity aerobic activity and rest (26). Until recently, it has not been possible to collect objective data on player work rates in situ other than via heart rate (HR) monitoring. Traditionally, the majority of studies have investigated game demands in rugby union through time-motion analysis systems incorporating the use of game video recordings (24,9,10,14,30). Problems may occur with video recordings, however, as a result of errors in the categorization of locomotor activity. This is important because rugby union is a dynamic intermittent sport with many gait changes during game phases. Furthermore, notational analysis systems are largely dependent on trained users, and considerable subjectivity may exist when interpreting data. Therefore, accurate performance assessment may be technically difficult given the complex interactions of players and the varied nature of game play. With the development of Global Positioning System (GPS) technology for use in sport, investigators can now evaluate training loads and activity profiles of players on the field. This is achieved via portable tracking devices, which permit quantitative measurement of activity profiles through traditional GPS triangulation methods and accessory accelerometer software. Positional data are normally obtained by measuring the travel time of radio-frequency signals between the orbiting satellites and the GPS receiver worn by the player/athlete. The distance to each satellite is then calculated by multiplying the signal travel time by the speed of light. By calculating the distance to at least 4 satellites, the exact position can be trigonometrically determined (19); a simplified numerical sketch of this principle is given below. Changes in speed (velocity) are usually determined via the Doppler shift method, that is, measurement of the changes in signal frequency due to movement of the receiver. With this, an opportunity exists to gain valuable data on game demands in team sports like rugby union through objective distance and speed calculations not previously available using the same recording system. Through investigation of game demands, training methods can be targeted to mimic the positional requirements and physiological loads, thus optimizing player conditioning to the energy demands of the sport. "On the field" GPS technology has been used previously in the Australian Football League to profile positional demands and enhance knowledge of injury potential. To the author's knowledge, these data have not been published within the scientific literature. Limited data using this type of technology exist in rugby union despite its initial use within domestic southern hemisphere rugby. Furthermore, evaluation of game demands at the elite level is currently difficult because such communication devices are not allowed during competitive league and cup games. With this in mind, the aim of this pilot study was to gather information on rugby union forward and back play at the elite level and to demonstrate the potential use of GPS technology in the assessment of the game's physiological demands. It should be noted that this study focuses on a small subject number, the goals being to provide some insight into the contemporary demands of rugby union and to indicate directions for future research.
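To make the positioning principle described above concrete, the following is a deliberately simplified, flat two-dimensional sketch with hypothetical numbers. Real receivers solve the three-dimensional problem with at least 4 satellites plus a receiver clock-bias term, and commercial units such as the SPI Elite do this internally; none of the names or values below come from the study.

C = 299_792_458.0  # speed of light, m/s

def travel_time_to_distance(seconds):
    # Range to one satellite: signal travel time multiplied by the speed of light.
    return C * seconds

def trilaterate_2d(anchors, distances):
    # Solve a flat 2-D position from three known anchor points and their ranges.
    # Subtracting the first range equation from the other two linearizes the system.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(travel_time_to_distance(0.07))  # a 0.07 s travel time is roughly 21,000 km

# Hypothetical receiver at (3, 4), seen from three anchors:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [((true_pos[0] - x) ** 2 + (true_pos[1] - y) ** 2) ** 0.5 for x, y in anchors]
print(trilaterate_2d(anchors, ranges))  # -> (3.0, 4.0)

Speed, as noted above, comes from the Doppler shift and is computed in the receiver firmware rather than from successive positions; it is not reproduced here.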
An improved understanding of the game and training loads may help facilitate best practice advice for player management issues and appropriate training periodization.

Accuracy of Global Positioning System Technology

Limited data exist on the accuracy and reliability of current GPS software in the estimation of true distance and velocity. In a study using a generic GPS logger, it has been stated that errors in data logging may increase during activity over circular paths, most likely because of underestimation of speed (35). A recent study comparing current GPS software with timing gates in soccer demonstrated that both methods produced comparable speed and distance data in a linear running protocol (28). In the same study, it was concluded that GPS data recording at 1 Hz seemed appropriate for calculating distance at lower velocities but that greater error in estimation may occur at higher velocities. The accuracy of this system in the calculation of distances traversed has previously been shown to be within 4.8% (15) and <1% (22) of true distance measured using a trundle wheel. In the latter study, GPS technology was shown to be accurate in the assessment of speed (within 0.01 m·s−1 of the true value). Limited data exist on the accuracy of GPS in the calculation of distance and speed in field activities requiring repeated changes in running intensity.

Experimental Approach to the Problem

To examine game play at the elite level, data in this study were taken from players during an out-of-season competitive 80-minute game. This involved 2 teams normally participating in the Celtic (Magners) League and Guinness Premiership, respectively. Both leagues represent the highest standard of club play among the Celtic nations (Ireland, Scotland, and Wales) and England. Data were obtained from 3 home team players (age, 25.0 ± 3.6 years; weight, 104.6 ± 10.4 kg; height, 193.3 ± 9.7 cm; VO2max, 53.3 ± 2.1 ml·kg−1·min−1; mean ± SD). These included 1 back (out-half) and 2 forwards (back row and lock). However, as one of the players (lock forward) participated in only a quarter of the game, discussion will concentrate on the 2 players with full data sets. Forward and back players were selected in an attempt to investigate differences in game play between playing positions. Before participation in the study, players provided informed consent and were made aware of their ability to withdraw from testing at any time. Ethical approval for the study was obtained from the University of Glamorgan Ethics Committee. All players were fully habituated and familiarized with the data collection systems. This was done on several occasions during training sessions before the actual game itself. Players were asked to wear an individual GPS unit (mass: 80 g; dimensions: 91 × 45 × 21 mm) encased within a protective harness between the player's shoulder blades in the upper thoracic-spine region. Players also wore an HR transmitter belt (Polar Electro, Kempele, Finland) to incorporate HR data. This was recorded synchronously (1-second intervals) with the GPS tracking device (SPI Elite; GPSports Systems, Canberra, Australian Capital Territory, Australia). Devices were switched on 5 minutes before the start of the game and turned off immediately after the game had ended. Data stored included time, velocity (calculated via Doppler shift), distance, position, direction, HR, and the number and intensity of player impacts as measured in "g" force.
GPS data were recorded at 1 Hz and accelerometry (tri-axis) data at 100 Hz. After collection, data were downloaded to a personal computer, where further analysis was carried out using the system software provided by the manufacturer (Team AMS; GPSports, V1.2).

Heart Rate and Locomotor Activity

Recorded game HRs were categorized into 6 HR zones based on each player's known maximum HR (HRmax), determined using an incremental treadmill running test in the laboratory. HR zones were as follows: (a) 0 to 60% HRmax, (b) 60 to 70% HRmax, (c) 70 to 80% HRmax, (d) 80 to 90% HRmax, (e) 90 to 95% HRmax, and (f) 95 to 100% HRmax. Total values for HR exertion were calculated using a weighting system similar to that of Edwards (16). The frequency and duration of locomotor efforts were evaluated from the time spent in 6 player speed zones. Allocations of speed zones were those thought typical of the varying locomotor categories seen during intermittent team sport. These were as follows: (a) standing and walking (0-6 km·h−1), (b) jogging (6-12 km·h−1), (c) cruising (12-14 km·h−1), (d) striding (14-18 km·h−1), (e) high-intensity running (18-20 km·h−1), and (f) sprinting (>20 km·h−1). The above categories were later divided into 2 further locomotor categories to provide a crude estimate of player work to rest ratios: (a) low-intensity activity (0-8 km·h−1) and (b) moderate- and high-intensity activity (>8 km·h−1). This categorization was based on data obtained from a similar previous study in Australian rules football using GPS software (34).

Body Load and Game Impacts

Player impact data (intensity, number, and distribution) were gathered from accelerometer data provided in "g" force. The intensity of impacts was graded according to the following scaling system provided by the system manufacturers: 5-6g: light impact, hard acceleration/deceleration/change of direction; 6-6.5g: light to moderate impact (player collision, contact with the ground); 6.5-7g: moderate to heavy impact (tackle); 7-8g: heavy impact (tackle); 8-10g: very heavy impact (scrum engagement, tackle); and 10+g: severe impact/tackle/collision. Computation of player body load during exercise also involved use of the above acceleration zone forces. Body load was calculated automatically using the system software provided by the manufacturers.

Estimation of Energy Expenditure

Information regarding the estimation of energy expenditure (EE) was obtained from continuous measurement of HR during the game. In turn, corresponding values for VO2 were estimated from each player's individual HR-VO2 relationship (2), obtained during a standard incremental running protocol (VO2max test) conducted on the players in our laboratory. Rates of energy expenditure were subsequently calculated using methods similar to those previously shown in soccer (3,17) and rugby league (8). Non-playing periods (halftime, warm-up, and cooldown) were omitted from all analyzed GPS and energy expenditure data. Given the nature of the present investigation and the small subject number, the data presented below are of a descriptive nature only. Where appropriate, the duration of each activity is presented as the mean and SD. Players recorded mean and peak HRs of 172 and 200 b·min−1, respectively, during the game (Figure 1). Both players reached their preestablished maximum heart rate (HRmax) during the game. The back spent more time at 80 to 90% HRmax (42%) than the forward (27.7%), whereas the forward spent more time at 90 to 95% HRmax (15.4%) than the back (4.7%) (Table 1).
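As a minimal sketch of how the locomotor categorization above can be applied, the snippet below bins a 1 Hz speed trace into the six speed zones and forms the crude work to rest ratio, counting speeds of 8 km·h−1 or below as low-intensity ("rest") and anything faster as moderate/high-intensity ("work"). This is an assumed reconstruction for illustration, not the Team AMS software, and the short trace is hypothetical.

from collections import Counter

ZONES = [  # (label, lower bound, upper bound) in km/h, as listed in the methods
    ("standing/walking", 0, 6),
    ("jogging", 6, 12),
    ("cruising", 12, 14),
    ("striding", 14, 18),
    ("high-intensity running", 18, 20),
    ("sprinting", 20, float("inf")),
]

def zone_of(speed_kmh):
    # Return the locomotor zone label for a single 1 Hz speed sample.
    for label, lo, hi in ZONES:
        if lo <= speed_kmh < hi:
            return label
    raise ValueError("speed cannot be negative")

def summarize(speeds_kmh):
    # One sample per second, so each count is directly a number of seconds.
    seconds_in_zone = Counter(zone_of(v) for v in speeds_kmh)
    rest = sum(1 for v in speeds_kmh if v <= 8)  # low-intensity activity
    work = len(speeds_kmh) - rest                # moderate/high-intensity activity
    return seconds_in_zone, work, rest

trace = [0, 3, 7, 11, 13, 15, 19, 22, 9, 2]  # hypothetical 10-second trace
per_zone, work, rest = summarize(trace)
print(per_zone)
print(f"work to rest ratio = 1:{rest / work:.1f}")

Because sampling is at 1 Hz, tallies of this kind can be read directly as the zone durations and work to rest ratios reported in the results that follow.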
Mean HR was higher in the first half than in the second half (173 vs. 169 b·min−1; Table 2). Of the total game time of 83 minutes, 72% was spent standing and walking, 18.6% jogging, 3.3% cruising, 3.8% striding, 1% high-intensity running, and 1.2% sprinting (mean data for both players). These values represent a work to rest ratio of 1:5.7. Players covered on average 6,953 m during the game (Table 2). Of this distance, 37% (∼2,800 m) was covered standing and walking, 27% (∼1,900 m) jogging, 10% (∼700 m) cruising, 14% (∼990 m) striding, 5% (∼320 m) high-intensity running, and 6% (∼420 m) sprinting (Table 3). The majority of moderate to intense accelerations occurred over running intervals of 4 to 6 seconds, with little difference between player positions. Acceleration data (Table 4) refer to the number of times the players changed velocity in defined categories over 1-second time intervals. Changes in velocity of over 1.5, 2.0, 2.5, and 2.75 m·s−1 correspond to changes in running speed of 5.4, 7.2, 9, and 10 km·h−1 over 1-second intervals, respectively. During the game, the players made 742 changes in tempo, occurring approximately every 3 to 4 seconds. The back entered the high-speed zone (>20 km·h−1) on a greater number of occasions (34 vs. 19) than the forward (Table 5). In turn, the forward entered the lower speed zone (6-12 km·h−1) on a greater number of occasions than the back (315 vs. 229) but spent less time standing and walking than the back (66.5 vs. 77.8%). Players reached maximum speeds of 28.7 km·h−1 (back) and 26.3 km·h−1 (forward). Peak speeds for both players occurred during the second half (Table 4). Both players' work to rest ratios (average, 1:5.7) were also lower during the second half (Table 2), indicating less recovery time (i.e., time spent below 8 km·h−1) between play periods. Average player running speed over the game was 4.2 km·h−1, with values greater during the second half for both players (Table 4). Within-half comparisons revealed that values for maximum speed, average speed, total distance covered, and peak HR were higher at the start of each playing quarter (Q; 20 minutes), such that Q1 > Q2 and Q3 > Q4 (both players). Cross-quarter comparisons revealed that values for the above variables were highest during the third quarter of the game, that is, the first 20 minutes after halftime.

Body Load and Game Impacts

Both players received a large number of impacts during the game, with positional differences observed between the number of impacts received by the back and the forward (798 vs. 1,274). Grouping of game impacts within the latter 3 categories (heavy + very heavy + severe) revealed that the forward was involved in 60% more high-level impacts than the back (Table 6). Furthermore, 66% of the high-level impacts received by the forward occurred during the second half. This resulted in a greater overall body load and body load per minute for the forward player (Table 6). To the author's knowledge, this is the first study to evaluate player demands during a competitive game of rugby union using objective "on the field" software. During the game, players covered an average distance of 6,953 m (83.7 m·min−1) (Table 2). These values are less than distances reported in professional soccer players (118 ± 7.5 m·min−1) using similar GPS technology (1) but greater than previous estimations of running distance in rugby union (10,30).
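The impact counts above come from grading accelerometer peaks into the manufacturer's six bands. As a hedged sketch only (the vendor's exact peak-detection and body-load formulas are proprietary and not described in the study), the band classification and the "high-level" grouping used in Table 6 could be reproduced as follows; the peak values are hypothetical:

from collections import Counter

IMPACT_BANDS = [  # (lower g, upper g, label), per the scale listed in the methods
    (5.0, 6.0, "light"),
    (6.0, 6.5, "light to moderate"),
    (6.5, 7.0, "moderate to heavy"),
    (7.0, 8.0, "heavy"),
    (8.0, 10.0, "very heavy"),
    (10.0, float("inf"), "severe"),
]

def grade(peak_g):
    # Map one accelerometer peak (in g) to its impact band; below 5 g is ignored.
    for lo, hi, label in IMPACT_BANDS:
        if lo <= peak_g < hi:
            return label
    return None

peaks = [5.4, 6.2, 9.1, 7.5, 11.0, 4.2, 6.8]  # hypothetical peaks in g
counts = Counter(label for label in map(grade, peaks) if label)
high_level = sum(counts[k] for k in ("heavy", "very heavy", "severe"))
print(counts)
print("high-level impacts:", high_level)  # the grouping reported in Table 6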
Similar to the study by Roberts et al. (30), the present data suggest that backs travel greater total distances during a game than their forward counterparts (7.6% further in the current study). Interestingly, both players recorded greater running distances in the second half of the game (6.7% back, 10% forward), indicating that deterioration in running ability or, perhaps, depletion of energy reserves was not an issue in this player group. During game activity, players performed 87 moderate-intensity runs (>14 km·h−1) (18) over an average distance of 19.7 ± 14.6 m. Along with running a greater total distance, the back entered the high-speed zone (>20 km·h−1) on a greater number of occasions (34 vs. 19; Table 5) than the forward. The back also covered a greater total distance sprinting (>20 km·h−1; 524 vs. 313 m) when compared with the forward. Not surprisingly, values for total sprinting distance observed in the current study (elite senior players) are substantially greater than those reported previously in elite U19 rugby for backs (253 ± 45 m) and forwards (94 ± 27 m) (10). Overall, the data would suggest that backs participate in a greater amount of higher intensity locomotor work when compared with forwards, although the forward was found to cover a greater average distance per sprint burst (back, 15.3 m; forward, 17.3 m) in the present investigation. Several studies have demonstrated that estimated total work performed (quantified by HR and movement patterns) is lower for backs than forwards (24,10,11). This is thought to occur despite the fact that forwards spend more time in the lower speed zones. Data in the present study revealed that the forward entered the lower speed zone (6-12 km·h−1) on a greater number of occasions than the back (315 vs. 229; Table 5) but spent less time standing and walking than the back (66.5 vs. 77.8% of total time; Table 7). It should, however, be noted that the percentage of time or effort exerted by the forwards in static activity and tackling was not measured in the current study, and such activity would be anticipated to contribute significantly to game workload in this player group (30). Overall, the above findings demonstrate that the back participated in more anaerobic high-intensity activity interspersed with longer recovery periods in the lowest speed zones, whereas the forward spent more of the recovery time between high-intensity activities in the moderate speed zones. This may have implications for position-specific training requirements. The typical sprint distances of 15 to 20 m in the current study and the number of intense accelerations (Table 4) imply that the ability to accelerate quickly is highly important in professional rugby union. Of interest, the majority of intense accelerations did not occur from standing starts (0 km·h−1), implying that quick changes in player running gait are of the essence in game performance. During the game, approximately 10% of game time was spent performing intense locomotor activity. This corroborates previous studies using less objective methods of analysis (9,10). Because the current study did not include time spent performing intense game-specific efforts and utility movements, it is likely that this figure is greater in modern-day rugby union. Nevertheless, the longest continuous time recorded above speeds of 20 km·h−1 was just 7 seconds (46.6 m), with the majority of high-intensity work periods below 6 to 7 seconds in duration.
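Figures such as the longest continuous period above 20 km·h−1 fall out of a simple run-length scan over the 1 Hz speed trace. The sketch below shows one plausible way to compute it; this is an illustrative reconstruction rather than the study's actual code, and the trace is invented so as to contain a 7-second burst:

def longest_bout(speeds_kmh, threshold_kmh=20.0):
    # Return (duration in s, distance in m) of the longest unbroken run of
    # 1 Hz samples above the threshold speed.
    best_len, best_dist = 0, 0.0
    run_len, run_dist = 0, 0.0
    for v in speeds_kmh:
        if v > threshold_kmh:
            run_len += 1
            run_dist += v / 3.6  # km/h -> metres covered in one 1-second sample
        else:
            run_len, run_dist = 0, 0.0
        if run_len > best_len:
            best_len, best_dist = run_len, run_dist
    return best_len, best_dist

trace = [12, 18, 21, 24, 25, 26, 25, 24, 23, 15, 22, 21, 10]  # hypothetical
print(longest_bout(trace))  # -> (7, ~46.7 m), of the same order as the 7 s / 46.6 m reported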
These data support previous findings where typical sprint distances of just 10 to 20 m have been shown (10,11). Such findings point to the contribution of the anaerobic energy system, in particular the phosphocreatine system, during high-intensity activity, which is interspersed with long periods of lower intensity activity that is primarily aerobic in nature. Interestingly, there was a general trend for both players' speed and distance to decrease after intense 5-minute play periods. It is not known what effect, if any, interval or aerobic type training may have on the repeatability of intense efforts during rugby play as measured through GPS technology. This is important because improved aerobic fitness has been implicated in sprint recovery and fatigue resistance (6,25,32,33). Future investigation using a larger data set on typical work rates in elite rugby union may be of interest. With this, it may be possible to determine minimal requirements in aerobic fitness and the role of VO2 kinetics in recovery from short-term anaerobic efforts. Previous research has shown that despite a disparity in distances (assessed through 10-minute intervals) covered between game halves, a rugby player's ability to perform high-intensity activity with increasing game duration is not limited (30). In the present study, total running distance, peak running speed, number of peak accelerations, average speed, and peak HR were all greater during the second 40-minute play period. Cross-quarter (∼20 minutes) comparisons revealed the third playing quarter (after halftime) to be the most intense. This was reflected by lower work to rest ratios and greater high-intensity (>18 km·h−1) running distance per minute. Interestingly, these results are consistent with previous work, which has shown that most injuries occur in the third quarter of the game (5). These data suggest that player fatigue was not a factor between halves. Indeed, of the high-level impacts undertaken by the forward, 66% occurred during the second half of play. Furthermore, players reached maximum speeds of 28.7 km·h−1 (back) and 26.3 km·h−1 (forward), both of which occurred during the second half. Expressing these values as a percentage of individual peak running speed (measured using GPS 2 weeks before the study) revealed that both players were capable of reaching 92% of their peak running speed. This shows that players do approach maximal levels of locomotor activity during rugby play despite previous game activity. In this study, the average work to rest ratio during the game was 1:5.7, indicating that for every 1 minute of running, there was almost 6 minutes of lower intensity activity. This figure dropped for both players during the second half, indicating that play periods were more frequent with less recovery (Table 2). Although work to rest ratios provide important information on the demands of the sport, in the case of rugby union, player work to rest ratios calculated from locomotor activity may underestimate actual work time. Considerable time spent in specific game-related phases such as pushing/pulling in rucks/mauls/scrums may register as low-intensity activity using current GPS technology despite intense static player efforts. Although the above ratio provides information on the intermittent nature of elite rugby union, it may not provide a true reflection of player work rates, in particular for forwards.
Combining objective GPS data with qualitative analysis of time spent in non-running exertion and utility movements may help in establishing more defined work to rest ratios and in setting fitness goals. Although the game in the present investigation was outside normal competition, it was played between 2 of the top sides normally participating in the Celtic League and Guinness Premiership. The game occurred at the end of preseason training before commencement of the regular club season. Both teams contained a large number of first-team regulars, were evenly matched, and the game served as an important element in seasonal preparation. Therefore, the data, although limited by subject number, do provide some insight into game demands at the top level in European rugby. Our results suggest that players exercise at ∼80 to 85% VO2max (Table 8) during the course of the game. This is similar to values reported in rugby league (80% VO2max) (8) and higher than values observed in Gaelic footballers (∼72% VO2max) (29) during competitive matches. It is possible that elevations in HR may have overpredicted aerobic demand because it has been suggested that changes in HR may not accurately reflect changes in energy cost occurring over short-term high-intensity activities (20). Factors other than oxygen uptake, such as environmental temperature, emotions, continuity of exercise, and, perhaps more importantly, muscle actions and body position, can influence the HR response to exercise. With particular reference to rugby, players are required to exert forces dynamically and statically during various game activities. These activities often demand both upper- and lower-body musculature, for example, during scrummaging, rucking, and mauling. In such cases, elevations in HR may not accurately predict actual oxygen uptake. Indeed, it has been shown that when muscles act statically in straining-type exercise, HRs are consistently higher than during dynamic leg exercise at a given oxygen uptake (21,27). This should be taken into account when prescribing training drills based on HR. Nevertheless, measurement of game HR does provide a useful index of overall physiological strain. Mean game HR was 172 b·min−1 (∼88% HRmax; Figure 1), higher than values of 166 ± 10 b·min−1 recorded within semiprofessional rugby league (8). Extrapolation of laboratory-based HR-VO2 relationships to estimate EE during intermittent activity has previously been shown to reflect metabolic expenditure during soccer activity (17). Furthermore, the HR-VO2 regression has been shown to be a good predictor of aggregate responses to irregular exercise including vigorous anaerobic activity (7). Using this method, the data above (Table 8) show that estimated values for EE were 6.9 and 8.2 MJ for the back and forward, respectively. These values correspond to ∼13 metabolic equivalents and are similar to those reported in semiprofessional rugby league (7.9 MJ) but ∼25% greater than those reported in professional soccer players (31,3). It is perhaps not surprising that EE is so high in professional rugby union given the nature and intensity of the game, the involvement of total body musculature, and, most importantly, player size. This is significant because the energy cost of locomotion increases directly with increasing body mass (23). In this study, the players weighed 92 kg (back) and 107 kg (forward), respectively. Nevertheless, potential errors in the estimation of EE using this method may occur and have been the subject of recent attention (13).
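For illustration, the sketch below shows how an HR-based EE estimate of this kind can be assembled: fit a per-player linear HR-VO2 relationship from laboratory calibration points, apply it to each second of game HR, and convert the oxygen cost to energy. The calibration values, the roughly 5 kcal per litre of O2 conversion, and the constant mean-HR shortcut are illustrative assumptions; the study's actual player-specific coefficients are not published.

import numpy as np

# Hypothetical lab calibration for one player: HR (b/min) vs VO2 (L/min)
hr_lab = np.array([100, 120, 140, 160, 180, 200])
vo2_lab = np.array([1.2, 1.9, 2.6, 3.3, 4.0, 4.7])
slope, intercept = np.polyfit(hr_lab, vo2_lab, 1)  # least-squares HR-VO2 line

def energy_expenditure_mj(game_hr_samples):
    # game_hr_samples: one HR reading per second of playing time.
    vo2 = slope * np.asarray(game_hr_samples) + intercept  # L/min at each second
    litres_o2 = vo2.sum() / 60.0       # integrate L/min over 1-second samples
    kcal = litres_o2 * 5.0             # ~5 kcal liberated per litre of O2
    return kcal * 4.184 / 1000.0       # kilocalories -> megajoules

# 80 minutes at the reported mean HR of 172 b/min:
print(round(energy_expenditure_mj([172] * 80 * 60), 1))  # ~6.2 MJ under these assumptions

Because the fit is built from steady-state treadmill stages, applying it second by second to intermittent play is precisely the averaging simplification criticized below.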
This “averaging out” approach may be criticized on the basis that the regression line is based on steady-state responses, conditions not found in intermittent sport (12). Consequently, the current EE data should be regarded as a crude estimation only, and they do not take into account resting energy expenditure. Nonetheless, the data suggest that replenishment of energy after a game is of great importance. This has obvious implications for the maintenance of muscle mass during a season and for the replenishment of energy reserves between games. Previous research has shown that there is a 2-day delay in muscle glycogen replenishment after a game of soccer despite administration of a high-carbohydrate diet (4). Therefore, repeated exposure to heavy exercise should be monitored closely so as to avoid any adverse effects on player well-being. The data presented in this case study are of a descriptive nature only, are limited by subject number, and do not reflect variations in game activity/player demands that may occur within and between participation levels. Further data on players from different playing levels, positions, and teams will help in defining physiological demands and evolutionary trends. Nevertheless, the current report does provide insight into the intense intermittent nature of elite rugby union. These findings seem to confirm that the contemporary rugby union player runs longer and harder than previously thought; such data have not been described previously using this technology. These data may have important applications in terms of training replication of game demands, conditioning of player groups, and the evaluation of overall game stress. Use of GPS-accelerometry technology offers a valuable insight into physiological demands during match play not previously available through HR-based collection methods and video analysis. Further work using this technology, in particular detailed analysis of accelerometer and player impact data, may help fitness experts in evaluating player work rates outside that of traditional locomotor activity. Combination of GPS software with game recordings may provide more insight into the categorization of forces/accelerations received/exerted during the many contact elements within the game. Appropriate classification of these contact loads may help in devising individual recovery programs specific to the player in question. The author wishes to thank the players and staff of Llanelli Scarlets RFC, Wales, for participation in and facilitation of this study. No grant aid was received in conjunction with this work, and no conflicts of interest are declared.
1. Álvarez, JCB and Castagna, C. Activity patterns in professional futsal players using global position tracking system. J Sports Sci Med 10: 208, 2007.
2. Astrand, PO and Rodahl, K. Textbook of Work Physiology. New York, NY: McGraw-Hill, 1977.
3. Bangsbo, J. Energy demands in competitive soccer. J Sports Sci 12: S5-S12, 1994.
4. Bangsbo, J, Mohr, M, and Krustrup, P. Physical and metabolic demands of training and match-play in the elite football player. J Sports Sci 24: 665-674, 2006.
5. Bathgate, A, Best, JP, Craig, G, Jamieson, M, and Wiley, JP. A prospective study of injuries to elite Australian rugby union players. Br J Sports Med 36: 265-269, 2002.
6. Bogdanis, GC, Nevill, ME, Boobis, LH, and Lakomy, HK. Contribution of phosphocreatine and aerobic metabolism to energy supply during repeated sprint exercise. J Appl Physiol 80: 876-884, 1996.
7. Bot, SDM and Hollander, AP. The relationship between heart rate and oxygen uptake during non-steady state exercise. Ergonomics 43: 1578-1592, 2000.
8. Coutts, A, Reaburn, P, and Abt, G. Heart rate, blood lactate concentration and estimated energy expenditure in a semi-professional rugby league team during a match: A case study. J Sports Sci 21: 97-103, 2003.
9. Deutsch, MU, Kearney, GA, and Rehrer, NJ. Time-motion analysis of professional rugby union players during match-play. J Sports Sci 25: 461-472, 2007.
10. Deutsch, MU, Maw, GJ, Jenkins, D, and Reaburn, P. Heart rate, blood lactate and kinematic data of elite colts (under-19) rugby union players during competition. J Sports Sci 16: 561-570, 1998.
11. Docherty, D, Wenger, HA, and Neary, P. Time motion analysis related to the physiological demands of rugby. J Hum Mov Stud 14: 269-277, 1998.
12. Drust, B, Atkinson, G, and Reilly, T. Future perspectives in the evaluation of the physiological demands of soccer. Sports Med 37: 783-805, 2007.
13. Dugas, LR, van der Merwe, L, Odendaal, H, Noakes, TD, and Lambert, EV. A novel energy expenditure prediction equation for intermittent physical activity. Med Sci Sports Exerc 37: 2154-2161, 2005.
14. Eaton, C and George, K. Position specific rehabilitation for rugby union players. Part I: Empirical movement analysis data. Phys Ther Sport 7: 22-29, 2006.
15. Edgecomb, SJ and Norton, KI. Comparison of global positioning and computer-based tracking systems for measuring player movement distance during Australian football. J Sci Med Sport 9: 25-32, 2006.
16. Edwards, S. High performance training and racing. In: The Heart Rate Monitor Book. Edwards, S, ed. Sacramento, CA: Fleet Feet Press, 1993. pp. 113-123.
17. Esposito, F, Impellizzeri, FM, Margonato, V, Vanni, R, Pizzini, G, and Veicsteinas, A. Validity of heart rate as an indicator of aerobic demand during soccer activities in amateur soccer players. Eur J Appl Physiol 93: 167-172, 2004.
18. Hartwig, TB, Naughton, G, and Searl, J. Defining the volume and intensity of sport participation in adolescent rugby union players. Int J Sports Physiol Perform 3: 94-106, 2008.
19. Larsson, P. Global positioning system and sport-specific testing. Sports Med 33: 1093-1101, 2003.
20. Little, T and Williams, AG. Measures of exercise intensity during soccer training drills with professional soccer players. J Strength Cond Res 21: 367-371, 2007.
21. Maas, S, Kok, ML, Westra, HG, and Kemper, HC. The validity of the use of heart rate in estimating oxygen consumption in static and in combined static/dynamic exercise. Ergonomics 32: 141-148, 1989.
22. Macleod, H and Sunderland, C. Reliability and validity of a global positioning system for measuring player movement patterns during field hockey. Med Sci Sports Exerc 39: 209-210, 2007.
23. McArdle, WD, Katch, FI, and Katch, VL. Exercise Physiology: Energy, Nutrition, and Human Performance (6th ed.). Baltimore, MD: Lippincott Williams and Wilkins, 2006.
24. McLean, DA. Analysis of the physical demands of international rugby union. J Sports Sci 10: 285-296, 1992.
25. McMahon, S and Wenger, HA. The relationship between aerobic fitness and both power output and subsequent recovery during maximal intermittent exercise. J Sci Med Sport 1: 219-227, 1998.
26. Nicholas, CW. Anthropometric and physiological characteristics of rugby union football players. Sports Med 23: 375-396, 1997.
27. Patterson, R and Pearson, J. Work-rest periods: Their effects on normal physiologic response to isometric and dynamic work. Arch Phys Med Rehabil 66: 349-352, 1985.
28. Portas, M, Rush, C, Barnes, C, and Batterham, A. Method comparison of linear distance and velocity measurements with global positioning satellite (GPS) and the timing gate techniques. J Sports Sci Med S10: 7-8, 2007.
29. Reilly, T and Doran, D. Science and Gaelic football: A review. J Sports Sci 19: 181-193, 2001.
30. Roberts, SP, Trewartha, G, Higgitt, RJ, El-Abd, J, and Stokes, KA. The physical demands of elite English rugby union. J Sports Sci 26: 825-833, 2008.
31. Shephard, RJ. The energy needs of a soccer player. Clin J Sport Med 2: 62-70, 1992.
32. Tomlin, DL and Wenger, HA. The relationship between aerobic fitness and recovery from high intensity intermittent exercise. Sports Med 31: 1-11, 2001.
33. Tomlin, DL and Wenger, HA. The relationships between aerobic fitness, power maintenance and oxygen consumption during intense intermittent exercise. J Sci Med Sport 5: 194-203, 2002.
34. Wisbey, B and Montgomery, P. Quantifying AFL Player Game Demands Using GPS Tracking. FitSense Australia, 2005. Available at: http://www.fitsense.com.au/downloads/GPS%20Research%20Report%20-%202005.pdf. Accessed April 20, 2008.
35. Witte, TH and Wilson, AM. Accuracy of non-differential GPS for the determination of speed over ground. J Biomech 37: 1891-1898, 2004.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Coughing is an important way to keep your throat and airways clear. But too much coughing may mean you have a disease or disorder. Some coughs are dry. Others are productive. A productive cough is one that brings up mucus. Mucus is also called phlegm or sputum.

Coughs can be either acute or chronic:
- Acute coughs usually begin rapidly and are often due to a cold, flu, or sinus infection. They usually go away after 3 weeks.
- Subacute coughs last 3 to 8 weeks.
- Chronic coughs last longer than 8 weeks.

Common causes of coughing are:
Other causes include:

If you have asthma or another chronic lung disease, make sure you are taking the medicines prescribed by your health care provider.

Here are some tips to help ease your cough:
- If you have a dry, tickling cough, try cough drops or hard candy. Never give these to a child under age 3, because they can cause choking.
- Use a vaporizer or take a steamy shower to increase moisture in the air and help soothe a dry throat.
- Drink plenty of fluids. Liquids help thin the mucus in your throat, making it easier to cough it up.
- Do not smoke, and stay away from secondhand smoke.

Medicines you can buy on your own include:
- Guaifenesin helps break up mucus. Follow package instructions on how much to take. Do not take more than the recommended amount. Drink lots of fluids if you take this medicine.
- Decongestants help clear a runny nose and relieve postnasal drip. Check with your provider before taking decongestants if you have high blood pressure.
- Talk to your child's provider before you give children ages 6 years or younger an over-the-counter cough medicine, even if it is labeled for children. These medicines likely do not work for children, and they can have serious side effects.

If you have seasonal allergies, such as hay fever:
- Stay indoors during days or times of the day (usually the morning) when airborne allergens are high.
- Keep windows closed and use an air conditioner.
- Do not use fans that draw in air from outdoors.
- Shower and change your clothes after being outside.

If you have allergies year-round, cover your pillows and mattress with dust mite covers, use an air purifier, and avoid pets with fur and other triggers. Treat the underlying cause (as described above) as directed by your health care provider.

When to Contact a Medical Professional

Call 911 or the local emergency number if you have:
- Shortness of breath or difficulty breathing
- Hives or a swollen face or throat with difficulty swallowing

Call your provider right away if a person with a cough has any of the following:
- Heart disease, swelling in the legs, or a cough that gets worse when lying down (may be signs of heart failure)
- Contact with someone who has tuberculosis
- Unintentional weight loss or night sweats (could be tuberculosis)
- An infant younger than 3 months old who has a cough
- A cough that lasts longer than 10 to 14 days
- A cough that produces blood
- Fever (may be a sign of a bacterial infection that requires antibiotics)
- A high-pitched sound (called stridor) when breathing in
- Thick, foul-smelling, yellowish-green phlegm (could be a bacterial infection)
- A violent cough that begins rapidly

What to Expect at Your Office Visit

The provider will perform a physical exam. You will be asked about your cough. Questions may include:
- When the cough began
- What it sounds like
- Whether there is a pattern to it
- What makes it better or worse
- Whether you have other symptoms, such as a fever

The provider will examine your ears, nose, throat, and chest.
Tests that may be done include:
Treatment depends on the cause of the cough.
The suspected culprit is a mosquito-borne virus called Zika. Officials in Colombia, Ecuador, El Salvador and Jamaica have suggested that women delay becoming pregnant. And the Centers for Disease Control and Prevention has advised pregnant women to postpone travel to countries where Zika is active. Zika virus was discovered almost 70 years ago, but wasn’t associated with outbreaks until 2007. So how did this formerly obscure virus wind up causing so much trouble in Brazil and other nations in South America?

Where did Zika come from?

Zika virus was first detected in Zika Forest in Uganda in 1947 in a rhesus monkey, and again in 1948 in the mosquito Aedes africanus, which is the forest relative of Aedes aegypti. Aedes aegypti and Aedes albopictus can both spread Zika. Sexual transmission between people has also been reported. Zika has a lot in common with dengue and chikungunya, another emergent virus. All three originated from West and central Africa and Southeast Asia, but have recently expanded their range to include much of the tropics and subtropics globally. And they are all spread by the same species of mosquitoes. Until 2007, very few cases of Zika in humans were reported. Then an outbreak occurred on Yap Island of Micronesia, infecting approximately 75 percent of the population. Six years later, the virus appeared in French Polynesia, along with outbreaks of dengue and chikungunya viruses.

How did Zika get to the Americas?

Genetic analysis of the virus revealed that the strain in Brazil was most similar to one that had been circulating in the Pacific. Brazil had been on alert for an introduction of a new virus following the 2014 FIFA World Cup, because the event concentrated people from all over the world. However, no Pacific island nation with Zika transmission had competed at this event, making it less likely to be the source. There is another theory that Zika virus may have been introduced following an international canoe event held in Rio de Janeiro in August of 2014, which hosted competitors from various Pacific islands. Another possible route of introduction was overland from Chile, since that country had detected a case of Zika disease in a returning traveler from Easter Island.

Most people with Zika don’t know they have it

According to research after the Yap Island outbreak, the vast majority of people (80 percent) infected with Zika virus will never know it – they do not develop any symptoms at all. A minority who do become ill tend to have fever, rash, joint pains, red eyes, headache and muscle pain lasting up to a week. And no deaths had been reported. In early 2015, Brazilian public health officials sounded the alert that Zika virus had been detected in patients with fevers in northeast Brazil. Then there was a similar uptick in the number of cases of Guillain-Barré in Brazil and El Salvador. And in late 2015 in Brazil, cases of microcephaly started to emerge. How Zika might affect the brain is unclear, but a study from the 1970s revealed that the virus could replicate in neurons of young mice, causing neuronal destruction. Recent genetic analyses suggest that strains of Zika virus may be undergoing mutations, possibly accounting for changes in virulence and its ability to infect mosquitoes or hosts.

The Swiss cheese model for system failure

One way to understand how Zika spread is to use something called the Swiss cheese model. Imagine a stack of Swiss cheese slices.
The holes in each slice are a weakness, and throughout the stack, these holes aren’t the same size or the same shape. Problems arise when the holes align. With any disease outbreak, multiple factors are at play, and each may be necessary but not sufficient on its own to cause it. Applying this model to our mosquito-borne mystery makes it easier to see how many different factors, or layers, coincided to create the current Zika outbreak.

A hole through the layers

The first layer is a fertile environment for mosquitoes. That’s something my colleagues and I have studied in the Amazon rain forest. We found that deforestation followed by agriculture and regrowth of low-lying vegetation provided a much more suitable environment for the mosquito that carries malaria than pristine forest. Increasing urbanization and poverty create a fertile environment for the mosquitoes that spread dengue by creating ample breeding sites. In addition, climate change may raise the temperature and/or humidity in areas that previously have been below the threshold required for the mosquitoes to thrive.

The second layer is the introduction of the mosquito vector. Aedes aegypti and Aedes albopictus have expanded their geographic range in the past few decades. Urbanization, changing climate, air travel and transportation, and waxing and waning control efforts that are at the mercy of economic and political factors have led to these mosquitoes spreading to new areas and coming back in areas where they had previously been eradicated. For instance, in Latin America, continental mosquito eradication campaigns led by the Pan American Health Organization in the 1950s and 1960s to battle yellow fever dramatically shrank the range of Aedes aegypti. Following this success, however, interest in maintaining these mosquito control programs waned, and between 1980 and the 2000s the mosquito made a full comeback.

The third layer, susceptible hosts, is critical as well. For instance, chikungunya virus has a tendency to infect very large portions of a population when it first invades an area. But once it blows through a small island, the virus may vanish because there are very few susceptible hosts remaining. Since Zika is new to the Americas, there is a large population of susceptible hosts who haven’t previously been exposed. In a large country such as Brazil, the virus can continue circulating without running out of susceptible hosts for a long time.

The fourth layer is the introduction of the virus. It can be very difficult to pinpoint exactly when a virus is introduced in a particular setting. However, studies have associated increasing air travel with the spread of certain viruses such as dengue. When these multiple factors are in alignment, it creates the conditions needed for an outbreak to start.

Putting the layers together

My colleagues and I are studying the role of these “layers” as they relate to the outbreak of yet another mosquito-borne virus, Madariaga virus (formerly known as Central/South American eastern equine encephalitis virus), which has caused numerous cases of encephalitis in the Darien jungle region of Panama. There, we are examining the association between deforestation, mosquito vector factors, and the susceptibility of migrants compared to indigenous people in the affected area. In our highly interconnected world, which is being subjected to massive ecological change, we can expect ongoing outbreaks of viruses originating in far-flung regions with names we can barely pronounce – yet.
Harry S. Truman

Harry S. Truman was born on May 8, 1884, in Lamar, Missouri. He spent most of his youth in Independence, Missouri, where he attended the local public schools. Upon graduating from high school, Truman went to work to help support his parents and his siblings. He held numerous positions, including railroad timekeeper, mail clerk, bookkeeper, and bank clerk. From 1906 to 1917, he worked as a farmer near Grandview, Missouri. With the United States' entry into World War I, Truman enlisted in the United States Army, rising to the rank of major and commander of an artillery battery. Upon the war's conclusion, Truman returned to Missouri, where he married and opened a men's clothing store in Kansas City. In 1922, Truman closed the store due to a lack of business. It was not until 1923 that Truman decided to make politics his career, although he had previously served as the road overseer of Jackson County, Missouri, and the postmaster of Grandview, Missouri. In 1923, he became a county judge in Jackson County, Missouri, an office he held until 1924. In 1927, he won election again as a county judge in Jackson County, remaining in this position for the next seven years. Interestingly, Truman did not preside over any judicial cases during his time as a judge. County judges in Missouri during this time primarily oversaw the maintenance and construction of roads and other public structures. In 1934, Truman, a member of the Democratic Party, won election as one of Missouri's two United States senators. As a senator, Truman worked closely with President Franklin Delano Roosevelt. With the Great Depression gripping the United States, Roosevelt sought to implement his New Deal program, and Truman helped to push the president's agenda through Congress. Because of Truman's loyalty, Roosevelt selected Truman as his vice-presidential running mate in the election of 1944. Roosevelt and Truman won the election, making Truman Vice President of the United States in 1945. Truman remained vice president for just eighty-two days. On April 12, 1945, President Roosevelt died, making Truman president. As president, Truman saw the surrender of Germany and authorized the use of two atomic bombs on Japan, resulting in the end of World War II. He also faced increasing opposition from the Republican Party, which generally hoped to restrict workers' rights and hinder labor unions. Truman also saw the outbreak of the Cold War, and he stringently opposed the expansion of communism through both economic and military means. Despite these difficulties, Truman won election in 1948, although at least one newspaper declared that Truman's opponent had actually won. Illustrating how divided Ohioans were politically, Truman carried Ohio by just 7,107 votes over his Republican opponent in the election. Tensions between the United States and the Soviet Union dominated Truman's second term in office. At Truman's urging, the United States helped establish and joined the North Atlantic Treaty Organization. Truman also committed United States troops to the Korean War, hoping to stop the spread of communism into South Korea. Within the United States, the Second Red Scare erupted, with political leaders, like Senator Joseph McCarthy, and common people in the U.S. actively seeking out communist supporters. In Ohio, state leaders also participated in these efforts to find communists, forming agencies like the Ohio Un-American Activities Committee.
In 1952, Truman chose not to run for reelection, although he did claim on several occasions that he would run for president again when he was ninety years old. Truman returned to Independence, Missouri, where he retired from politics. He died on December 26, 1972, two years shy of his ninetieth birthday.
This Teacher’s Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the experiences, achievements, and perspectives of Asian Americans and Pacific Islanders across U.S. history.

Archival visits, whether in person or online, are great additions to any curriculum in the humanities. Primary sources can be the cornerstone of lessons or activities involving any aspect of history, ancient or modern. This Teacher's Guide is designed to help educators plan, execute, and follow up on an encounter with sources housed in a variety of institutions, from libraries and museums to historical societies and state archives, to make learning come to life and teach students the value of preservation and conservation in the humanities.

The National Endowment for the Humanities has compiled a collection of digital resources for K-12 and higher education instructors who teach in an online setting. The resources included in this Teacher's Guide range from videos and podcasts to digitized primary sources and interactive activities and games that have received funding from the NEH, as well as resources for online instruction.

This collection of free, authoritative source information about the history, politics, geography, and culture of many states and territories has been funded by the National Endowment for the Humanities. Our Teacher's Guide provides compelling questions, links to humanities organizations and local projects, and research activity ideas for integrating local history into humanities courses.

For more than 400 years, Shakespeare’s 37 surviving plays, 154 sonnets, and other poems have been read, performed, taught, reinterpreted, and enjoyed the world over. This Teacher's Guide includes ideas for bringing the Bard and pop culture together, along with how performers around the world have infused their respective local histories and cultures into these works.

What are we teaching and learning when we analyze films? Who’s missing from the story? This resource is offered for teachers across the humanities who use film and incorporate opportunities for students to develop media analysis skills.
If you’re suffering from hypothyroidism fatigue, thyroid hormone therapy can provide a natural hormone-balancing solution.

What is the Thyroid Gland?

The thyroid gland is a butterfly-shaped organ located at the front of the neck. It consists of two lobes located on both sides of the windpipe (trachea). A normal thyroid gland is not usually visible to the eye nor able to be felt by touch. The thyroid gland absorbs iodine from food to produce hormones that regulate metabolism and digestive functions, brain development, and bone maintenance. The hypothalamus and pituitary gland in the brain send chemical messages to the thyroid, such as thyroid-stimulating hormone (TSH). TSH instructs the thyroid on how much hormone to produce. Due to its role in regulating metabolism, the thyroid breaks down fat and promotes weight loss. Thyroid hormones control how fast carbohydrates and fat are burned, which helps regulate body temperature. Thyroid hormones also lower cholesterol and protect against cardiovascular disease. They influence cerebral metabolism, prevent cognitive impairment, and nourish thin, sparse hair, dry skin, and thin nails.

Types of Thyroid Gland Hormones

The thyroid gland produces 20% T3 and 80% T4.
- Thyroxine (T4): Inactive prohormone that the liver and kidneys convert to T3.
- Triiodothyronine (T3): Active hormone that regulates metabolic rate, heart and digestive functions, muscle control, brain development and function, and bone maintenance.
- Calcitonin: Active hormone that regulates calcium and phosphate in the blood. Calcitonin helps maintain bone health.

Causes of Hypothyroidism

If the thyroid gland doesn’t produce enough T3, then hypothyroidism can develop. Hypothyroidism is a disease caused by autoimmune conditions, poor iodine intake, or drug side effects. T3 and T4 are essential for physical and mental development. Untreated hypothyroidism during childhood can cause mental impairment and reduced growth. Hypothyroidism in adults causes a weak metabolism. Symptoms can include fatigue, intolerance to cold temperatures, low heart rate, weight gain, reduced appetite, poor memory, depression, stiff muscles, and reduced fertility. According to Your Hormones, “Hypothyroidism is common. It affects women more frequently than men and usually around middle-age. Hypothyroidism affects approximately 1 in 1,000 men and 18 in 1,000 women.”

How Does Thyroid Hormone Therapy Work?

Thyroid hormone therapy uses natural hormones to raise abnormally low levels of T3 and T4. The hormones can be delivered in pill form, and the most commonly prescribed is pure thyroxine (T4). According to Endocrine Web, “The benefit of taking only T4 therapy is that you’re allowing your body to perform some of the actions it is meant to do, which is taking T4 and changing it into T3. The half-life of T4 is also longer compared to T3 (7 days versus 24 hours), that means that it will stay for a longer time in your body after ingestion.” Providers often use blood test results to determine how much T4 will balance out an individual’s hormone levels. The blood results also reveal thyroid-stimulating hormone (TSH) levels released by the pituitary gland. Increased levels of TSH may indicate an underactive thyroid or that therapy levels need to be increased. Hypothyroidism can be a progressive disease, and therefore dosage may need to increase over time. Hopkins Medicine states: “To make sure that thyroid hormone replacement works properly, consider the following recommendations:
- Maintain regular visits to your healthcare provider.
- Take your thyroid medicine at least 1 hour before breakfast and any calcium or iron medicines you may take. Or take at bedtime, or at least 3 hours after eating or taking any calcium or iron medicines.
- Tell your healthcare provider of your thyroid hormone treatment before beginning treatment for any other disease. Some treatments for other conditions or diseases can affect the dosage of thyroid hormone therapy.
- Let your healthcare provider know if you become pregnant.
- Tell your healthcare provider of any new symptoms that may arise.
- Tell all healthcare providers of your thyroid condition and medicine dosage.”

5 Health Benefits of Thyroid Hormone Therapy

According to the American Thyroid Association, “The goal of thyroid hormone treatment is to closely replicate normal thyroid functioning. Pure, synthetic thyroxine (T4) works in the same way as a patient’s own thyroid hormone would. Thyroid hormone is necessary for the health of all the cells in the body. Therefore, taking thyroid hormone is different from taking other medications, because its job is to replace a hormone that is missing.” By balancing out T4 levels, thyroid hormone therapy can restore metabolism and digestive functions, leading to more energy, focus, and productivity.
- Increase energy, body temperature, and warmth
- Balance temperature, metabolism, and cerebral function
- Increase fat breakdown, resulting in weight loss and lower cholesterol
- Reduce the risk of cardiovascular disease and cognitive impairment
- Increase hair, skin, and nail health

Contact Natural Bio Health Today

Since 1999, Natural Bio Health has helped thousands of people regain their quality of life, prevent disease, and maintain a strong immune system for optimal health. Through a careful review of health history, genetics, and lifestyle, our providers are able to create individualized wellness plans to address the root cause of your underlying health concerns. Whether your personal struggles are with hormone imbalance, weight, sleep, energy, libido, migraines, or depression, Natural Bio Health is committed to helping you restore your quality of life and achieve optimal mental and physical health.
Solar steam appears to have a significant edge over photovoltaic solar cells: according to the researchers, 82 percent of sunlight went directly to generating steam, yielding an overall 24 percent energy efficiency for the steam generation process. Photovoltaic solar panels, by comparison, typically have an overall energy efficiency around 15 percent. While solar energy is generally thought of in terms of electrical power, the researchers say that is only a small part of the new technique’s potential. “This is about a lot more than electricity,” said Naomi Halas, the lead scientist on the project. “With this technology, we are beginning to think about solar thermal power in a completely different way.” About 90 percent of the planet’s electricity is produced from steam (created by fossil fuels), and steam is also used to sterilize, prepare food, and purify water. Most industrial steam is produced in large boilers, but Halas said solar steam’s efficiency could allow steam generation to become economical on a much smaller scale. Halas explained that the efficiency of solar steam is due to the light-capturing nanoparticles that convert sunlight into heat. When submerged in water and exposed to sunlight, the particles heat up so quickly they instantly vaporize water and create steam. “We’re going from heating water on the macro scale to heating it at the nanoscale,” she explained. “Our particles are very small – even smaller than a wavelength of light – which means they have an extremely small surface area to dissipate heat. This intense heating allows us to generate steam locally, right at the surface of the particle, and the idea of generating steam locally is really counterintuitive.” To show just how counterintuitive, Rice researcher Oara Neumann videotaped a solar steam demonstration in which a test tube of water containing light-activated nanoparticles was submerged into a bath of ice water. Using a lens to concentrate sunlight onto the near-freezing mixture in the tube, Neumann showed she could create steam from nearly frozen water. Inexpensive, compact solar steam devices could soon be rolled out to developing countries, according to Neumann. Rice engineering undergraduates have already created a solar steam-powered autoclave that’s capable of sterilizing medical and dental instruments at clinics that lack electricity. Halas also won a Grand Challenges grant from the Bill and Melinda Gates Foundation to create an ultra-small-scale system for treating human waste in areas without sewer systems or electricity.
Dehydration happens when fluid output exceeds fluid intake. Body fluids are lost through tears, sweat, vomiting, diarrhea, and urine. If lost fluids are not replaced, dehydration occurs. It most often affects children and older adults. An average adult loses approximately 2.5 liters of fluid per day, and a drop of just 2% of total body fluid can lead to headache, short-term memory loss, loss of concentration, lethargy, dry skin, cognitive impairment and digestive problems (a rough calculation of that 2% figure follows below). Dehydration can be mild, moderate, or severe, and it can lead to serious complications. To help confirm dehydration and evaluate its severity, blood tests and urinalysis are needed. Blood tests evaluate numerous factors, including how well the kidneys are functioning. Urinalysis is done on dehydrated patients to evaluate their urine, which can indicate the degree of dehydration; it can also check for signs of a bladder infection. Aside from the laboratory tests that confirm dehydration, there are signs that a person manifests when dehydrated. Here are the common signs and symptoms of dehydration.
- Dry mouth
- Decreased urination/dark-colored urine
- Sunken eyes
- Baggy eyes, wrinkled skin
- Low blood pressure
Heat stroke: heat illness ranges from mild heat cramps to heat exhaustion, which can lead to a potentially life-threatening heatstroke. Urinary and kidney problems: prolonged or relapsing dehydration can cause kidney stones, urinary tract infections, or, at worst, kidney failure. Seizures: dehydration also unbalances the electrolytes potassium and sodium. Because these electrolytes aid in the transmission of electrical signals from one cell to another, an imbalance can lead to involuntary muscle contractions and loss of consciousness. Hypovolemic shock, or low blood volume shock: this is the most serious, life-threatening complication of dehydration. It occurs when blood volume decreases, causing the amount of oxygen reaching the body's tissues to drop as well. Dehydration can also lead to an increased risk of obesity, which is associated with type 2 diabetes, high blood pressure, and cancer, among many other conditions. This can often be prevented by drinking two 8-ounce glasses of water before breakfast, lunch, and dinner. Thus, prevent dehydration to avoid experiencing its detrimental effects on the body. Most of the time, dehydration can be prevented by drinking eight glasses of water a day; drink more than the suggested amount when you are engaged in high-energy activity or when you are sick.
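For a back-of-the-envelope sense of what that 2% deficit means in liters, here is a minimal Python sketch. It is not from the article; the 42-liter total-body-water figure is a commonly quoted approximation for an average adult and is assumed here purely for illustration.

```python
# Rough arithmetic only: total body water for an average adult is often
# approximated at ~42 liters (assumed value; varies with body size).
total_body_water_l = 42.0

deficit_l = 0.02 * total_body_water_l  # the "2% drop" mentioned above
print(f"A 2% fluid deficit is roughly {deficit_l:.1f} liters")  # ~0.8 L
```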
In a lab in Oxford University’s experimental psychology department, researcher Roi Cohen Kadosh is testing an intriguing treatment: He is sending low-dose electric current through the brains of adults and children as young as 8 to make them better at math. A relatively new brain-stimulation technique called transcranial electrical stimulation may help people learn and improve their understanding of math concepts. The electrodes are placed in a tightly fitted cap and worn around the head. The device, run off a 9-volt battery commonly used in smoke detectors, induces only a gentle current and can be targeted to specific areas of the brain or applied generally. The mild current reduces the risk of side effects, which has opened up possibilities about using it, even in individuals without a disorder, as a general cognitive enhancer. Scientists also are investigating its use to treat mood disorders and other conditions. Dr. Cohen Kadosh’s pioneering work on learning enhancement and brain stimulation is one example of the long journey faced by scientists studying brain-stimulation and cognitive-stimulation techniques. Like other researchers in the community, he has dealt with public concerns about safety and side effects, plus skepticism from other scientists about whether these findings would hold in the wider population. There are also ethical questions about the technique. If it truly works to enhance cognitive performance, should it be accessible to anyone who can afford to buy the device—which already is available for sale in the U.S.? Should parents be able to perform such stimulation on their kids without monitoring? “It’s early days but that hasn’t stopped some companies from selling the device and marketing it as a learning tool,” Dr. Cohen Kadosh says. “Be very careful.” The idea of using electric current to treat diseases of the brain has a long and fraught history, perhaps most notably with what was called electroshock therapy, developed in 1938 to treat severe mental illness and often portrayed as a medieval treatment that rendered people zombielike in movies such as “One Flew Over the Cuckoo’s Nest.” Electroconvulsive therapy has improved dramatically over the years and is considered appropriate for use against types of major depression that don’t respond to other treatments, as well as other related, severe mood states. A number of new brain-stimulation techniques have been developed, including deep brain stimulation, which acts like a pacemaker for the brain. With DBS, electrodes are implanted into the brain and, through a battery pack in the chest, stimulate neurons continuously. DBS devices have been approved by U.S. regulators to treat tremors in Parkinson’s disease and continue to be studied as possible treatments for chronic pain and obsessive-compulsive disorder. Transcranial electrical stimulation, or tES, is one of the newest brain-stimulation techniques. Unlike DBS, it is noninvasive. If the technique continues to show promise, “this type of method may have a chance to be the new drug of the 21st century,” says Dr. Cohen Kadosh. The 37-year-old father of two completed graduate school at Ben-Gurion University in Israel before coming to London to do postdoctoral work with Vincent Walsh at University College London. Now, sitting in a small, tidy office with a model brain on a shelf, the senior research fellow at Oxford speaks with cautious enthusiasm about brain stimulation and its potential to help children with math difficulties.
Up to 6% of the population is estimated to have a math-learning disability called developmental dyscalculia, similar to dyslexia but with numerals instead of letters. Many more people say they find math difficult. People with developmental dyscalculia also may have trouble with daily tasks, such as remembering phone numbers and understanding bills. Whether transcranial electrical stimulation proves to be a useful cognitive enhancer remains to be seen. Dr. Cohen Kadosh first thought about the possibility as a university student in Israel, where he conducted an experiment using transcranial magnetic stimulation, a tool that employs magnetic coils to induce a more powerful electrical current. He found that he could temporarily turn off regions of the brain known to be important for cognitive skills. When the parietal lobe of the brain was stimulated using that technique, he found that the basic arithmetic skills of doctoral students who were normally very good with numbers were reduced to a level similar to those with developmental dyscalculia. That led to his next inquiry: If current could turn off regions of the brain, making people temporarily math-challenged, could a different type of stimulation improve math performance? Cognitive training helps to some extent in some individuals with math difficulties. Dr. Cohen Kadosh wondered if such learning could be improved if the brain were stimulated at the same time. But transcranial magnetic stimulation wasn’t the right tool because the current induced was too strong. Dr. Cohen Kadosh puzzled over what type of stimulation would be appropriate until a colleague who had worked with researchers in Germany returned and told him about tES, at the time a new technique. Dr. Cohen Kadosh decided tES was the way to go. His group has since conducted a series of studies suggesting that tES appears helpful in improving learning speed on various math tasks in adults who don’t have trouble with math. Now they’ve found preliminary evidence for those who struggle with math, too. Participants typically come for 30-minute stimulation-and-training sessions daily for a week. His team is now starting to study children between 8 and 10 who receive twice-weekly training and stimulation for a month. Studies of tES, including the ones conducted by Dr. Cohen Kadosh, tend to have small sample sizes of up to several dozen participants; replication of the findings by other researchers is important. In a small, toasty room, participants, often Oxford students, sit in front of a computer screen and complete hundreds of trials in which they learn to associate numerical values with abstract, nonnumerical symbols, figuring out which symbols are “greater” than others, in the way that people learn that three is greater than two. When neurons fire, they transfer information, which could facilitate learning. The tES technique appears to work by lowering the threshold neurons need to reach before they fire, studies have shown. In addition, the stimulation appears to cause changes in neurochemicals involved in learning and memory. However, the results so far in the field appear to differ significantly by individual. Stimulating the wrong brain region, or at too high or long a current, has been known to have an inhibiting effect on learning. The young and elderly, for instance, respond in exactly opposite ways to the same current in the same location, Dr. Cohen Kadosh says.
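To make the threshold idea concrete, here is a deliberately crude toy model, not the researchers' model: a leaky integrate-and-fire loop in which lowering the firing threshold lets the same input drive produce more spikes. All parameters below are invented for illustration.

```python
# Toy leaky integrate-and-fire loop: the same constant input drive
# produces more spikes when the firing threshold is lowered.
# All parameters are invented for illustration, not measured values.
def count_spikes(drive, threshold, leak=0.9, steps=200):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = leak * v + drive       # leaky integration of the input
        if v >= threshold:         # crossing threshold fires a spike
            spikes += 1
            v = 0.0                # reset membrane after firing
    return spikes

print(count_spikes(0.5, threshold=4.0))  # baseline firing rate
print(count_spikes(0.5, threshold=3.0))  # lowered threshold: more spikes
```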
He and a colleague published a paper in January in the journal Frontiers in Human Neuroscience, in which they found that one individual with developmental dyscalculia improved her performance significantly while the other study subject didn’t. What is clear is that anyone trying the treatment would need to train as well as to stimulate the brain. Otherwise “it’s like taking steroids but sitting on a couch,” says Dr. Cohen Kadosh. Dr. Cohen Kadosh and Beatrix Krause, a graduate student in the lab, have been examining individual differences in response. Whether a room is dark or well-lighted, if a person smokes and even where women are in their menstrual cycle can affect the brain’s response to electrical stimulation, studies have found. Results from his lab and others have shown that even if stimulation is stopped, those who benefited are going to maintain a higher performance level than those who weren’t stimulated, up to a year afterward. If there isn’t any follow-up training, everyone’s performance declines over time, but the stimulated group still performs better than the non-stimulated group. It remains to be seen whether reintroducing stimulation would then improve learning again, Dr. Cohen Kadosh says.
SCSD Vision of Mathematics Education A sound understanding of mathematics is key to a student’s success in our rapidly changing 21st-century society. Our mission is to develop successful mathematicians, where students become lifelong learners and critical thinkers. This requires high expectations for all students, consistent implementation of a curriculum that is aligned to the Common Core State Standards, and teachers using the best research-based instructional practices in the classroom. Our students need to be college- and career-ready. SCSD is committed to providing the best mathematics education for our students. We believe that an understanding of math concepts and their application to real-world situations is essential. Our goal is to provide students with opportunities to learn new things. Our classrooms promote a “culture” where asking good questions is encouraged and collaborating with other students is part of the process. We look forward to working with parents, teachers, and administrators as we foster a love of learning math for our students. More information about what your student will be learning in each grade is available here. Curriculum Central Mathematics Information The Common Core Learning Standards for Mathematics emphasize focus and coherence. The standards focus on core conceptual understandings and procedures starting in the early grades, thus enabling teachers to take the time needed to teach core concepts and procedures well—and to give students the opportunity to master them. The standards progress from grade to grade, coordinate with each other within a grade and are clustered together into coherent bodies of knowledge.
What Is Croup & Who Is Most at Risk? Croup is most commonly the result of an infectious respiratory illness. It causes a change in breathing that produces a brassy, barking cough. Children are most at risk of croup from around 4 months of age (when the natural immunity received from their mother starts to decline) until around 3 years old (when their airway is considerably larger). Can age make a difference? The size of a child’s airway is roughly proportional to their height. From birth to adulthood, a child’s height will typically increase by a factor of 3 to 4, and so will the size of their airway. Even a small reduction in the radius of an infant’s airway sharply increases the resistance to their breathing; the two are very sensitively linked (a rough worked example follows at the end of this section). Croup can cause the airway to become inflamed (swollen), which increases the resistance of the infant’s airway and impairs their ability to breathe with ease. The same swelling may cause only minor discomfort for an older child. Infectious Croup vs Spasmodic Croup – what’s the difference? Croup can be broadly divided into two categories: Infectious Croup and Spasmodic Croup. (1) Infectious Croup Most cases of croup are caused by a virus. The virus usually enters through a child’s upper airways. The first symptoms will be those of a cold: a congested, runny nose, a low-grade fever or a mild sore throat. As the virus travels further down the throat, the linings of the voice box and windpipe become red, swollen, narrow and irritated - triggering hoarseness, a barking cough and loud, raspy breathing (stridor). The symptoms of infectious croup usually resolve over 3 to 5 days. (2) Spasmodic Croup Spasmodic Croup can be triggered by an infection, but it is not caused by the infection itself. It tends to run in families and may be caused by an allergic reaction. Spasmodic Croup tends to come on suddenly, without fever. Episodes of cough and loud, raspy breathing usually start without warning, often in the middle of the night. Symptoms typically improve within a few hours, though they often reappear several nights in a row; the period between such episodes can vary. Signs and Symptoms: Most children with croup are mildly ill and do not develop significant breathing problems. Symptoms of more severe illness include: Breathing faster than normal Having difficulty breathing and feeding Recessions - grooves between the ribs or between the ribs and the tummy Most children with croup can be safely managed on an outpatient basis, but some children will require hospital admission. When should I call my child’s doctor? Contact your child’s doctor if the croup prevents your child from speaking. If your child is dribbling or drooling or has a high, persistent fever, this is a sign of a possibly more serious infection. As croup looks so dramatic, especially when it happens for the first time, and usually in the middle of the night, the best immediate course of action is to call an ambulance. There are several relatively simple treatments that can be used to help croup, and parents can often give these treatments at home with confidence. However, the safety of the child is of the greatest importance, so always call an ambulance if you are worried. For more information on the diagnosis and treatment of croup, please see our previous article.
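The sensitivity of small airways described above follows from the physics of flow through a narrow tube: by the commonly cited Poiseuille relation, resistance scales with the inverse fourth power of the radius. Here is a minimal sketch; the radii are rough illustrative values I have assumed, not clinical measurements.

```python
# Poiseuille relation: resistance ~ 1/r^4, so the same 1 mm of swelling
# hits a narrow infant airway far harder than a wider older child's.
# Radii below are rough illustrative values, not clinical measurements.
def resistance_factor(radius_mm, swelling_mm):
    """Factor by which resistance rises when the lumen narrows by swelling_mm."""
    return (radius_mm / (radius_mm - swelling_mm)) ** 4

print(f"infant, 4 mm airway, 1 mm swelling: x{resistance_factor(4, 1):.1f}")      # ~3.2x
print(f"older child, 8 mm airway, 1 mm swelling: x{resistance_factor(8, 1):.1f}") # ~1.7x
```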
Updated: Jan 7 You may be perplexed after reading the title of this blog entry. The notion may sound familiar, but it's conceivable that you've forgotten what it means since you graduated from high school English. You may be even more perplexed if you are learning English as a non-native speaker. Fortunately, if you continue reading, you'll realize that 'interjections' are more widespread than we think and play an important part in English literature. Below you'll also find instances of what interjections are and where they are placed in a sentence. If your curiosity has been piqued, keep reading to find out more about this underrated and underused part of speech. To address the question presented in the first paragraph, an interjection is one of the eight parts of speech, alongside nouns, verbs, pronouns, adjectives, adverbs, prepositions, and conjunctions. While interjections are unnecessary in most sentences and are regarded as the least significant part of speech, they can nevertheless be used effectively to capture the reader's attention, especially when writing a short tale or book. You need nouns, verbs, and even adjectives to construct a complete sentence that makes grammatical sense, but you don't need an interjection to do so. Interjections are nevertheless a vital part of speech in circumstances that call for portraying the depths of emotion within a sentence. Interjections can be used to express emotions or feelings, but they are not as grammatically significant as other parts of speech. If you want to use interjections in academic or professional writing, I would advise against it. Interjections are, on the other hand, ideal for more artistic or creative styles of writing that allow you, as the author, to express your thoughts or feelings to the readers. When it comes to the placement of interjections within a sentence, you have a couple of options to consider. To indicate a strong feeling, it is usual to use an interjection such as "Wow!" or "Oh no!" followed by an exclamation at the beginning of a sentence. As an example: "Oh no! I didn't do my assignment last night!" "Wow! That was a thrilling football game!" Interjections do not have to be located at the beginning of a sentence; they can also be found in the middle or at the end. Emotions and feelings can be expressed in any portion of a phrase as long as it makes sense within the context of the tale or article. For instance: "So, it's going to rain today, huh?" The interjection in this example is "huh?", which occurs at the end of the sentence to convey disappointment or surprise that it's going to rain. When used as an interjection in English, 'huh' indicates uncertainty rather than enthusiasm or amazement, so it takes a question mark rather than the exclamation point (!) that follows an interjection such as "Wow!" Consider the following example of an interjection used in the middle of a statement: "According to what you just told me, my gosh, that's the most intelligent argument you've ever made." In this scenario, you don't need an exclamation mark, but you should realize that the author of this quote is expressing his opinion that what the other person just said was incredibly brilliant and insightful. Last but not least, interjections can function as whole sentences when followed by an exclamation mark (!) or a question mark (?). Even though interjections are only one or two words long, the emotion or sentiment they represent can constitute its own sentence.
The entire notion has already been communicated in terms of the emotion felt by the narrator or character. In this unusual circumstance, a full sentence requires neither a subject nor an action. "Wow!" is an example of an interjection that can stand alone as a sentence, as are "Huh?" and "Oh my goodness!" If you're still not sure what interjections are, let me offer you some more popular instances to help you out: Hah, Boo, Ew, Dang, Darn, Gosh, Oh, Oh No, Ouch, Shoot, Uh-Oh, Ugh, and Yikes are some examples. There are hundreds of interjection examples online that you can begin employing in your own stories or articles; these are only a few of the most popular ones you'll come across in the English language. There are also examples of interjections used within longer phrases that can help you understand them in context. Remember that knowing when and where to utilize these interjections is critical. Interjections should be avoided in academic and formal writing, because there you are attempting to convey facts and pertinent information rather than emotions and feelings. When it comes to artistic or creative writing, though, you can use as many interjections as you wish. While interjections are thought to be an unpopular and relatively obscure part of speech, they are fairly common when you think about it. Interjections play an important role in the English language and should not be disregarded, whether you are messaging back and forth with friends or writing a gripping fictional novel.
Posted on Scientific American: By David Biello — The purple sea urchin may be able to evolve to cope with ocean acidification, but that does not mean other species will be able to mimic the trick. FAST EVOLUTION: New research suggests that the purple sea urchin may be able to evolve to cope with the ocean acidification brought on by climate change. Image: Kirt L. Onthank. The oceanic pincushion known as the purple sea urchin relies on its many spines and pincers for protection and food. An inability to form its spiny shell would devastate the species, which thrives on rocky shores off North America’s west coast. Unfortunately for the purple sea urchin, higher carbon dioxide levels in the atmosphere as a result of human fossil-fuel burning presage a more acidic ocean that might make it harder to form such shells. But new research suggests that the purple sea urchin may have the genetic reserves to combat this insidious threat. A study published in Proceedings of the National Academy of Sciences on April 8 found that exposing purple sea urchins to the kinds of acidified ocean conditions possible in the future unleashed genetic changes that may help the animal survive. The researchers showed that although the exterior of sea urchin larvae changed very little, their genetics adapted to high-CO2 environmental conditions within a single life span. Shifting environmental conditions have always played an outsize role in driving evolution. A climate change from cold to hot transforms everything an organism needs to survive and thrive, so each animal, plant, microbe and fungus species must adapt or die—as happened during the transition out of the most recent ice age. So the question isn’t if the current bout of human-induced climate change will drive evolution, but how—and maybe when? In the case of the purple sea urchin, exposing urchin larvae to current and projected levels of ocean acidification—and then sampling their genes at set dates of development—revealed a population undergoing genetic changes under more acidic conditions. Simply put, those larvae with versions of genes better adapted to high-CO2 conditions became more common. “In a sense, it is the beginning of evolution,” explains biologist Melissa Pespeni of Indiana University Bloomington, who led the experiment. “Only the individuals with the ‘right’ gene copies would be able to pass their genes on to the next generation.” The genes in question code for proteins involved in processes like extracting shell-building minerals from seawater or fat metabolism. The larvae exposed to today’s conditions showed none of the changes seen in those exposed to higher-CO2 conditions. And the effect grew over time—some selection could be detected after one day, but an even more prominent shift was apparent by the seventh day of development. Previous studies have suggested that such purple sea urchins—and other shell-forming organisms—would struggle to grow and develop as the ocean grew more acidic, results that the new study ascribes to differing lab conditions, particularly how densely the urchin larvae are packed. Although purple sea urchins like to cluster close together, testing larvae under these conditions may have exacerbated the impact of ocean acidification. Of course, purple sea urchins are unlikely to face stress only from ocean acidification; other threats include overfishing for urchin roe.
This research suggests that the key to any evolved response to ocean acidification is having enough diversity in the population to allow natural forces to pick and choose what survives and thrives. Plus, “we don’t know if there are negative side effects of such rapid evolutionary change,” Pespeni notes. The genes lost as a result of selection to cope with high acidity could prove to play an important role in anything from avoiding predators to immune system responses. Regardless, such evolution does not have to be slow, as this sea urchin work shows. Research in soil mites published on April 8 in Ecology Letters reaffirms that point, finding that laboratory-induced natural selection—in this case for shorter maturation time—can work in as little as 15 generations. Then again, the purple sea urchin may be uniquely prepared to face a future of increased acidity. The upwelling ocean environment where it lives periodically fluctuates between high- and low-CO2 seawater conditions anyway. That means the population may have retained the genetic capacity to deal with high CO2—Pespeni notes they have more genetic variability than most other organisms, a genetic reservoir that may serve the urchins well as they face the effects of climate change. That also suggests other organisms at sea and on land without that history of exposure will not share the same genetic resilience—as well as often lacking the purple sea urchins’ large population sizes. “Right now, it’s really unclear what sorts of species are likely to be able to evolve their way out of trouble,” says ecologist Dov Sax of Brown University, who was not involved in this research. “It’s a giant question that needs to be resolved and feeds into the issue of who is most at risk of extinction from climate change.”
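The "better-adapted gene copies become more common" dynamic can be illustrated with a deliberately minimal haploid selection model. The fitness advantage and starting frequency below are invented for illustration and are not estimates from the study.

```python
# Toy haploid selection model: an allele with a relative fitness
# advantage under high-CO2 conditions rises in frequency each round.
# Numbers are illustrative only, not drawn from the PNAS study.
def next_freq(p, advantage=1.2):
    """One round of selection on allele frequency p."""
    return p * advantage / (p * advantage + (1 - p))

p = 0.10  # assumed starting frequency of the favored gene copy
for day in range(1, 8):
    p = next_freq(p)
    print(f"day {day}: frequency {p:.2f}")
```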
You know the three R’s (Reduce, Reuse, Recycle) – now let’s talk about the three E’s: Energy, Economy, and Environment. Recycling saves energy because the manufacturer doesn't have to produce something new from raw natural resources. When energy consumption goes down, production costs go down. Savings are passed on to the consumer, and when recycled products are purchased, demand for more recycled goods grows. This improves the economy. Waste has a huge negative impact on the natural environment, including the land, air, and water. Recycling reduces the need for more landfills, where harmful chemicals and greenhouse gases are released into the air and land. By recycling, you are actively improving the planet’s overall health by helping keep the air, water and land clean.
Atomic weight refers to the mass of an average atom of a particular element. This weight is calculated by adding together the number of protons and neutrons in the average nucleus (electrons, which are also part of every atom, are of extremely low mass and are therefore usually omitted for the sake of convenience). In chemical formula shorthand, the atomic weight for an element or isotope is usually written directly after the element name, such as Carbon-12 (C-12) or Uranium-235 (U-235). Atomic weight is important not only for identifying what types of atoms may make up a sample based on the sample’s mass, but also for differentiating isotopes of a particular element, some of which are stable but others of which are radioactive. HOW TO CALCULATE THE ATOMIC WEIGHT Atomic weight is calculated either for the average atom of an element, by taking the weighted average of the atomic weights of all isotopes based on their normal frequency within a sample, or, for a sample of a single isotope, by counting the number of neutrons and protons in the nucleus of that particular isotope. Chemists are now able to identify with complete accuracy the number of neutrons and protons in the nuclei of stable isotopes; the more difficult task, for the moment, is identifying isotope frequency. For example, there are three isotopes of hydrogen, two of which are stable: hydrogen-1 and hydrogen-2 (also known as deuterium). Deuterium makes up only a very small amount of naturally occurring hydrogen, so the atomic weight, or the average atomic mass of all hydrogen in a sample, is 1.008 units. In chemistry, atomic weight is usually expressed in terms of the mole – that is, the mass of about 602 sextillion, or 6.02 x 10^23, atoms. This number is special because 602 sextillion protons or neutrons weigh about 1 gram – and, therefore, one mole of any atom will have a mass in grams equal to its atomic weight. ATOMIC WEIGHT vs. ATOMIC MASS In physics and chemistry, weight and mass are technically different properties: that is, there is a difference between mass and weight. Note that the same is true of the terms atomic weight and atomic mass, although, very confusingly, the difference between mass and weight is different from the difference between atomic weight and atomic mass. In this case, atomic mass is the mass of a particular atom at rest, which must be of a specific isotope (since any particular atom cannot be of multiple isotopes at once). Whereas atomic weight refers to the average mass of all of the atoms of an element in the sample, weighted according to the expected frequency of isotopes, atomic mass refers to the specific mass of a single atom of a particular isotope.
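The weighted-average calculation described above is easy to make concrete. Here is a minimal sketch in Python, using commonly tabulated values for hydrogen's two stable isotopes; treat the exact masses and abundances as approximate, since they vary slightly by source.

```python
# Weighted-average atomic weight from isotope masses (in unified atomic
# mass units) and natural abundances. Values are commonly tabulated
# approximations for hydrogen.
isotopes = [
    (1.00783, 0.999885),  # hydrogen-1
    (2.01410, 0.000115),  # hydrogen-2 (deuterium)
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(f"atomic weight of hydrogen: {atomic_weight:.4f}")  # ~1.008
```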
Decision making is the study of identifying and choosing alternatives based on the values and preferences of the decision maker. Making a decision implies that there are alternative choices to be considered, and in such a case we want not only to identify as many of these alternatives as possible but to choose the one that best fits our goals, objectives, desires, values, and so on (Harris, 1980). There are two types of decision making: individual and organizational. This article gives a thorough explanation of theories, models and processes of decision making, examines decision making at Apple vs. Google, and presents the groupthink phenomenon. Decision Making Theories Classical decision making theory is based on the orientation of the decision makers toward values such as economics or market conditions. According to March (1994), a “rational choice” is one which is based on relatively fixed preferences and follows a logic of consequence, by which current actions are dictated by anticipation of the value associated with future outcomes. After finding and gathering information and facts, decision makers are assumed to choose among alternatives according to certain values (minimizing bad consequences and regret, maximizing profits). However, critics note that an individual's rationality is limited by the information they have, and that the human ability to absorb every stimulus received through empirical experience is also limited. It is therefore impossible to implement a fully rational process of finding an optimal choice based on complete information. Classical theory is also impossible to implement in organizations: managers and company executives have only limited time to consider the available information. Besides, the theory presupposes that there is one best outcome. Rational theory holds that it is important to consider every option and its consequences; in practice, given the limited time available for decisions, several other factors intervene, such as company culture. Decision makers also shape their decisions around others' preferences, not only their own; sometimes they set aside their own preference and follow advice or tradition. In contrast with classical decision making theory, administrative (also known as behavioral) decision making theory assumes humans are limited information processors, with neither the inclination nor the ability to make the sort of “consequential” calculations described by a rational choice perspective (Anderson, 1983; Hastie, 1986; Simon, 1979). Behavioral decision theories are much more likely to be concerned with the dynamic processes of how decisions are made, with information search and with strategies for making choices. There are several assumptions in behavioral decision theories. First, decision makers need to simplify the problem and reduce its complexity, owing to their limited individual abilities and to organizational conditions. Decision makers are bounded by their rationality, so the model of the problem has to be simplified w...
The Sun is 864,400 miles (1,391,000 kilometers) across. This is about 109 times the diameter of Earth. The Sun weighs about 333,000 times as much as Earth. It is so large that about 1,300,000 planet Earths could fit inside it. The innermost layer of the sun is the core, with a temperature of 15 million kelvins (27 million degrees Fahrenheit). The outermost layer of the sun is the corona. Visible only during eclipses, it is a low-density cloud of plasma with higher transparency than the inner layers. The white corona is a million times less bright than the inner layers of the sun, but it is many times larger. The corona is hotter than some of the inner layers: its average temperature is 1 million K (2 million degrees F), but in some places it can reach 3 million K (5 million degrees F). Temperatures steadily decrease as we move farther away from the core, but after the photosphere they begin to rise again. There are several theories that explain this, but none has been proven. In the corona, above sunspots and areas of complex magnetic field patterns, are solar flares. These sparks of energy sometimes reach the size of the Earth and can last for up to several hours. Their temperature has been recorded at 11 million K (20 million degrees F). The extreme heat produces X-rays that create light when they hit the gases of the corona. The sun is the source of virtually all heat for our planet. The earth is insignificant in comparison (a quick volume check follows below). If you want to feel even more insignificant, one programmer calculated that every human alive would fit into a single 900-meter ball. And yet if our politicians are to be believed, our failure to give them more control over the energy economy is boiling the planet. Seriously, people.... It's time to get over yourselves. It's the sun, and the occasional lack of it. You aren't that big of a deal.
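The "1,300,000 Earths" figure can be sanity-checked with one line of arithmetic, since volume scales with the cube of diameter. A quick sketch (diameters rounded; the Earth figure is the commonly quoted mean diameter):

```python
# Volume scales as diameter cubed, so (D_sun / D_earth)^3 approximates
# how many Earth volumes fit inside the Sun (ignoring packing effects).
d_sun_km = 1_391_000   # from the figure above
d_earth_km = 12_742    # mean Earth diameter, rounded

print(f"{(d_sun_km / d_earth_km) ** 3:,.0f} Earth volumes")  # ~1.3 million
```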
If you want to detect life on another planet, look for biomarkers—spectroscopic signatures of chemicals that betray the activity of living things. And in fact we may have already found a biomarker. In 2003 Earth-based astronomers caught glimpses of methane in the Martian atmosphere. The discovery was initially controversial, so much so that the discoverers themselves held back from publishing it. But the two of us and our colleagues recently confirmed the presence of methane using NASA’s Curiosity rover. It is the most tangible evidence we have ever collected that we may not be alone in the universe. Almost no matter where the methane comes from, it’s an intriguing discovery. If you dropped a molecule of methane into the atmosphere of Mars, it would survive about 300 years—that’s how long, on average, it would take for solar ultraviolet radiation and other Martian gases to destroy the molecule. By rights, the Martian atmosphere should have been scrubbed of its methane eons ago. So, the methane we see must come either from a source that is producing methane today or from a subsurface reservoir that is venting methane produced sometime in the past. On Earth, 95 percent of methane is biological in origin. The class of bacteria known as methanogens feeds on organic matter and excretes methane. They populate our planet’s wetlands, which account for nearly a quarter of the methane present in the Earth’s atmosphere globally. Cows’ gut bacteria are the second largest producers. It is the possibility of microbial life that has propelled the search for methane on Mars. But even if the methane there comes from geologic processes, it would give us a profound new respect for what looks outwardly like a geologically dead world. Methane can be produced by the geochemical process of serpentinization, which is widespread in Earth’s crust, especially at warm and hot hydrothermal vents on the ocean floor known as Lost City and Black Smokers. This process requires a source of geologic heat as well as liquid water. Those happen to be two main ingredients of life, as well. Mars is indeed active and has the potential of harboring past or present microbial life. The mystery isn’t just that we see methane when we shouldn’t. It’s also that, in a sense, we see too much of it. The Mars methane abundance varies dramatically in location and time, implying not only an unknown source, but also an unknown sink. The variation was evident in the very first detections from telescopes in Hawaii and Chile, reported by NASA astronomer Michael Mumma at a meeting of the Division of Planetary Sciences in 2003. The following year, Vittorio Formisano of the Institute for Interplanetary Space Physics in Rome and his team (including one of us, Atreya) published findings from the European Space Agency’s Mars Express orbiter. Like Mumma, Formisano’s team observed variations in methane abundance, although the values measured from Mars Express were much lower, about 15 parts per billion by volume (ppbv) global average. By comparison, the methane abundance on Earth is 1875 ppbv. (Gas concentrations are commonly measured by the volume a gas occupies, as opposed to its mass.) Both sets of observations sought the infrared spectral fingerprint of methane in sunlight reflected from the Martian atmosphere. The ground-based telescopic observations looked out through Earth’s own air, which also contains methane, so the analysis had to separate the Martian and terrestrial methane signals. 
Although the orbital data did not suffer from this problem, they had their own confounding factors, such as the presence of other gases with overlapping spectral lines in the same region. Both teams were very careful, but their observations remain controversial to this day. To resolve the issue, NASA decided in 2004 to dedicate an instrument on the Mars Science Laboratory mission (with its rover, Curiosity) to the methane question. The Sample Analysis at Mars (SAM) instrument package, built and operated by a team led by Paul Mahaffy of NASA, included a tunable laser spectrometer (TLS). The TLS performs an in-situ measurement of methane in a well-defined atmospheric volume of known temperature and pressure. The instrument first ingests Martian air into a cell about the size of a coffee cup. Then it fires an infrared laser into the gas to see how much light is absorbed. The laser scans across wavelengths to look for the distinctive fingerprint of methane and other gases. On its own, the TLS can measure methane to within about 2 ppbv. To achieve even higher sensitivities, SAM flows the ingested gas slowly over a compound that scrubs out the dominant carbon dioxide gas, thereby enriching the methane signals, and reducing the measurement uncertainty to about 0.1 ppbv. On Earth, the TLS technique has been used since the 1980s and produced the first airborne measurements of chlorine reservoirs in the ozone hole, the deuterium-to-hydrogen ratio in cirrus clouds, and methane measurements at numerous locations. The instrument does have one potential source of error. In the weeks prior to launch, it is normal for a spacecraft and its instruments to be exposed to Earth’s air during assembly test and launch operations at the launch site. In our case, the instrument foreoptics chamber (through which the laser beam passes before entering the sample chamber) took in a small amount of Florida air containing Earth methane. We compensate for this contamination by taking each measurement on Mars three times. First, we pump out all the Martian air, so that the sample chamber is a vacuum; that way, the only methane we measure will be the Floridian stowaway. Then, we let in the Martian air and measure again. Finally, we again empty the sample chamber and measure once more. In this way, we can isolate and subtract the Earth contaminants. Furthermore, we have seen no sign of leakage from the foreoptics chamber over its years on Mars. Because the effect of the contaminants remains constant, they cannot account for any variation we observe. Our instrument began its work when the Curiosity rover landed in Gale Crater on Mars in August 2012. Over a three-year period on the surface of Mars, the TLS-SAM generally observed low background levels (about 0.5 ppbv). The background levels oscillated with the Martian seasons, which was the first time that any Mars methane measurement has showed any repeatability. This background methane could have originated in comets and meteorites that crash periodically onto Mars. Or it could have come from interplanetary dust particles that flutter down to the Martian surface, bringing organic material that the sun’s ultraviolet radiation breaks down into methane. The seasonal pattern seems to correlate with the ultraviolet light flux reaching the surface and will tell us a lot about the delivery of organic material to the Martian surface. Surprisingly, during a single two-month period, four sequential observations reported a spike of 7 ppbv. 
These values were much too high to explain by comets, meteorites, or dust. They must have been of Martian origin—perhaps a burp from a relatively small and localized subsurface source to the north of the landing site. The Martian winds would blow that methane away over several months, explaining why the signal went away when it did. Alternatively, that pulse could be from a distant and much bigger source, which would require some other unknown mechanism to remove methane quickly. Like the earlier observations of plumes, the spikes seen by Curiosity remain a tantalizing clue to a still-enigmatic Mars. The methane data have shown that Mars is indeed active and has the potential of harboring past or present microbial life. But many puzzles remain, demonstrating how any potential biomarker we see will always require meticulous follow-up work. The European Space Agency’s ExoMars Trace Gas Orbiter, which reached Mars last month, includes powerful instrumentation for sensing methane, looking either straight down or at the limb of the planet when backlit by the sun. The pair of methods can measure how the abundance varies with altitude and is spread out across the planet. As Curiosity continues its near-surface measurements with ExoMars peering down from above, we will be able to answer the question: To what extent are the rover measurements representative of the whole planet? And then we can begin to understand whether we share our solar system with Martian microbes. Sushil K. Atreya is a professor at the University of Michigan at Ann Arbor and a Distinguished Visiting Scientist at the Jet Propulsion Laboratory in Pasadena, California. A specialist on the origin and evolution of planetary atmospheres, he has worked on Voyager, Galileo, Cassini-Huygens, Venus Express, Mars Express, Mars Science Laboratory, and Juno missions. Christopher R. Webster is the director of the Microdevices Laboratory at the Jet Propulsion Laboratory in Pasadena, California. He has pioneered the development of tunable laser spectrometers for balloons, aircraft, and spacecraft. He has led over 500 aircraft and 20 high-altitude balloon missions for Earth studies, leading up to the selection of his spectrometer for the Mars Curiosity rover.
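As a footnote on the measurement procedure: the three-step contamination correction described earlier in this article (empty cell, Martian air, empty cell again) amounts to a simple subtraction, since the foreoptics methane signal stays constant. A hedged sketch with invented numbers, not actual TLS data:

```python
# Three-step correction: average the two empty-cell runs (foreoptics
# contamination only) and subtract from the run with Martian air.
# All numbers are invented for illustration, in ppbv-like units.
def martian_methane(empty_before, with_sample, empty_after):
    background = (empty_before + empty_after) / 2
    return with_sample - background

print(martian_methane(empty_before=9.6, with_sample=10.1, empty_after=9.6))  # ~0.5
```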
Sugar alcohols (also called polyhydric alcohols, polyalcohols, alditols or glycitols) are organic compounds, typically derived from sugars, that comprise a class of polyols. They are white, water-soluble solids that can occur naturally or be produced industrially from sugars. They are used widely in the food industry as thickeners and sweeteners. In commercial foodstuffs, sugar alcohols are commonly used in place of table sugar (sucrose), often in combination with high-intensity artificial sweeteners to counter their low sweetness. Xylitol and sorbitol are popular sugar alcohols in commercial foods. Production and chemical structure Sugar alcohols have the general formula HOCH2(CHOH)nCH2OH. In contrast, sugars have two fewer hydrogen atoms, for example HOCH2(CHOH)nCHO or HOCH2(CHOH)n−1C(O)CH2OH. The sugar alcohols differ in chain length. Most have five- or six-carbon chains, because they are derived from pentoses (five-carbon sugars) and hexoses (six-carbon sugars), respectively. They have one OH group attached to each carbon. They are further differentiated by the relative orientation (stereochemistry) of these OH groups. Unlike sugars, which tend to exist as rings, sugar alcohols do not. They can, however, be dehydrated to give cyclic ethers; e.g., sorbitol can be dehydrated to isosorbide. Sugar alcohols occur naturally, and at one time mannitol was obtained from natural sources. Today, they are often obtained by hydrogenation of sugars, using Raney nickel catalysts. The conversion of glucose and mannose to sorbitol and mannitol is given:
- HOCH2CH(OH)CH(OH)CH(OH)CH(OH)CHO + H2 → HOCH2CH(OH)CH(OH)CH(OH)CH(OH)CH2OH
More than a million tons of sorbitol are produced in this way every year. Xylitol and lactitol are obtained similarly. Erythritol, on the other hand, is obtained by fermentation of glucose and sucrose. Consumption of sugar alcohols affects blood sugar levels, although much less than sucrose does, as measured by glycemic index. Sugar alcohols, with the exception of erythritol, may also cause bloating and diarrhea when consumed in excessive amounts. Common sugar alcohols Both disaccharides and monosaccharides can form sugar alcohols; however, sugar alcohols derived from disaccharides (e.g. maltitol and lactitol) are not entirely hydrogenated because only one aldehyde group is available for reduction. Sugar alcohols as food additives
Name | Sweetness relative to sucrose | Food energy (kcal/g) | Sweetness per food energy | Food energy for equal sweetness | Glycemic index
[The table's data rows were not preserved in this copy; only the column headers survive.]
As a group, sugar alcohols are not as sweet as sucrose, and they have slightly less food energy than sucrose. Their flavor is like sucrose, and they can be used to mask the unpleasant aftertastes of some high-intensity sweeteners. Sugar alcohols are not metabolized by oral bacteria, and so they do not contribute to tooth decay. They do not brown or caramelize when heated. In addition to their sweetness, some sugar alcohols can produce a noticeable cooling sensation in the mouth when highly concentrated, for instance in sugar-free hard candy or chewing gum. This happens, for example, with the crystalline phase of sorbitol, erythritol, xylitol, mannitol, lactitol and maltitol. The cooling sensation is due to the dissolution of the sugar alcohol being an endothermic (heat-absorbing) reaction, one with a strong heat of solution. Sugar alcohols are usually incompletely absorbed into the blood stream from the small intestine, which generally results in a smaller change in blood glucose than "regular" sugar (sucrose).
This property makes them popular sweeteners among diabetics and people on low-carbohydrate diets. However, like many other incompletely digestible substances, overconsumption of sugar alcohols can lead to bloating, diarrhea and flatulence because they are not fully absorbed in the small intestine. Some individuals experience such symptoms even in a single-serving quantity. With continued use, most people develop a degree of tolerance to sugar alcohols and no longer experience these symptoms. As an exception, erythritol is actually absorbed in the small intestine and excreted unchanged through urine, so it contributes no calories even though it is rather sweet. The table above presents the relative sweetness and food energy of the most widely used sugar alcohols. Despite the variance in food energy content of sugar alcohols, EU labeling requirements assign a blanket value of 2.4 kcal/g to all sugar alcohols.
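The two derived columns in the table above (sweetness per food energy, and food energy for equal sweetness) are simple ratios. A sketch with commonly cited approximate values, which should be treated as illustrative rather than authoritative:

```python
# Derived columns as ratios. Sweetness is relative to sucrose = 1.0;
# food energy is in kcal/g. Values are commonly cited approximations.
sweeteners = {
    "xylitol":    (1.0, 2.4),
    "sorbitol":   (0.6, 2.6),
    "erythritol": (0.7, 0.2),
}

for name, (sweetness, kcal_per_g) in sweeteners.items():
    per_energy = sweetness / kcal_per_g          # sweetness per unit food energy
    equal_sweet = kcal_per_g / sweetness         # kcal/g at sucrose-equal sweetness
    print(f"{name}: sweetness per energy {per_energy:.2f}, "
          f"energy for equal sweetness {equal_sweet:.2f} kcal/g")
```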
The assonance is a rhyme form and a stylistic device of rhetoric. Assonance meets us primarily in lyrical texts, but it can appear in works of all kinds and genres. The assonance is a vocalic half-rhyme, which means that neighboring words share the same vowels (a, e, i, o, u, ä, ö, ü, eu, au). This form of rhyme is particularly widespread in Old Spanish and Old French poetry, and it is often found in lyrical imitation or translation of these works (see Romanticism). Note: The word assonance is derived from the Latin (ad ~ to, at; sonare ~ to sound) and can be translated roughly as "to sound with". This interpretation describes the assonance quite well. The assonance by means of an example with the vowel "A": to sleep / to complain (in the German original, both words carry the vowel "A"). The above pair illustrates the principle of assonance. The two words basically do not rhyme; in any case, they do not sound identical because of the last syllable. However, they share the same vowel, the "A", and thus gain a vocalic harmony. This means that the vowels in neighboring words – words that follow one another – are the same and thus form a kind of rhythm within the poem. Let us take a look at the first stanza of Ernst Jandl's poem "ottos mops": ottos mops trotzt / otto: fort mops fort / ottos mops hopst fort ("otto's pug defies / otto: away, pug, away / otto's pug hops away"). Jandl thus uses the consonance of the vowel "O". There is a final rhyme between the second and third verses, but the rest of the poem rests on the single vowel being repeated continuously, binding it into a unity. Examples of the assonance: Heart | ache sore. Here we are dealing with a double assonance, since the correspondence is found within two words. This form of half-rhyme is very often used in rap to create a rhythm in the lyrics. The two nouns "Stab" (staff) and "Macht" (power) likewise form, in immediate sequence, an assonance created by the shared vowel "A"; in this way the two words, when they follow each other, sound similar and rhythmize a text. Finally, let's look at an example from Brentano's Romances of the Rosary: Black ladies, black gentlemen / walk through Bologna's streets. / Are they going to a corpse? / Who goes so late to the grave? / But no priest is to be seen, / no cross and banner carried. / Everything flows loud and lively, / and the fast carriages rattle past. / Not to matins or vespers, / Miserere, Salve, Ave, / nor to any requiem mass: / these are read only in the evening. Note: In this example, only the assonances in the first stanza were marked (color-coded in the original article). The most important facts about the assonance at a glance: The assonance is best described as a vocalic half-rhyme. Furthermore, we can only speak of assonance when a vowel repeats itself in at least two, ideally three, neighboring words. The assonance is related to the alliteration, since the repetition of initial sounds can create a similar pattern within a work (→ examples of the alliteration). In lyric poetry, the assonance usually serves as a stylistic means to bind verses together. The assonance is one of the most common stylistic devices, even though it is sometimes difficult to recognize.
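A crude way to operationalize the definition above is to compare the vowel sequences of neighboring words. The sketch below is a toy heuristic over written vowels (including German umlauts and diphthongs), not a phonological analysis; real assonance depends on pronunciation and stress, which this ignores.

```python
import re

# Toy heuristic: extract a word's written vowel sequence and call two
# different words "assonant" if the sequences match. This ignores real
# pronunciation, stress, and syllable structure.
VOWEL = re.compile(r"au|eu|ei|[aeiouäöü]")

def vowel_seq(word):
    return VOWEL.findall(word.lower())

def assonant(a, b):
    return a.lower() != b.lower() and vowel_seq(a) == vowel_seq(b)

print(assonant("Stab", "Macht"))   # True: both carry the vowel "a"
print(assonant("Stab", "Licht"))   # False: "a" vs. "i"
```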
LAWRENCE — “Insects dominate our world,” according to University of Kansas researcher Michael Engel. Thus, anything scientists can learn about the evolution of insects leads to a better grasp of how biology in general has changed over time. “More than half of all known species on the planet are insects, and they rule virtually all terrestrial and freshwater ecosystems,” said Engel, a professor of ecology and evolutionary biology. “Many insect lineages are ecologically ubiquitous — such as bees, ants, termites — and they impact our daily lives in a big way. They pollinate our crops; they are the sources of many of our medicines or other chemicals; and some are tied to the spread of disease.” Understanding the factors that led to insect origins and fueled their successes, as well as what pushed particular groups to extinction, such as the influence of climate change, is vital to human health and security. Engel has just co-authored a paper in the prestigious journal Nature that sheds new light on the evolution of the Eumetabola, a scientific term for the group of organisms that includes most insect species. “Beetles, bees, ants, wasps, flies, butterflies, moths, fleas, lacewings, lice, thrips, aphids, true bugs and all of their close relatives are eumetabolous insects,” said Engel. “If you are talking about insects, then you are likely talking about a eumetabolan insect.” The researcher, who also serves as a senior curator at KU’s Biodiversity Institute, said that in spite of the importance of this group of insects, their earliest origins have been difficult to pin down. “There’s been a lack of identifiable fossils from the Carboniferous Period or earlier deposits,” Engel said. “Until now, the first definitive specimens assignable to the Holometabola, the big chunk of the already massive Eumetabola, were from the early Permian — but those aren’t the most primitive of their kind, except in a few cases, and pointed to much earlier diversification events. From the Carboniferous, the immediately preceding time period, we only had specimens of much more primitive insect lineages.” The size of the fossils makes them difficult to detect, according to Engel. “They’re tiny, so unless you are hunting for them, they would be easy to overlook,” he said. “Also, fossils from these deposits aren’t preserved with a strong contrast between them and the surrounding rock. Thus, it takes specialized lighting to get them to easily pop out.” Nevertheless, Engel and his co-authors in Nature describe newly discovered specimens and fragments that can be confidently tied to holometabolan lineages. More significantly, the specimens are not of the typical orders but are far more primitive. “For example, there’s a species related to the lineage that eventually would give rise to the wasps in the Triassic,” he said. “It wasn’t a wasp itself and instead would look more like some kind of generalized primitive group, but it already had a few of the evolutionary novelties that would later be part of the order of wasps, ants and bees.” Among the other five species Engel and his colleagues describe in Nature, one is an early relative of true bugs and their relatives; one is an early relative of barklice, and then ultimately true lice; and one is an early relative of the lineage that would give rise to the beetles in the Permian.
KU is a leader in paleontology generally, and Engel’s lab is one of a few worldwide with expertise in the fossil record of insects — and, among those labs, an even smaller subset has sufficient expertise in the Paleozoic. Engel said an understanding of the ancient development and origins of insects is vital for an understanding of our modern world. “Our own evolution — biotic and cultural — is inextricably woven into the lives of insects,” he said. “Insects have been on the planet for at least 410 million years and were first to fly, first to develop agriculture and first to form complex societies. We like to talk about the ‘Age of Dinosaurs’ or the ‘Age of Mammals,’ but all of these are greatly dwarfed by an overarching ‘Age of Insects.’ If humans vanished from the Earth, it would have only a beneficial effect on the greater biota. If insects disappeared, then life itself would struggle to persist.”
Canada is known for its large deer, ferocious bears, and a variety of avian species. Still, one of its greatest treasures is the Canadian lynx, a unique and beautiful feline.

The Canadian lynx is a medium-sized cat belonging to the family Felidae. It is recognizable by its long ear tufts that flare up into points, its thick grey coat, and its short bobbed tail similar to that of a bobcat, although the two are not to be confused: the Canadian lynx is a more dangerous predator and is somewhat larger than the bobcat. Standing between 48 and 56 cm tall at the shoulder, spanning 76 to 110 cm in length, and weighing in at 8 to 11 kg, it is one of the largest cats to range through the Canadian North and parts of the United States of America.

As they are spread throughout a relatively large portion of North America, the Canadian lynx's diet changes according to location. The main diet of lynxes in Canada and Alaska consists of the snowshoe hare, which is found in much larger numbers throughout these northern climates. Lynxes living further south will hunt mice, voles, red squirrels, grouse, ptarmigan, and other small birds. The Canadian lynx is a solitary animal and hunts alone by stalking its prey. Their usual snowshoe hare diet means they must be agile and fast hunters in order to kill their evasive, protein-rich prey.

Habitat and Range

Where their prey goes, the lynx follows, and as such most Canadian lynxes are distributed where the snowshoe hare can be found. These areas are commonly dense boreal forests, as well as open forests and rocky mountainsides. Although lynxes can be found in the north-central US and Alaska, about 95% of the Canadian lynx population is found in Canada, where they are an indigenous species. Their thick coats allow them to deal with the cold winter temperatures there, and their wide-spread paws make it easier to walk across the snow without sinking. According to the International Union for the Conservation of Nature's Red List of Threatened Species, the Canadian lynx is classified as a species of "Least Concern," though its population is threatened in certain regions, especially in the United States.

The Canadian lynx is a nocturnal hunter, and its big eyes and exceptional hearing help this predator catch its prey. Although they are fast enough to outrun a small rabbit, they are not known for their speed. Instead, the Canadian lynx is a sneaky hunter that covertly stalks its prey for up to several hours.

Common enemies of the Canadian lynx include mountain lions, wolves, and coyotes. At times, lynxes and other predatory animals will fight if they come into contact with one another, which often doesn't end well for the Canadian lynx, usually the smaller of the two mammals in question. Hunters also pose a threat to lynxes, as their beautiful fur is made into top-dollar pelts in the fashion industry.

During the later winter months, mating season commences for many wild animals, and that of the Canadian lynx begins upon the return of warm weather. Breeding lasts only a short period of time for the Canadian lynx, roughly spanning the month of April across most parts of their range. Once pregnant, female Canadian lynxes will produce litters of one to eight kittens after a gestation of 63 to 70 days. The kittens open their eyes 10 to 17 days after birth and begin walking at 24 to 30 days.
By five months old, the kittens are weaned off their mothers' milk and begin eating solid meat to meet their energy needs. After this, it takes around 23 months for lynxes to fully mature into adults.
A leptocephalus (meaning "slim head") is the flat and transparent larva of eels and other members of the superorder Elopomorpha. This is the most diverse group of teleosts, containing some 800 species across 24 families and 156 genera. The group is thought to have arisen in the Cretaceous period, over 140 million years ago.

Fishes with a leptocephalus larval stage include the most familiar eels, such as the conger, moray eel, and garden eel, and the freshwater eels of the family Anguillidae, plus more than 10 other families of lesser-known types of marine eels. These are all true eels of the order Anguilliformes. The fishes of the other four traditional orders of elopomorph fishes that have this type of larva are more diverse in their body forms and include the tarpon, bonefish, spiny eel, and pelican eel.

Leptocephali (singular: leptocephalus) all have laterally compressed bodies that contain transparent jelly-like substances on the inside and a thin layer of muscle with visible myomeres on the outside. Their body organs are small, and they possess only a simple tube for a gut. This combination of features makes them very transparent when they are alive. While leptocephali have dorsal and anal fins that are confluent with their caudal fins, they lack pelvic fins. They also lack red blood cells until they begin to metamorphose into the juvenile glass eel stage, when they start to look like eels. Leptocephali are also characterized by fang-like teeth that are present until metamorphosis, when they are lost.

Leptocephali differ from most fish larvae because they grow to much larger sizes (about 60–300 mm and sometimes larger) and have long larval periods of about 3 months to more than a year. Another distinguishing feature of these organisms is their mucinous pouch. They move with typical anguilliform swimming motions and can swim both forwards and backwards. Their food source was difficult to determine because no zooplankton, the typical food of fish larvae, were ever seen in their guts. It was recently found, though, that they appear to feed on tiny particles floating free in the ocean, often referred to as marine snow. Leptocephali live primarily in the upper 100 meters of the ocean at night, and often a little deeper during the day. They are present worldwide in the ocean from southern temperate to tropical latitudes, where adult eels and their close relatives live. American eels, European eels, and conger eels, as well as some oceanic species, spawn and are found in the Sargasso Sea.

Indian glassy fish

The Indian glassy fish, Parambassis ranga, is a species of freshwater fish in the Asiatic glassfish family (Ambassidae) of the order Perciformes. It is native to an area of south Asia from Pakistan to Malaysia. The Indian glassy fish has a striking transparent body revealing its bones and internal organs; the male develops a dark edge to the dorsal fin. The fish grows to a maximum overall length of 80 millimetres (3.1 in).

The glass catfish (Kryptopterus bicirrhis) is an Asian glass catfish of the genus Kryptopterus. Until 1989, it included its smaller relative the ghost catfish, now known as K. minor. Its scientific name and common name are often still used in the aquarium fish trade to refer to the ghost catfish; as it seems, the larger and more aggressive K. bicirrhis was only ever exported in insignificant numbers, if at all.
This Simple Graphic Shows Us All How An Electric Car Works

The visualization makes it super simple to understand.

When Nikola Tesla invented the alternating current motor in 1887, he paved the way for the modern electric vehicle more than a century later. Electric vehicles could make gas- and diesel-powered vehicles obsolete by the year 2025, effectively ending the reign of the internal combustion engine. The acceptance of electric vehicles into car culture has already begun, with the Tesla Model S winning the Motor Trend Car of the Year in 2013.

Understanding how an electric vehicle works is actually much simpler than understanding how a gas- or diesel-powered car works. That's why The Zebra created the below infographic — to help readers understand the basics of electric vehicles and how they are instrumental in changing our environment for the better.

Electric vehicles are more efficient in just about every way compared to standard gas- and diesel-powered engines. We highlighted some of the main reasons why electric vehicles are better below:

- High performance – Electric vehicles have instant acceleration, allowing them to reach incredible speeds in seconds. The Tesla Model S is the second-fastest production vehicle, with a 0-to-60 mph time of 2.28 seconds.
- No noise – With no internal combustion engine, electric vehicles are significantly quieter than gas- or diesel-powered vehicles.
- No pollution – According to the EPA, motor vehicles collectively cause 75 percent of carbon monoxide pollution in the U.S. Electric vehicles produce no pollution at the vehicle level.
- Lower maintenance and driving cost – With fewer mechanical parts and an overall simpler design, electric vehicles don't risk the same mechanical issues that gas- and diesel-powered vehicles do.

Electric vehicles are helping us pave the way for real environmental change, but if you don't have the resources to buy a new Tesla, there are still many things you can do to help improve the world we live in. Consider using natural cleaners and reducing your plastic waste. When driving, stay calm and alert, and make sure your vehicle's maintenance and insurance are up to date.

Source: The Zebra
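To put the "lower driving cost" point in rough numbers, here is a minimal back-of-the-envelope sketch in Python (not part of the original infographic). All four constants are illustrative assumptions, not figures from The Zebra; plug in local consumption and price values to get a comparison that is meaningful for you.

```python
# Rough per-kilometre energy-cost comparison: EV vs. gasoline car.
# All four constants below are illustrative assumptions, not sourced figures.

EV_KWH_PER_100KM = 18.0      # assumed EV consumption (kWh / 100 km)
ELECTRICITY_PER_KWH = 0.15   # assumed electricity price ($ / kWh)

GAS_L_PER_100KM = 8.0        # assumed gasoline consumption (L / 100 km)
GAS_PRICE_PER_L = 1.00       # assumed fuel price ($ / L)

ev_cost_per_km = EV_KWH_PER_100KM * ELECTRICITY_PER_KWH / 100
gas_cost_per_km = GAS_L_PER_100KM * GAS_PRICE_PER_L / 100

print(f"EV:  ${ev_cost_per_km:.3f} per km")   # ~$0.027 per km with these numbers
print(f"Gas: ${gas_cost_per_km:.3f} per km")  # ~$0.080 per km with these numbers
print(f"Ratio: {gas_cost_per_km / ev_cost_per_km:.1f}x")
```

With these placeholder numbers, the EV costs roughly a third as much per kilometre to "fuel"; the actual ratio depends heavily on local electricity and fuel prices.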
Goal-directed Instructional Design Plan - English Research

1. A problem or a need – there must be a problem of practice or an educational need that should be addressed during the lesson.

As we finish the unit for Animal Farm, our research-based character sketch project requires 10th grade students to be able to identify and then utilize credible sources from the Internet. This has been a huge obstacle in the past. Students often immediately go to Google, or use sources that allow editing, such as Wikipedia. They will need to reapply this knowledge during the Julius Caesar unit.

2. A real-world performance – how the learning objective fits into a real-world activity or need.

The ability to research, and most specifically to use the Internet to find credible sources, is a tool students can use for the rest of their lives. It is a wonderful skill to be able to discern which sources are credible and which are not. Students are already inundated with a constant flow of information, and they need to learn to analyze what they are being presented with. Research will absolutely be a part of their lives in that they will need to do research when buying a car, buying a house, etc. Practice with research using credible sources will prepare them for the future. They can transfer the knowledge that comes with practicing using credible sources to many other areas of their lives, not just research papers for a grade.

3. An instructional objective – the objectives are based on the final outcome, activity or test. These objectives will each be different for the four types of knowledge: performing skills, recalling facts, identifying examples of concepts, and applying principles.

a. Students will take notes on what requirements credible sources must have.
b. Students will learn the basics about the Internet, including web addresses, types of websites, why certain sites are the first to come up when a search is done on Google/Yahoo, using key terms when searching, etc.
c. Formatively, students will take a quiz on what types of sources are credible or not.
d. Students will go to the Media Center to be instructed on the use of databases from the Michigan Electronic Library, and they will use our media center's checklist with each source they select to use in their paper.
e. Students will see sample sources and how they may be cited within an essay, along with a sample works cited page.
f. Students will cite their sources on a works cited page.
g. Students will annotate these selected articles.
h. Students will write a paper using these sources to back up their claims with research.

4. A set of essential content – the basic ideas and skills that will allow the learner to complete the task or understand the content.

Students will learn essential Internet terminology in terms of types of web addresses (.org, .edu, .com, etc.) and also how to utilize the Boolean system of searching (or at least key words/phrases). Students will need to learn to gauge which sites will produce which types of information.

5. An evaluation consisting of a test or observation – an assessment, observation or product showing that the objectives can be accomplished in the real-world setting.

This can be assessed in a multitude of ways, whether formatively or summatively. One way I can begin to assess their knowledge of whether a source is credible or not is by having them fill out the Media Center Credible Source Checklist documents for each of the sources they plan to use in their Animal Farm Character Sketch Project.
At the end of each checklist they then have to explain why the source was, in their own words, credible or not. They need to justify their use of that source in their paper. This will be an indication of how much they have understood since the lectures, use of examples, quiz, etc., but also what needs reiterating. Using their annotations of the full articles found, I can also see that they read the whole article to try to analyze whether or not it served as a credible source. Perhaps before they turn in their final research project, they could create a Prezi or Glogster as a way to either present the information or have it constructed in a way that they can best understand, as if they were to present it. In this way I can see what they took away from the information as being most important in terms of research, the Internet, credibility terminology, searching for sources, and deciding whether or not a source is credible. This could be done individually, in small groups, or in partners, and could be presented on days before or in between going to the labs to research. Various discussions will also help me to gauge their level of understanding. I will be able to fully analyze their ability to find and utilize credible sources once their paper has been turned in. I will be able to see whether the sources were from educational portals or databases of published articles.

6. A method to help participants learn – the method to deliver the content; a lesson.

1. Access prior knowledge – what do students normally do when researching? Discuss with them whether they always just go straight to Google and click on the first source that is presented to them, or if they try different web searches and key terms to find what they need. Do they know that Wikipedia can be edited? Have they used sources like these when they've done research in the past? This will allow me to see where students are in terms of their understanding of this concept.

2. I have a handout that I use that goes through how sources are credible or not. It has Internet terminology listed, it discusses works cited within a paper but also the bibliography, and it goes through what it means to use sources within the context of school and otherwise. It also discusses accuracy, relevance, and reliability in terms of the websites/sources. This can either be projected with the use of a projector or through a PowerPoint-like presentation. Students will take notes and be able to ask questions as the class goes through to discuss. I will be able to demonstrate examples using my overhead projector as I go. I can also pull up where they can find the Media Center's credibility checklist on its webpage.

3. After taking notes, discussing, taking a quiz, and then also discussing those answers, they will go to the lab to create Glogster/Prezi presentations, individually or in pairs, of this information prior to actually researching.

4. After about two to three days of creating their presentations and presenting, I will allow them to go to the lab to begin their research. The requirement will be that for each source they plan on using within their paper, they must also fill out a credibility checklist along with writing their own justification for using that specific source. I believe they will have been scaffolded enough at this point to be responsible for their own learning, as they can apply all that we have done thus far.

5. I will collect the checklists as they go, and I will tell them they cannot use a source if I have not signed off on its credibility.

6.
Once I have signed off on their 3-4 sources, they can begin creating the actual paper/project.

○ Meaningfulness – content and activities must have meaning for the learner

As they know they are going to be doing a large project that requires research and is worth a major portion of their grade, this work should seem extremely meaningful, because they know it will be of immediate and necessary use. It will also be presented in a way that shows that if they learn this now, all research assignments in their future high school classes will go much more smoothly.

○ Pleasant consequences – the effects that achieving the goal will have on the learner

The students will have a better understanding of which sources are actually reliable and can actually serve the purpose of justifying their claims, because the credible source has also done research. The checklist will make them slow down the process and not just jump on the first webpage they see. They will actually have to read the article/document and justify their use of it. It makes them think about why or why not something can be considered reliable; just because it is on the Internet does not make it true.

○ Novelty – an attention-getting, humorous or curious manner that relates to the useful information in your lesson

I like to use examples about times that people purposely changed Wikipedia pages for their own benefit, or just to be funny. I also have pretty cool cartoons/pictures I use on documents to connect to the topic at hand.

● Socialization – a strong motivator for student learning

They are all working towards the same goal. They are all in need of coming to understand and being able to apply the concepts being taught in a group discussion format. They can share their own knowledge and help each other through the process.

● Audience – For what audience are you designing this lesson? Consider the following:

High School Sophomores (ages 15-16) in regular English 10.

○ Skill level (including technology skills)

As this is a regular English class, it has a wide variety of kids. Some kids may not have a computer at home and simply have difficulties using Microsoft Word, while others may be extremely proficient. In this way those who are proficient may finish sooner or complete aspects of the assignment more quickly, and thus can help those who are less proficient. Students will, of course, have used computers and done research in the past, but perhaps not in the way I am asking them to this time around. Some may know about the Michigan Electronic Library; some may never have heard of it. It may depend on their previous English teacher or research assignment and its requirements. Their ability level will be completely different across the board. For another example, some may type quickly because they have a computer at home, while those who don't may need more time.

● Technology Needs – the computers, software, programs (such as Angel or other CMSs), printers, equipment, Internet access, and time in the computer lab that will be needed to successfully complete your technology-rich lesson.
No Man Is an Island (a famous line from "Meditation XVII" by the English poet John Donne) is an expression that emphasizes a person's connections to his or her surroundings and other persons. Therefore, I can connect it to language itself, known as the most powerful tool that a person can have. To connect with other people and to start socializing across different backgrounds, whether job-related or friend-related, we use language as a tool for communication. There are different languages throughout the world, different teachers, different students and approaches. To emphasize the importance of languages in communication, here is my personal example of an approach to teaching English as a foreign language.

The method to be used will be Total Physical Response (TPR). The purpose of the course will be for the students to start communicating in English, which will motivate them to master the target language. "Learning emotions or feelings" will be the objective of the course. The skills that the students will learn on this day are related to this topic: they will be able to say how they feel and show their emotions using the vocabulary and the phrases presented. The teacher's purpose and the students' needs will be in consonance. Students will be motivated, and at the same time it is expected that they will enjoy their learning experience, because the teacher will be imitating all the feelings, and in response the students will be able to distinguish their meaning and imitate them back. That will make an interesting environment to work in, and it will encourage the shy students to interact with the others. This explains the affective factors regarding this particular course.

Taking the language skills into consideration, more emphasis will be put on the listening and speaking macro skills, as the teacher will be presenting all the vocabulary and phrases orally, simultaneously with the material used, and the students will be able to speak and pronounce them orally as well. So, we have receptive and productive skills involved, paying more attention to fluency than accuracy. On the other hand, vocabulary and pronunciation will be the main micro skills to be used. The focus is on communicative competence rather than on linguistic or grammatical competence. Based on our teaching and learning goals, our syllabus will be more function-based, as it will be used to express feelings and emotions.

Since the purpose of the course is for students to start communicating in English on a particular topic, expressing feelings in this case, the material that the teacher will be using will be diverse. The teacher will start the course by asking a simple question, "How do you feel today?", and will start imitating, by yawning, that he is feeling sleepy. Then, he will ask the same question, addressing some of the students. If they do not understand, he will start rubbing his belly and will say "You are feeling hungry!", simultaneously pointing at the student, so that the student will try to understand what the teacher is explaining. This is how the teacher will introduce the new topic, "Feelings". In the beginning he can say the literal meaning of the topic in the native language so that the students can be at the right pace: they will know what exactly they are learning that day. But the rest of the flow in the class will be conducted only in English.
Since the students are familiar with the topic of the course and the teacher has already modelled and demonstrated the structure, he continues with the class activities. First, he shows the class different pictures and flashcards showing the facial and body expressions of a young boy. As the teacher is showing the cards, he imitates the expressions, explaining each individual feeling, and makes the students repeat and show after him. The phrases that will be introduced in the class will be: "How do you feel today?"; "I feel… (sleepy)"; "You feel… (sleepy)"; "He/She feels… (sleepy)". The vocabulary used will be: happy, sad, angry, surprised, scared, bored, shy, shocked, sleepy, hot, cold, hungry, thirsty.

This activity can be followed by video material showing an animated story about a young boy going through different sequences of events. He has just woken up, still feeling sleepy, and that is why he is yawning. He rubs his belly because he feels hungry and rushes into the kitchen. There he sees a table full of different dishes that his mum has already prepared for him. His face shows that he is surprised and shocked, as his eyes become wide open. He takes a sandwich and looks more than happy, showing a big smile on his face. Then he makes a sound, "Aaaah", in this case showing that he is thirsty as well, and that is why he is reaching for the bottle of fresh orange juice. Suddenly his mother shows up in the kitchen and her face turns red, which means that she is angry because he did not wait for the rest of the family to have breakfast together. After each sequence and expression there is a voice that says how the boy is feeling, for example: "I feel sleepy". This is an activity that gives the students the opportunity to visualize every single action and feeling, so they know how to pronounce it and what it means.

The next activity will again use flashcards, which individually picture the different sequences in the story, each showing a certain feeling. The teacher mixes the flashcards and has each student take one. The video will play again, and as it goes the students should come up in front of the class and follow the sequences of the story, showing the appropriate flashcard and pronouncing the correct expression, for instance "I feel hungry!".

As a last activity the teacher introduces a game. He divides the students into two groups and shows them some items like a bottle, chocolate, a pillow, a blanket… and explains to them that he will say a sentence and they need to go and take the right item that can help them imitate and show that certain feeling. For example, when the teacher says "You feel thirsty!", the student should run and grab the bottle and utter "I feel thirsty!". The one who is faster will bring one point to his group. At the end of the game they can all get a candy as a reward!

Considering the material, we use a lot of realia in the form of videos, real items like pillows, chocolates, bottles, and pictures, so the students can feel, experience, and visualize the new words and phrases, which eases the whole learning process and makes it more interesting. The more interesting it is, the more efficient it will be. Although in this course vocabulary is emphasized over grammar, at the same time the students adopt the grammatical forms "I feel/You feel/He feels", which makes them able to distinguish between the pronouns and to add -s for the 3rd person singular using the Present Simple Tense.
As we can see in class, the students practice role plays and a lot of repetition, which again helps them to acquire the knowledge more easily. This gives a clear picture of the students' and the teacher's roles. While introducing the activities, teachers are guides, and then they take on the roles of facilitators, assessors and simple playmakers. Students, on the other hand, are imitators when repeating the words and phrases, and communicators when doing the role play. The teacher's attitude towards errors is quite relaxed: errors are corrected when students have the wrong pronunciation or misuse the pronouns "I/You", so they can be aware of the difference.

The example explained above is meant mainly for teaching beginners at a young age. If this material is meant for more advanced students, we will use more complex dialogues at a higher level of communication and more advanced material such as textbooks, where they can practice debates, role plays, and drills.

In the end, we need to conduct the evaluation or assessment. To the teacher, this determines how far the learner has progressed towards the language learning goal. To the learner, it is evidence of learning. We will evaluate the students with the use of anecdotal notes, as this course is meant for young beginners and their skills need to be assessed on a daily basis. Their knowledge needs to be retained by repetition of the old material every day before introducing the new, so these notes help us to show whether there is an improvement or whether we need to pay more attention to the repetition of the old material.

All things considered, language is constantly changing material, and its acquisition by the students depends on many various factors, so the role of the teacher is very important. That is why choosing the right approach with the corresponding techniques is essential for the learning process.
The ISS orbits the Earth at 51.6° to the Equator, following the direction of the Earth's rotation from west to east. The Earth itself is tilted at 23.4° to the plane of its orbit around the sun (sun vector), so the ISS can be orbiting at up to 75° to the sun vector. The ISS's altitude varies between 320 and 410 km, and it takes about 92 minutes to circle the Earth. The orbit inclination offers good coverage of most of Earth's surface.

Below is a screenshot from TsUP's ISS orbital tracking page showing the ISS's current position. As the map is laid out flat in a two-dimensional view, the orbital ground track appears as a wavy line (sine curve), but in reality the orbit is a flat circle around the Earth. Similar tracking maps are projected onto the wall screens at Houston MCC and TsUP, Moscow. The TsUP map shows latitude & longitude, as well as period (time to complete one orbit), degree of orbit, maximum & minimum heights, and the cities in which the main control centers are (or will be) located: Houston (Хьюстон), Ottawa (Оттава), Paris (Париж), Moscow (Москва), Baikonur (Байконур), Tokyo (Токио).

The ISS completes around 15½ orbits per 24 hours at a varying height of 350-400 km. It does not orbit over the same spot every 24 hours, though, as the Earth is also turning eastward. The Earth rotates approximately 15° of longitude (at the Equator) per hour, so during one ~92-minute orbit it turns through roughly 23°. The ISS's ground track will thus be further west on a map projection by about that amount after each orbit. From Star-Crossed Orbits by James Oberg:

It takes about 90 minutes for the satellite to complete one full circuit. When it gets back to its starting point, Earth's surface has moved eastward. It moves 360 degrees in 24 hours, or 15 degrees per hour. So after each satellite circuit of 1.5 hours, the Earth's surface has moved about 1.5 times 15 degrees, or 22.5 degrees, further to the east.

This westward drift also means that passes over Russian ground sites will move forward by a certain amount each 24 hours (an hour or so, as best as I can figure out). It is also the reason why the Station sighting times for cities vary from one day to the next, or don't appear at all for a week or so.

Here is a question and answer from NASA's "Ask MCC" segment which explains why 51.6° was chosen:

From: Patrick Donovan, of Cameron Park, Calif.
To: John Curry, flight director
Question: Why is the space station in a 51.6-degree inclined orbit instead of something less or something more?
Answer: Good question, Patrick! The short answer is that 51.6 degrees is the lowest inclination orbit into which the Russians can directly launch their Soyuz and Progress spacecraft. Both of these vehicles serve an important role in ISS operations. The Soyuz – there is always one attached to ISS – serves as an escape vehicle in the event the ISS would need to be abandoned in an emergency. The Progress spacecraft is basically a cargo version of the Soyuz and is used to bring up fresh food and supplies to the ISS. Ideally, one would want to launch due east from a launch site to maximize the cargo-to-orbit capability for a given launch vehicle. This is because the Earth, rotating from west to east, gives rockets a "free" head start in the right direction. Launching due east from Kennedy Space Center would place the shuttle in a 28.5-degree inclination orbit. Notice that the inclination is the same as the latitude of KSC. Launching due east from Russia's main launch site, Baikonur, would place spacecraft in a 45.6-degree inclination orbit – the launch site latitude.
However, doing so would also drop the lower stages of the boosters on China. To avoid this, the Russians crank up the minimum inclination to 51.6 degrees. Although the shuttle does trade some payload capacity for the propellant needed to make up the difference between launching at 28.5 degrees vs. 51.6 degrees, doing so allows the Russians to participate in the ISS program. It also has an added benefit for Earth sciences, since the ISS flies over more of the Earth's surface – about 75 percent, which covers about 95 percent of the inhabited lands – at the higher inclination orbit. I hope this answers your question, Patrick. Thanks for asking MCC! – Jim Cooney, ISS Trajectory Operations Officer (TOPO), Orbit 3 (planning shift) for STS-112

Latitude & longitude

For those (like me) who haven't done any geography since their school days, here is a quick reminder of what latitude & longitude (as indicated on the TsUP map above) are!

- Latitude (North-South): the lines parallel to the Equator (horizontal on the map), running east to west. Zero degrees is at the Equator, then the lines run to 90° north (+90°) at the North Pole, and 90° south (−90°) at the South Pole. (The Northern Hemisphere is positive, the Southern is negative.)
- Longitude (East-West): the vertical lines (meridians) running from the North Pole to the South Pole. These divide the sphere of the Earth into 360 slices. Zero degrees starts at the Prime Meridian, which runs through the observatory at Greenwich, England. The east and west meridians meet at 180° on the opposite side of the Earth; this line is also the International Date Line.

It takes 24 hours for any point on the Earth to traverse the full 360°:
- 24 hours of time = 360° of longitude.
- 1 hour of time = 15°.
- 4 minutes of time = 1°.
- 1 minute of time = 15′.
- 1 second of time = 15″.

Both latitude & longitude are divided into degrees (°), then minutes (′) and seconds (″). Degrees can also be expressed as decimal co-ordinates. (Degrees calculator at CSGNetwork.com)

(The original page includes here a table of latitude and longitude co-ordinates for each main ISS control center, all in the Northern Hemisphere; the table did not survive reproduction.)

At ~350-400 kilometers altitude, the ISS is above most, but not all, of the Earth's atmosphere, and tenuous though it is at the Station's height, some drag is produced as atoms collide with the ISS. The Station's orbit decays by a few hundred meters each 24 hours. The decay is variable, as solar activity causes changes in the density of the outer atmosphere. Periodic reboosts are thus necessary every few months, usually undertaken by a Progress cargo ship docked to the Russian segment, or an Orbiter when docked to Destiny. ISS orbit data is posted at the end of each daily On-Orbit Report at Spaceref. Here is data for 4 November 2004:

ISS Orbit (as of this morning, 6:37 a.m. EST [= epoch]):
- Mean altitude – 358.1 km
- Apogee height – 364.0 km
- Perigee height – 352.3 km
- Period – 91.70 min.
- Inclination (to Equator) – 51.64 deg
- Eccentricity – 0.0008684
- Solar Beta Angle – 32.8 deg (magnitude increasing)
- Orbits per 24-hr. day – 15.70
- Mean altitude loss in last 24 hours – 135 m
- Revolutions since FGB/Zarya launch (Nov. 98) – 34,036

See also:
- Spaceref diagram showing the rate of ISS orbital decay (and reboosts) from the first week of orbit to 4 November 2004.
- EarthKAM: Exploring Maps. Downloadable PDF documents
- Heavens Above: a resource for all satellite observing and data, including sighting times for the ISS
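To make the numbers above concrete, here is a small Python sketch (mine, not from the original article) that reproduces two of them: the ~92-minute period, derived from the mean altitude via Kepler's third law, and the ~22.5° westward ground-track shift per orbit described in the Oberg quote. The two physical constants are standard textbook values; the 358.1 km altitude is taken from the 4 November 2004 data above.

```python
import math

MU_EARTH = 398_600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_378.137       # Earth's equatorial radius, km

def orbital_period_min(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km  # semi-major axis of a circular orbit, km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

def ground_track_shift_deg(period_min: float) -> float:
    """Westward shift of the ground track per orbit: Earth turns 15 deg/hour."""
    return 15.0 * period_min / 60

period = orbital_period_min(358.1)  # mean altitude from the 4 Nov 2004 report
print(f"Period: {period:.1f} min")  # ~91.7 min, matching the On-Orbit Report
print(f"Shift: {ground_track_shift_deg(period):.1f} deg west per orbit")  # ~22.9 deg
```

The same 15°-per-hour figure drives the time/longitude list above: 360° of longitude divided by 24 hours gives 15° per hour, 1° per 4 minutes, and so on.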
Blood Types Around the World
by Anshool Deshmukh and Bhabna Banerjee via Visual Capitalist

The Most Widespread Blood Types, by Country

Blood is essential to the human body's functioning. It dispenses crucial nutrients throughout the body, exchanges oxygen and carbon dioxide, and carries our immune system's "militia" of white blood cells and antibodies to stave off infections. But not all blood is the same. The antigens in one's blood determine their blood type classification: there are eight common blood type groups and, with different combinations of antigens and classifications, 36 human blood type groups in total. Using data sourced from Wikipedia, we can map the most widespread blood types across the globe.

Overall Distribution of Blood Types

Of the 7.9 billion people living in the world, spread across 195 countries and 7 continents, the most common blood type is O+, with over 39% of the world's population falling under this classification. The rarest, meanwhile, is AB-, with only 0.40% of the population having this particular blood type. Breaking it down to the national level, these statistics begin to change. Since different genetic factors play a part in determining an individual's blood type, every country and region tells a different story about its people.

Regional Distribution of Blood Types

Asia: Even though O+ remains the most common blood type here, blood type B is relatively common too. Nearly 20% of China's population has this blood type, and it is also fairly common in India and other Central Asian countries. Comparatively, in some West Asian countries like Armenia and Azerbaijan, the population with blood type A+ outweighs any other.

The Americas: The O blood type is the most common globally and is carried by nearly 70% of South Americans. It is also the most common blood type in Canada and the United States. (The source includes a breakdown of the most common blood types in the U.S. by race; the table is not reproduced here.)

Africa: O+ is a strong blood group classification among African countries. Countries like Ghana, Libya, Congo and Egypt have more individuals with O- blood types than AB+.

Europe: The A blood group is common in Europe. Nearly 40% of the populations of Denmark, Norway, Austria, and Ukraine have this blood type.

Oceania: O+ and A+ are the dominant blood types in the Oceanic countries, with only Fiji having a substantial B+ blood type population.

Middle East: More than 41% of the population displays the O+ blood group type, with Lebanon being the only country with a strong O- and A- blood type population.

The Caribbean: Nearly half of people in Caribbean countries have the blood type O+, though Jamaica has B+ as the most common blood type group. (The source closes this section with a table classifying blood types by every region in the world, also not reproduced here.)

Unity in Diversity

Even though ethnicity and genetics play a vital role in determining a person's blood type, we can see many different blood types distributed worldwide. Blood provides an ideal opportunity for the study of human variation without cultural prejudice. It can be easily classified for many different genetically inherited blood typing systems. Our individuality is a factor that helps determine our life, choices, and personalities. But at the end of the day, commonalities like blood are what bring us together.

Reprinted with permission of Visual Capitalist.
Right to Participation and Right to Identity When a girl or a boy is born, birth registration is very important. The official record of the child’s existence gives the child an identity and legal recognition. Yet, the Right to Identity is much more than this. It is the possibility for each child to develop his or her individuality, as a unique and unrepeatable child. Identity means belonging to a culture, sharing a language and maintaining family ties. Without an identity, there are no rights! Participation is a right and one of the fundamental principles of the Convention on the Rights of the Child. To participate means to be part of something: to become interested, to reflect, to make decisions and to perform actions towards a common goal. If one wants to learn how to participate, the best thing to do is: to participate!
Supporting Language and Literacy in the Early Years
David K. Dickinson [email protected]
Lynch School of Education, Boston College
New York State Even Start Conference

Language & Literacy and Social Development & Self-Regulation

Language is social. It is used to:
– Create and deepen relationships.
– Solve problems.
– Share experience and knowledge.
– Play with friends.

Language helps with self-regulation:
– It helps with understanding the emotions of oneself and others.
– It can provide self-control strategies.

Strong language and literacy build a sense of competence and efficacy.

Language Is Fostered In and Helps Build Strong Relationships

Teachers' ratings of closeness to children are linked to children's rate of growth. A more positive emotional climate is linked to more extended and intellectually challenging conversations. Why?
– Teachers learn about children through extended conversations.
– Children feel valued.

Question: Language is key for the social/emotional development of children. What strategies would you model with parents to promote language in ways that will help children grow socially and emotionally?

Question: Adults learn about children through extended conversations with them. Give examples of what you do now to encourage conversation between you and the child. What are some new ideas on how you will do this?

Why Such a Focus on Literacy?

Reading failure is bad for you:
– Poor employment opportunities.
– More likely to be involved in crime.
– Poorer health.

Early difficulty has serious implications:
– Less likely to have academic success.
– More likely to drop out.

The children you serve are at increased risk of reading failure. Your program can have a huge impact.

Components of Early Literacy: Reading & Writing; Extended Discourse; Rich Vocabulary; Phonological Sensitivity; Phonemic Awareness; Letter Knowledge; Sound-Symbol Correspondence; Uses of Print; World Knowledge

Question: What new ideas did you gain that you might use to design future intentional instruction sessions with parents to implement the components of early literacy?

Question: What resources would you use to help explain the different components of early literacy to parents and how they all fit together?

Question: What are some strategies and activities staff can demonstrate for parents to reinforce their children's skill development in the various components of early literacy?

Oral Language: From Conversations to Literacy

Conversations:
– Short turns
– Check understanding
– Shared experiences & knowledge
– In the same location: gesture, eye gaze
– Intonation signals how you feel, marks importance

Reading:
– No turns! You monitor alone
– Cannot assume shared knowledge
– Not a shared location
– No voice to signal feeling or importance
– Rely on words, syntax (grammar), world knowledge

Occasions That Give Rise to Literacy-Supporting Language

Content that moves beyond the immediate present:
– recounting past and future events
– discussing objects that are not present
– considering ideas and language
– speculating, wondering
– pretending

Finding Time: Implications for Classrooms and Homes

Find special times to talk:
– Meal times
– Waiting
– Traveling
– Book reading
– Others…

Protect those special times:
– Set up and maintain routines for talking
– Model good listening by ignoring distractions
– Draw other children into the conversation

Question: What do you do now to promote literacy-supporting language with parents?

Question: How can you help parents use conversation that moves beyond the immediate present?

Question: Give specific examples of how, where and when this can happen.
Home & School Study of Language & Literacy
– Visited homes & classrooms from age 3.
– Audio-taped teachers and children throughout the day.
– Assessed language & literacy beginning in kindergarten.
– Continued to grade 7.

Dickinson & Tabors, 2001, Beginning Literacy with Language, Paul Brookes Publishing Co. (www.brookespublishing.com)

Predicting Children's Kindergarten Receptive Vocabulary Scores Using Home Control and Classroom Variables (from Dickinson & Tabors, Beginning Literacy with Language, Brookes Publishing)

Correlations Between Kindergarten Predictors and Grade Seven Reading & Oral Vocabulary (from Dickinson & Tabors, Beginning Literacy with Language, Brookes Publishing)

Closing Questions

As David Dickinson said, we play an important role in the effort to improve the language/literacy statistics of most-in-need families.

Question 1: What is your greatest accomplishment in moving your parents towards the Even Start parenting education literacy goals?

Question 2: How can you accomplish the goal of helping parents realize their role in supporting their child's language and literacy development?

Question 3: Based on what you learned in the video and from other research, what are your specific next steps for parenting education?
There are quite a few methods of irrigation that are classed as modern. These include surface irrigation, localised irrigation, drip irrigation, sprinkler irrigation, centre pivot, lateral move and sub irrigation.

In a surface irrigation system, water moves over the land in order to wet and infiltrate the soil. It is often called flood irrigation when the method results in complete or near-complete flooding of the land. Historically, this has been the most common method of irrigation.

Localised irrigation is where water is applied under low pressure in a pre-determined pattern through a piped network.

Drip irrigation is considered a very modern method of irrigation because it saves both water and fertiliser by allowing the water to drip slowly to the roots of the plants, either directly into the root zone or onto the soil surface, through a series of valves, pipes and tubing.

Sprinkler irrigation is where water is piped to one or more central locations within a field and is distributed through overhead high-pressure sprinklers to reach as much land as possible. These sprinklers can also be mounted on moving platforms that are connected to the water source by a hose.

Centre pivot irrigation consists of a series of pipes, each with a wheel about 1.5 metres in diameter, with water supplied through one end by a hose. Downward-facing sprinklers are placed at equal intervals along the pipes. Because the system sweeps out a circle, it is best suited to large, relatively flat fields rather than small or oddly shaped fields such as those in mountainous regions.

Sub irrigation, also known as seep irrigation, is mostly used in fields with high water tables. Through this method, the water table is artificially raised to allow the plants to be watered from below, directly at the roots. A system of pumps, pipes, canals, weirs and gates allows farmers to control the water level. This method is also commonly used in greenhouses for potted plants.
About This Chapter

Lyndon B. Johnson and the Vietnam War - Chapter Summary and Learning Objectives

Watch this chapter's video lessons to learn how Johnson's administration escalated U.S. involvement in the Vietnam War. Our experienced instructors can help you examine events factoring into this decision and sort through the sometimes confusing alliances between forces fighting in North and South Vietnam. By the end of this chapter, you should be familiar with the following:

- Leadership changes in South Vietnam
- Consequences of the Gulf of Tonkin crisis
- Significant U.S. air and ground campaigns
- North Vietnamese and U.S. military tactics
- American responses to the war at home

| Lesson | Objective |
|---|---|
| Gulf of Tonkin Crisis and Resolution: Events and Congressional Response | Unravel the 1964 Gulf of Tonkin crisis and explain the subsequent congressional resolution in response to these events. |
| A Revolving Door of Leadership in South Vietnam | Develop an understanding of the fluid and often questionable leadership in South Vietnam following the Diem assassination. |
| The Air War: Events and Key Operations | Detail Operation Rolling Thunder as well as other Johnson-era air campaigns. |
| Johnson Americanizes the War in Vietnam | Learn about Johnson's decision in March of 1965 to introduce American combat troops to the war and further escalate U.S. involvement in July. Outline his decision-making process from the time of Kennedy's death to July 1965. |
| Johnson's Military Strategies for American Success | Examine the military strategies and tactics for defeating North Vietnam, including Westmoreland's three-phase strategy, search-and-destroy missions, body counts, pacification, the Combined Action Program, the draft and more. |
| Opposition to the Vietnam War, 1965-1968 | Analyze the first wave of American dissent against the war in Vietnam. |
| People's Army of Vietnam and the National Liberation Front | Identify the leadership and tactics of the People's Army of Vietnam (PAVN) and the National Liberation Front (NLF), also known as the Viet Cong. |

1. Lyndon B. Johnson and the Vietnam War: Learning Objectives & Activities
Use the lessons in this chapter to understand President Lyndon B. Johnson's strategies related to the Vietnam War. Then apply your knowledge to the key questions and learning activities below.

2. Gulf of Tonkin Crisis and Resolution: Events and Congressional Response
The Gulf of Tonkin crisis brought the United States closer to war with North Vietnam. Learn about the event, the United States' response and its impact on broadening the Vietnam War in this lesson.

3. Primary Source: Gulf of Tonkin Resolution
The Gulf of Tonkin incident drew the United States into the Vietnam War in 1964. President Lyndon Baines Johnson greatly increased American participation by creating a resolution that easily passed through Congress.

4. A Revolving Door of Leadership in South Vietnam
Following the loss of Ngo Dinh Diem, the United States faced a cycle of leadership changes in South Vietnam. Learn about the revolving leadership and how it impacted the United States' effort in the Vietnam War in this lesson.

5. The Air War: Events and Key Operations
The United States fought a two-front war in Vietnam consisting of both air and ground operations. Learn about the American air war in Vietnam, including its overall objectives, main operations and successes and failures in this lesson.

6. Johnson Americanizes the War in Vietnam
The path toward the full Americanization of the Vietnam War was a gradual process.
Learn about President Lyndon B. Johnson's initial strategy, his response to North Vietnam and his decision for escalation in this video lesson.

7. Johnson's Military Strategies for American Success
From 1965 to 1967, President Lyndon B. Johnson was forced to develop military strategies for ensuring success for the United States in the Vietnam War. Learn about how Johnson expected to defeat North Vietnam in this lesson.

8. Opposition to the Vietnam War, 1965-1968
The antiwar movement from 1965 to 1968 was the first wave of American dissent against the Vietnam War. Learn about the movement, including its leadership, organizations, and impact on President Lyndon B. Johnson in this lesson.

9. People's Army of Vietnam and the National Liberation Front
The People's Army of Vietnam and the National Liberation Front represented North Vietnam during the Vietnam War. This lesson will focus on the overall ideology, the respective operations of the groups and the tactics employed against the United States.
Like carpenter bees, leaf-cutter bees are solitary bees that outfit their nests in tunnels in wood. Unlike carpenter bees, they're unable to chew out their own tunnels, and so rely on existing ones. This year, I've observed a large leaf-cutter - yet to be identified - reusing a tunnel bored in previous years by the large Eastern carpenter bee, Xylocopa virginica.

They use the leaf segments to line the tunnels. The leaves of every native woody plant in my garden have many of these arcs cut from them. The sizes of the arcs range widely, from dime-sized down to pencil-points, reflecting the different sizes of the bee species responsible.

Tiny arcs cut from the leaves of Wisteria frutescens in my backyard.

I speculate that different species of bees associate with different species of plants in my gardens. The thickness and texture of the leaves, their moisture content, and their chemical composition must all play a part. I've yet to locate any research on this; research, that is, that's not locked up behind a paywall by the scam that passes for most of scientific publishing.

Although I've observed the "damage" on leaves in my garden for years, this was the first time I witnessed the behavior. Even standing in the full sun, I got chills all over my body. I recognize now that the "bees with big green butts" I've seen flying around, but been unable to observe closely, let alone capture in a photograph, have been leaf-cutter bees. As a group, they're most easily identified by another difference: they carry pollen on the underside of their abdomen. A bee that has pollen, or fuzzy hairs, there will be a leaf-cutter bee.

An unidentified Megachile (leaf-cutter bee) I found in my garden.

Another behavior I observe among the leaf-cutters in my garden is that they tend to hold their abdomens above the line of their body, rather than below, as other bees do. Perhaps this is a behavioral adaptation to protect the pollen they collect. In any case, when I see a "bee with a perky butt," I know it's a leaf-cutter bee.

When they're not collecting leaves, they're collecting pollen. Having patches of different plant species that bloom at different times of the year is crucial to providing a continuous supply of food for both the adults and their young. An individual bee will visit different plant species (yes, I follow them to see what they're doing). And different leaf-cutter species prefer different flowers. All the plants I've observed them visit share a common trait: they have tight clusters of many small flowers; large, showy flowers hold no interest for the leaf-cutter bees.
Pre-service teachers using Project Learning Tree (PLT) in the K-6 classroom

This activity has benefited from input from faculty educators beyond the author through a review and suggestion process. This review took place as a part of a faculty professional development workshop where groups of faculty reviewed each other's activities and offered feedback and ideas for improvements. To learn more about the process On the Cutting Edge uses for activity review, see http://serc.carleton.edu/NAGTWorkshops/review.html.

This page first made public: Mar 17, 2010

Strengths of the activity:
- Uses science as a basis for a teaching lesson – promotes science literacy at all levels of education
- Connection to relevant issues in environmental/natural science
- Promotes outdoor learning
- PLT curriculum comes with a large number of resources – websites, alternate activities, reading connections

Skills and concepts that students must have mastered:
- Students must understand the science content behind the activity they will present
- Students must be able to adapt a lesson to a particular grade level

How the activity is situated in the course

Content/concepts goals for this activity:
- Contributes to environmental education and general science literacy
- Students gain experience communicating scientific information
- Teaching science content prepares students for subsequent science methods coursework
- Students appreciate the level of interest and prior knowledge elementary students have with respect to science

Higher-order thinking skills goals for this activity:
- Break down and modify content learned in class in order to present science lessons for children
- Collaborate with classmates and the elementary school classroom teacher when preparing the lesson
- Assess the activity for its connection to course content

Other skills goals for this activity:
- Understand basic concepts in physics, chemistry, earth science, and biology
- Analyze problems using scientific processes
- Investigate issues using critical thinking
- Students understand the importance of the goal of science literacy for citizens and understand the role of the elementary teacher in achieving that goal

Description of the activity/assignment

The PLT program's stated aim is to "...use the forest as a 'window on the world' to increase students' understanding of our complex environment, to stimulate critical and creative thinking, to develop the ability to make informed decisions on environmental issues and to instill the confidence and commitment to take responsible action on behalf of the environment."

All students must learn the philosophy and goals of the PLT program and become familiar with the type of activities that the program promotes. Once the students become familiar with PLT, they must choose an activity from the resource book that addresses a topic that has been covered in class. They must discuss their choices with the classroom teacher and get his/her feedback about the activity they have suggested. After the classroom teacher and the student agree to a plan, the student must demonstrate mastery of the content and demonstrate the activity to their classmates. The student will then present this material to the elementary class. The student will be required to write a reflection, guided by a rubric, about the experience and about how it relates to the content that has been presented in class.

Determining whether students have met the goals

Students are required to write a Project Description Form.
This currently requires them to indicate the place where they have chosen to work, the name of their supervisor, and a short description of the activity. This form must be signed by the supervisor, or an email must be attached indicating the supervisor's willingness to work with the student. In addition to the project description form, the students must answer questions which require them to reflect on: the partner organization, how their project contributes to the goals of the partner organization, how the project relates to the science content of the course, and what they hope to gain by completing the activity. At the end of the activity, the students write a reflection. In their reflections, the students provide a description of their project plan (which may be a lesson plan) and indicate the connection it has with the content/skills learned in the classroom. The students also describe the impact the activity had on their own learning.

Download teaching materials and tips
- Activity Description/Assignment: Project Description form (Microsoft Word 34kB Feb11 10)
It may sound like some kind of prehistoric creature, but riparian refers to the strips of green vegetation alongside streams, creeks, rivers, lakes, sloughs and other bodies of water. Riparian areas are found across Alberta: in northern boreal forest, parkland, foothills, mountains and prairie grasslands. Although riparian areas make up a small percentage of the landscape, they are definitely a big deal. Riparian areas have far-reaching benefits for water, land, livestock, wildlife and humans.

Healthy riparian areas act as filters to keep sediment and pollutants from entering the water, resulting in clean water for livestock and humans and quality habitat for fish. Healthy riparian areas produce forage on a stable basis, which helps reduce the impacts of drought, and act as a buffer system during floods. Riparian areas are good for land too, acting as a sponge to collect and slowly release needed moisture over the landscape. Vegetation in riparian areas helps prevent soil erosion on the banks of water bodies. Plant root systems reduce erosion and stabilize shorelines, and create an abundance of forage and shelter for livestock and wildlife.

Riparian areas are frequently part of the rangelands that livestock in Alberta graze upon. Rangeland is land that supports vegetation for animal grazing and is managed as a natural ecosystem. In Alberta, it is estimated that rangelands provide forage to about 14 per cent of the Alberta beef cattle herd.

- Did you know that there are about 8 million acres of grazing land in Alberta?
- The first domestic livestock arrived in Alberta with the fur trade, and ranching became established by the 1870s.
- A cow eats about 12 kg of forage a day (measured as dry material) and requires 40 to 60 litres of water to digest that forage.

Maintaining the balance

An unhealthy riparian area may show the following symptoms:
- Slumped shorelines
- Absence of vegetation and wildlife
- Murky looking water with sediment buildup

These are indicators that the health of the landscape is not being maintained in a balanced way. It is important to balance the health of the land, vegetation and water with grazing needs. Today, grazing is viewed as a natural process and a tool for perpetuating rangeland ecosystems, to be managed along with other factors like fire, disturbance, and human activity. The Grazing Lease Stewardship Code of Practice has helpful information.

What can you do?

When it comes to grazing especially, there are several range management practices that can help protect natural landscapes and riparian areas:
- Don't overgraze – leave enough carryover vegetation to protect soil, conserve moisture and trap sediment;
- Distribute livestock evenly – don't allow livestock to linger and overuse an area;
- Use rotational grazing – graze one of several pastures while the others are rested before re-grazing;
- Plan for periods of rest on the landscape to assist in restoring and maintaining a healthy riparian area;
- Avoid or minimize grazing during fragile or vulnerable periods – for example, in late summer / early autumn grasses have dried out, while plants within the riparian zone are still green and looking mighty tasty to cows... making them vulnerable to overuse.

Cows and Fish is a non-profit organization that educates Albertans on the proper management of riparian areas. Looking for a more hands-on experience?
You can set up a workshop, presentation and/or training opportunity with Cows and Fish.

Off-highway vehicles (OHVs), including quads, trikes and off-road motorcycles, can also cause significant lasting damage to the landscape. Keep 100 m away from water in a Public Land Use Zone, or 30 m anywhere else. Keep OHV wheels out of streams, rivers and lakes by using established stream crossing bridges instead. Don't rip up riparian areas – keep the green zone healthy for long-term benefits!
Sleep is defined as a state of unconsciousness from which a person can be aroused. In this state, the brain is relatively more responsive to internal stimuli than external stimuli. Sleep is essential for the normal, healthy functioning of the human body. It is a complicated physiological phenomenon that scientists do not fully understand.

Historically, sleep was thought to be a passive state. However, sleep is now known to be a dynamic process, and our brains are active during sleep. Sleep affects our physical and mental health and is essential for the normal functioning of all the systems of our body, including the immune system. The effect of sleep on the immune system affects one's ability to fight disease and endure sickness.

States of brain activity during sleep and wakefulness result from different activating and inhibiting forces that are generated within the brain. Neurotransmitters (chemicals involved in nerve signaling) control whether one is asleep or awake by acting on nerve cells (neurons) in different parts of the brain. Neurons located in the brainstem actively cause sleep by inhibiting other parts of the brain that keep a person awake.

Here are some basic sleep tips, brought to us by Tasty Placement, which should give us the building blocks to set up our sleep cycle just as it's supposed to be. Take a brief look at the infographic:

Via: Daily Infographic
Eye Health and Vision Conditions

The senses play a very important role in our lives. On a daily basis we can see, hear, smell, touch and taste things that we happen upon. These five senses help us make sense of the world. For example, with our sight we see things that we know will make us happy, sad or even be a potential danger. The sense of sight is given to us by our eyes. The eye is a remarkable organ, which transmits images from the world to our brain. Once sent to the brain, an image is processed and understood. The eyes are the gateway from our lives to the outside world. Because of this, the eyes have a vital role in our lives.

- National Eye Institute: official government website providing a wide range of information on eye health, eye care and other important topics.
- Eyes for Kids: educational page containing information on the eye with a focus on kids and education.
- Parts of the Eye: informative site aimed at kids to help teach them the various parts of the eye and their function.
- Common Eye Disorders: helpful summary of the various types of disorders and concerns associated with the eye.
- Eye Simulator: interesting information on how the eye operates, with a useful eye simulator page.

However, the eyes need to be functioning properly in order for us to utilize our sense of sight. When an eye does not function properly, this sense cannot operate efficiently. There are a number of eye health conditions that can create vision problems for people. They range from inconvenient problems, such as color blindness, to more serious problems which can lead to permanent vision loss, such as Macular Degeneration and Glaucoma. Here are some helpful websites with information on a number of common eye disorders. Please browse these resources to learn more about them.

- Astigmatism Information: informative site providing information on symptoms, diagnosis and treatment options for the eye disorder.
- Definition of Astigmatism: brief explanation and definition of what the eye condition is and how it affects people.
- What is Astigmatism?: useful site containing an explanation of the eye disorder and what can help it.
- Cataract Overview: helpful page discussing the problems people encounter when they have cataracts and the impact on vision.
- Cataracts and Senior Health: informative overview of the various aspects of cataracts in senior citizens.
- Cataract Information: information from the CDC providing a wide range of topics pertaining to having cataracts.
- Cataracts: useful educational page with information on the eye condition and treatment options.
- Color Blindness: information on what causes color blindness in people and how it affects the way they see.
- Color Blindness Overview: useful site providing people with information on the causes, prevention and treatment of color blindness.
- What is Color Blindness?: description and information on the eye condition and what can be done to treat the disorder.
- Diplopia: informative site providing an overview of the condition commonly known as double vision.
- Correcting Double Vision: information on new techniques which can be used to correct double vision problems.
- Diplopia Defined: brief summary and definition of the problems associated with double vision.
- Glaucoma Information: informative government site with information on Glaucoma and its effect on the eye.
- Glaucoma Risks: useful site with information on how you may be at risk of having Glaucoma.
- Information on Glaucoma: eye health and the effects of having Glaucoma are discussed on this useful website.
- Glaucoma Video: helpful site providing a video and explanation of what Glaucoma is and how it affects people's vision.

Eye Health Resources
- Healthy Eyes: useful page containing information and resources on how to have healthy eyes and vision.
- Vision Resources: site providing some additional resources for healthy eyes and vision.
- Eye Health Resources: collection of links and information on eye health.
- Eye Health Resource Guide: helpful information and guide for people looking for more information on having and keeping healthy eyes and vision.
- Eye Resources: resourceful set of links on a number of useful topics about eye care.

Our eyes and good vision are an important part of our lives. While we sometimes take good vision and eyesight for granted, it is important to protect the sight we do have. While medicine and corrective lenses can help in many cases, it is extremely important that you protect the vision you have.
Walker Lake lies within a topographically closed basin in west-central Nevada and is the terminus of the Walker River. Much of the streamflow in the Walker River is diverted for irrigation, which has contributed to a decline in lake-surface altitude of about 150 feet and an increase in dissolved solids from 2,500 to 16,000 milligrams per liter in Walker Lake since 1882. The increase in salinity threatens the fresh-water ecosystem and survival of the Lahontan cutthroat trout, a species listed as threatened under the Endangered Species Act. Accurately determining the bathymetry and the relations between lake-surface altitude, surface area, and storage volume is part of a study to improve the water budget for Walker Lake. This report describes the updated bathymetry of Walker Lake, a comparison of results from this study and a study by Rush in 1970, and an estimate of the 1882 lake-surface altitude.

Bathymetry was measured using a single-beam echosounder coupled to a differentially corrected global positioning system. Lake depth was subtracted from the lake-surface altitude to calculate the altitude of the lake bottom. A Lidar (light detection and ranging) survey and high-resolution aerial imagery were used to create digital elevation models around Walker Lake. The altitude of the lake bottom and the digital elevation models were merged to create a single map showing land-surface altitude contours delineating areas that are currently or were once submerged by Walker Lake. Surface area and storage volume for lake-surface altitudes of 3,851.5-4,120 feet were calculated with 3-D surface-analysis software.

Walker Lake is oval shaped with a north-south trending long axis. On June 28, 2005, the lake-surface altitude was 3,935.6 feet, maximum depth was 86.3 feet, and the surface area was 32,190 acres. The minimum altitude of the lake bottom from discrete point depths is 3,849.3 feet near the center of Walker Lake. The lake bottom is remarkably smooth except for mounds near the shore and river mouth that could be boulders, tree stumps, logs, or other submerged objects. The echosounder detected what appeared to be mounds in the deepest parts of Walker Lake, miles from the shore and river mouth. However, side-scan sonar and divers did not confirm the presence of mounds. Anomalies occur in two northwest-trending groups in northern and southern Walker Lake. It is hypothesized that some anomalies indicate spring discharge along faults, based on tufa-like rocks that were observed and the northwest trend parallel to and in proximity to mapped faults. Also, evaporation measured from Walker Lake is about 50 percent more than the previous estimate, indicating more water is flowing into the lake from sources other than the Walker River. Additional studies need to be done to determine what the anomalies are and whether they are related to the hydrology of Walker Lake.

Most differences in surface area and storage volume between this study and the study by Rush in 1970 were less than 1 percent. The largest differences occur at lake-surface altitudes less than 3,916 feet. In general, relations between lake-surface altitude, surface area, and storage volume from Rush's study and this study are nearly identical throughout most of the range in lake-surface altitude. The lake-surface altitude in 1882 was estimated to be between 4,080 feet and 4,086 feet, with a probable altitude of 4,082 feet. This estimate compares well with two previous estimates of 4,083 feet and 4,086 feet.
Researchers believe the historic highstand of Walker Lake occurred in 1868 and estimate that the highstand was between 4,089 feet and 4,108 feet. By 1882, Mason Valley was predominantly agricultural. The 7- to 26-foot decline in lake-surface altitude between 1868 and 1882 could be partially due to irrigation diversions during this time.
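The stage-area-volume relations described above were derived with 3-D surface-analysis software from the merged lake-bottom and land-surface altitude grids. As an illustration of the underlying hypsometric computation only (not the study's actual software or data), here is a minimal Python sketch; the grid values, cell size, and stages are hypothetical.

```python
import numpy as np

def stage_area_volume(bottom_alt_ft, cell_area_ft2, stages_ft):
    """Stage-area-volume (hypsometric) table from a gridded lake-bottom DEM.

    bottom_alt_ft : 2-D array of lake-bottom altitudes in feet
    cell_area_ft2 : area of one grid cell in square feet
    stages_ft     : lake-surface altitudes at which to evaluate the lake
    """
    table = []
    for stage in stages_ft:
        depth = stage - bottom_alt_ft                # water depth at each cell
        wet = np.where(depth > 0.0, depth, 0.0)      # dry cells contribute nothing
        area_acres = (wet > 0).sum() * cell_area_ft2 / 43_560.0
        volume_acre_ft = wet.sum() * cell_area_ft2 / 43_560.0
        table.append((stage, area_acres, volume_acre_ft))
    return table

# Hypothetical 10 ft x 10 ft grid of bottom altitudes (feet above datum)
rng = np.random.default_rng(0)
bottom = 3_850.0 + 80.0 * rng.random((200, 200))

for stage, area, volume in stage_area_volume(bottom, 100.0, [3_880.0, 3_935.6]):
    print(f"stage {stage:,.1f} ft: area {area:,.0f} acres, volume {volume:,.0f} acre-ft")
```

The subtraction in the first step mirrors how the report converted echosounder depths to lake-bottom altitudes: depth is subtracted from the known lake-surface altitude.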
Optical Coherence Tomography, or 'OCT', is a technique for obtaining sub-surface images of translucent or opaque materials at a resolution equivalent to a low-power microscope. It is effectively 'optical ultrasound', imaging reflections from within tissue to provide cross-sectional images.

OCT is attracting interest among the medical community because it provides tissue morphology imagery at much higher resolution (better than 10 µm) than other imaging modalities such as MRI or ultrasound. The key benefits of OCT are:
• Live sub-surface images at near-microscopic resolution
• Instant, direct imaging of tissue morphology
• No preparation of the sample or subject
• No ionizing radiation

OCT delivers high resolution because it is based on light, rather than sound or radio frequency. An optical beam is directed at the tissue, and a small portion of this light that reflects from sub-surface features is collected. Note that most light is not reflected but, rather, scatters off at large angles. In conventional imaging, this diffusely scattered light contributes background that obscures an image. However, in OCT, optical coherence is used to record the optical path length of received photons, allowing rejection of most photons that scatter multiple times before detection. Thus OCT can build up clear 3D images of thick samples by rejecting background signal while collecting light directly reflected from surfaces of interest.

Within the range of noninvasive three-dimensional imaging techniques that have been introduced to the medical research community, OCT as an echo technique is similar to ultrasound imaging. Other medical imaging techniques such as computerized axial tomography, magnetic resonance imaging, or positron emission tomography do not utilize the echo-location principle.

The technique is limited to imaging 1 to 2 mm below the surface in biological tissue, because at greater depths the proportion of light that escapes without scattering is too small to be detected. No special preparation of a biological specimen is required, and images can be obtained 'non-contact' or through a transparent window or membrane. It is also important to note that the laser output from the instruments is low – eye-safe near-infra-red light is used – and no damage to the sample is therefore likely.
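A rough way to see why a light-based echo technique reaches near-microscopic resolution is the standard low-coherence estimate for a source with a Gaussian spectrum: the axial resolution equals the coherence length, (2 ln 2 / π) · λ0² / Δλ. The sketch below applies this textbook formula; the 840 nm / 50 nm source values are assumed for illustration and are not taken from any particular instrument.

```python
import math

def oct_axial_resolution_m(center_wavelength_m: float, bandwidth_m: float) -> float:
    """Axial resolution of OCT for a Gaussian source spectrum.

    Textbook low-coherence interferometry result:
        delta_z = (2 * ln 2 / pi) * lambda0**2 / delta_lambda
    Broader bandwidth -> shorter coherence length -> finer axial resolution.
    """
    return (2.0 * math.log(2.0) / math.pi) * center_wavelength_m**2 / bandwidth_m

# Assumed near-infrared source: 840 nm center wavelength, 50 nm bandwidth
dz = oct_axial_resolution_m(840e-9, 50e-9)
print(f"Axial resolution: {dz * 1e6:.1f} um")  # about 6.2 um, consistent with 'better than 10 um'
```

Note that this depth resolution comes from the source spectrum rather than from focusing optics, which is why OCT can resolve features far finer than ultrasound or MRI at shallow depths.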
Amit Singh, Independent researcher

'Diversity' is the word that describes India in its true sense. With a huge and diverse population come peculiarly strong ethnic ties and ethnic divisions. Owing to the linguistic and regional heterogeneity of the population, the constitutional system of India was made partly federal. The Indian political system, which was centered on major language groups, later suffered the tyranny of the majority and had to adapt to very strong pressure from the larger language groups. After the first post-independence election, the electoral system was adapted to serve the interests of the Hindu majority. In such a system, it was nearly impossible for Muslims to organize their own political parties and to get representation in legislatures (Vanhanen 2004). They were able to get representation only through some major parties. In addition, tribal groups and the Dalits faced the same dilemma. They have constitutional safeguards but, because of the British-made and British-influenced electoral system, it has been difficult for them to gain representation in legislatures through their own parties, apart from some minor developments in the recent political scene.

The Indian party system is more or less based on the caste system; political parties tend to support their ethnic affiliations, thus creating an 'ethnic rift' in society. This rift often takes the form of ethnic and communal conflict. Further aggravating the situation, parties take caste divisions into account while nominating candidates for elections; in particular, some parties nominate from their own caste groups. This has resulted in the emergence of regional parties formed on the basis of 'ethnic nepotism' (Vanhanen 2004). Against this background, caste and other ethnic interests become the principal catalysts in Indian politics.

In response to this situation, India's democratic institutional structure has evolved and is struggling hard to accommodate various ethnic and minority strivings. Many aspects of the Indian political system have become adapted to the requirements of ethnic groups, but not sufficiently in all matters and in all parts of the country, which indicates that it has not been possible to solve all ethnic conflicts through democratic institutions, or that democratic institutions are not sufficiently adapted to the requirements of the ethnic rift (Ibid). Examples of such failure are apparent in the ongoing ethnic conflicts in various parts of India: violent separatist movements of Muslims in Kashmir; the Naxalite movement in different parts of the country; occasional communal violence between Hindus and Muslims and, to a lesser degree, with other religious groups (Christians and Sikhs), too; territorial conflicts between language groups (the movement against Hindi in South India); and continual conflict between caste groups, particularly between Hindus and untouchables but also between the upper castes and the other backward castes (Ibid). Not to mention the occasional outbursts of the Shiv Sena and Maharashtra Navnirman Sena against north Indians, which run the risk of dividing people along the lines of geographical regions.

Though India's federal system is struggling to mitigate ethnic conflict to some extent, militancy in Kashmir and the seven sister states has remained an unsolved problem. In order to be more accommodative and conciliatory, the Indian political system needs to be made more responsive to ethnic aspirations. The federal system can be strengthened and made more flexible.
More autonomy can be extended to the various ethnic groups. The tribal state of Assam would need extensive forms of autonomy. In addition, there is a need to establish autonomous territorial units within states for tribal, linguistic, and religious minorities. Along with this, the electoral system needs to be made proportional to the populations of the ethnic groups; it could serve the needs of an ethnically heterogeneous society better than the present system. Ethnic conflicts cannot be completely eliminated; however, conflict can be mitigated by providing effective institutional channels for the expression of ethnic demands and competition (Lijphart 1996, Bachal 1997, cited in Vanhanen 2004). The electoral system, by providing better representation, can bridge the increasing divide among various ethnic groups. In addition, more channels for the fearless expression of repressed ethnic and minority sentiments need to be created, because merely electing political candidates does not guarantee the desired development and progress of the ethnic community, as has been seen in the cases of U.P. and Bihar. If the conflict inherent in India's electoral system, democracy, and society is not redressed in time, India sooner or later runs the risk of disintegrating along the lines of ethnicity.

Vanhanen, Tatu (2004), 'Problems of Democracy in Ethnically Divided South Asian Countries', paper presented at the 18th European Conference on Modern South Asian Studies, Sweden.
Many regions in the EU-27 use more nitrogen and phosphorus in agriculture than is required. The main sources of nitrogen and phosphorus are fertilisers and manure. These nutrients are typically removed from the ground by crops and by the animals which eat crops. However, any surplus can enter the environment, affecting both air quality and water systems.

Air pollutants from agriculture include ammonia, nitrous oxide and methane emissions, which arise from livestock, including non-grazing livestock such as pigs and poultry. They also come from the housing, storage and application of manure as fertiliser and from excess artificial fertilisers in soil. Water pollutants include nitrates, phosphates and organically bound nitrogen and phosphorus. They can leach from stored manure and from runoff from agricultural soils into groundwater and surface waters.

Funded by the EU, researchers developed MITERRA-EUROPE using information from existing models (GAINS and CAPRI) and a new nitrogen-leaching model. MITERRA-EUROPE applies a uniform approach, allowing comparisons to be made between different EU countries, such as the effects of different policies to reduce nitrogen pollution. From their findings, the researchers concluded that a combination of measures to abate ammonia and nitrous oxide emissions and nitrogen leaching was the most effective way to reduce both nitrogen and phosphorus pollution.

Using the model, the researchers estimated for the EU-27 that:
• there are large differences within the EU-27 in nitrogen surpluses, ammonia and nitrous oxide emissions, and nitrogen leaching from agricultural land
• the distribution of ammonia and nitrous oxide emissions and nitrogen leaching approximately matches the distribution of livestock across the EU-27
• intensive agricultural systems in northwest Europe have the highest emissions, compared with lower emissions from non-intensive agricultural systems in South and Central Europe
• major sources of ammonia are dairy cattle (27 percent), other cattle (26 percent) and pigs (25 percent)
• the largest sources of nitrous oxide are, in order, fertiliser application, grazing, manure housing, storage and application
• ammonia emissions are directly related to livestock density and manure management systems, whereas soil type and the application of fertilisers and other sources of nitrogen also influence nitrogen leaching and ammonia emissions

The researchers used the model to assess the efficiency of various policy measures, such as those from the Nitrates Directive, to reduce pollution from ammonia emissions and nitrate leaching. The study estimated the effects of measures to decrease single pollutants and of packages of measures to reduce more than one pollutant. The results suggest that single measures to reduce ammonia emissions also increase nitrous oxide emissions and nitrogen leaching, but methods to decrease nitrogen leaching also decrease the emissions of ammonia and nitrous oxide.
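MITERRA-EUROPE itself is not reproduced here, but the soil-surface nutrient budget at the heart of such models is simple to sketch: the surplus equals inputs minus crop removal, and the surplus is then partitioned across loss pathways. The toy example below is an assumption-laden illustration; the field values and fixed loss fractions are invented, whereas the real model derives them from soil, climate, livestock and management data.

```python
def nitrogen_surplus(fertiliser, manure, deposition, fixation, crop_uptake):
    """Soil-surface nitrogen balance in kg N per hectare per year.

    A positive surplus is nitrogen available for loss to the environment
    as ammonia, nitrous oxide, or leached nitrate.
    """
    return (fertiliser + manure + deposition + fixation) - crop_uptake

def partition_surplus(surplus, f_ammonia=0.30, f_n2o=0.05, f_leaching=0.40):
    """Split a surplus across loss pathways using assumed fixed fractions.

    Real emission and leaching fractions vary with soil type, climate,
    livestock density and manure management, as the study's results show.
    """
    remainder = 1.0 - f_ammonia - f_n2o - f_leaching
    return {
        "ammonia": surplus * f_ammonia,
        "nitrous oxide": surplus * f_n2o,
        "leaching": surplus * f_leaching,
        "soil storage / other": surplus * remainder,
    }

# Invented example field: inputs and crop uptake in kg N/ha/yr
surplus = nitrogen_surplus(fertiliser=120, manure=80, deposition=15,
                           fixation=10, crop_uptake=170)
print(f"Nitrogen surplus: {surplus} kg N/ha/yr")
for pathway, loss in partition_surplus(surplus).items():
    print(f"  {pathway}: {loss:.1f} kg N/ha/yr")
```

A sketch like this also makes the study's trade-off visible: a measure that lowers one pathway's fraction (say, ammonia) leaves more of the surplus available to the other pathways unless the surplus itself is reduced.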
Companion planting is an idea as old as gardening itself. When planted together, certain combinations of plants thrive, are healthier and have higher crop yields. These plants, be they herbs, flowers, fruit or vegetables, are said to "like" each other and be good companions. On the flipside, some combinations prove incompatible and make for "bad" companion plants. Seeing as plants don't have feelings (despite some being known to sulk!), what does this really mean? Numerous lists of companion plants are widely available, many based on folklore and ideas passed on from gardener to gardener. Is there any scientific truth to companion planting?

One of the earliest examples of companion planting was put into practice by the Native Americans, who grew corn, beans and squash together and called this "three sisters" planting. Grown together, the three crops thrive. The corn provides shelter and a frame for the beans to climb up. The beans, being legumes, take nitrogen from the air and, with the aid of bacteria on their roots, make the nitrogen available in the soil for all three plants to use. The large leaves of the squash plants act as a living mulch, shading the soil, keeping in moisture and preventing weeds from growing. So, this plant association is very successful and these three plants are "good companions". They have similar growing needs, assist each other's growth and don't outcompete each other for nutrients, so all three thrive. This principle is true for all good plant associations and is the basis for companion planting.

Companion planting takes the "SNAP" approach to growing: plants are grown together to provide shelter (S), nectar (N) and alternative (A) food sources, such as pollen (P), to attract beneficial insects for pollination and to prey on insect pests. Tall plants or those with dense canopies provide shelter for lower-growing plants that need shade or protection from wind. Others provide nectar for insects that pollinate surrounding plants, leading to increased flowers, fruit and vegetables. The blue-flowered phacelia and lavender, for example, provide nectar for bees, which are valuable pollinators, and for hoverflies, whose larvae eat aphids.

Some plants can be used to mask other, more desirable plants. The masking plant produces scents that confuse or deter insect pests so they won't attack the crop plants. For instance, planting garlic near roses may deter aphids from attacking the roses. Many herbs, including tansy and hyssop, produce strong scents that repel insects, so are useful when planted among vegetable crops. The use of marigolds to repel pests is one of the most accepted principles of companion planting. The African marigold's distinctive smell repels some insect pests, such as aphids, but attracts hoverflies that eat aphids. In addition, the roots give off a toxic chemical called thiophene, which kills nematodes, minute roundworms that live in the soil and damage many plant species. So, marigolds are good companion plants in the vegetable garden.

Decoy or sacrificial plants can be used to lure insect pests away from crop plants. Nasturtiums are often used, as they are attractive to a number of insects and so may be attacked in preference to the crop plant. Insects such as white butterflies lay their eggs on the nasturtium leaves, and the caterpillars eat these leaves instead of those of the crop plant. This works best when the decoy crop surrounds the preferred crop. This technique is also known as "trap cropping".
Some plants are beneficial to their companions as they improve the general growing conditions. Members of the legume family, such as peas, beans and clover, are examples. They make nitrogen, a plant nutrient, available in the soil for surrounding plants to use, so all benefit. This is called symbiotic (mutually beneficial) nitrogen fixation.

Providing plants and habitats in the garden where beneficial predatory insects can thrive will also be useful to all plants. These areas are sometimes called "refugia". Many species of wasps, spiders, beetles, ladybirds and some flies are good predators of insect pests such as aphids, and should be encouraged by planting flowers in the Asteraceae family, such as daisies or cosmos, to attract them to the garden. By creating habitats that support beneficial insects, you not only reduce pest damage to your crops but also need to use fewer pesticides, which is better for the environment.

The secret to companion planting is biodiversity. Think old-fashioned cottage gardens where vegetables, herbs, fruit and flowers were planted together. By planting a rich mixture of flowering plants and herbs among vegetables and fruit trees, you encourage a range of beneficial insects and birds to visit your garden, and they are the best companions for your plants.