Scientific Name: Propithecus diadema
The Diademed sifaka is the largest sifaka species. Due to its bright, distinctive coloring, it isn’t confused with any other lemur species!
These lemurs are active during the day and found mostly living within the trees. Their diet changes depending on the season, but is mainly made up of ripe fruits, seeds, flowers and young leaves. The Diademed sifaka lives in groups of 2-8 individuals. Females are dominant, meaning that they have the power in the group.
The Diademed sifaka is listed as Critically Endangered on the IUCN Red List, with the number of individuals decreasing. Where the species is found, numbers are very low and groups are widely separated from one another.
The Diademed sifaka lives in the rainforests of eastern Madagascar. It is thought to be the most widely distributed sifaka species despite its numbers being low!
The main threats these sifakas face are habitat loss through slash-and-burn agriculture and timber (wood) extraction. Groups of sifakas within the Tsinjoarivo region have also seen a decrease in numbers due to illegal rum production. Some individuals are also kept as pets in Madagascar, which reduces populations because these animals are not breeding in the wild.
LCN Members Working to Save the Diademed Sifaka
Where to See these Lemurs in Madagascar
- Andasibe National Park (150km from Antananarivo)
- Anjozorobe Forest (100km from Antananarivo)
- Marotandrano Special Reserve
- Ambatovaky Special Reserve
Research the Diademed Sifaka in the Scientific Literature |
The environmental impact of mining activities is a key issue for the industry. The Surface Mining Control and Reclamation Act, enacted in 1977, provides many regulations to ensure that mine sites are operated, and any environmental damage is remediated, in a responsible way. Read Mining and the Environment: What Happens When A Mine Closes? to learn about other U.S. regulations governing the mining industry and some of the issues they address. Remediation is just one part of reducing the environmental impact of mining; here we present a summary of some projects underway to initiate more responsible mining technologies, or “green mining.”
In the Mining-technology.com article, Eco-friendly Mining Trends for 2014, Joshua Kirkey, Communications Advisor for Natural Resources Canada (NRC), defines green mining as “technologies, best practices and mine processes that are implemented as a means to reduce the environmental impacts associated with the extraction and processing of metals and minerals. Examples include the reduction of greenhouse gases, selective mining approaches to reduce the ecological footprint, and reduction in chemical use. Green mining technologies and practices offer superior performance with respect to energy efficiency, greenhouse gas emissions and the use of chemicals.” The article points out that green technologies are especially needed to address the tremendous amount of energy and water used by traditional mining methods and to improve mine closure processes, and that these practices need to be developed in a way that integrates well with current technologies.
MIT’s Mission 2016: The Future of Strategic Natural Resources website addresses the need for more widespread Environmentally Sensitive “Green” Mining standards and techniques. The site presents a plan for improving efficiency and decreasing the environmental impact of mining, broken up into the following categories:
- Shutting down illegal and unregulated mines
- Choosing environmentally friendly general mining processes. In situ mining, for example, can be more environmentally friendly than underground mining and is cheaper than many mining methods.
- Implementing recently discovered green mining technologies. These include mining from tailings, dust suppression techniques, liquid membrane emulsion technology, sulphuric acid leaching extraction process, impermeable tailings storage, and improved energy efficiency by using better ventilation systems and diesel engines
- Cleaning up the sites of shut-down mines using R2 technology to recover metals while improving the condition of the land
- Reevaluating cut-off grades to reduce waste and increase efficiency
- Research and development of green mining technology in the areas of processing, clean water, and energy efficiency.
Mining Global’s article, Top 10 Ways to Make Mines More Environmentally Friendly echoes some of the suggestions put forth by Mission 2016:
- Closing illegal and unregulated mines
- Scrap mining and recycling
- Better legislation and regulations
- Improving environmental performance
- Accurate tallying of toxic mining waste
- Building from reusable waste
- Closing and reclaiming sites of shut-down mines
- Investing in research and development of Green Mining Technology
- Replenishing the environment
- Improving the efficiency of manufacturing processes.
In our next post we’ll take a closer look at Mission 2016’s outline for implementing environmentally-friendly mining technologies, including a few specific examples of green technologies in action, and we’ll demonstrate how handheld XRF technology is an important tool in achieving some green mining objectives. |
Here’s a way to remotely light a candle that has just gone out, using the smoke rising from the wick.
In this experiment, we’ll learn how to use the smoke coming from an extinguished candle to relight the candle. The experiment requires adult supervision!
· Two candles
· Long matches or a long-neck lighter
Watch the video to see how we conduct this experiment.
Candles are made of wax with a wick at the center. The wax is the candle’s main “flammable material” (the wick is also flammable but its part is relatively small compared to the wax) – it is what burns and makes the fire. But if you tried to light a candle without a wick, just the wax, you would find it really difficult, if not impossible – the wax simply wouldn’t burn.
We’ve conducted other experiments with flammable materials that don’t burn in certain situations: A steel screw, for example, will not catch fire – but if you turn it into thin threads, i.e., steel wool, it becomes flammable. Corn flour and cornstarch will not burn when piled into a heap – but if you blow on them and create a cloud, they can burst into flame.
The same happens with wax. It doesn’t burn when it’s in a solid lump – but a burning wick that melts it (turning it into a liquid – look at the wax under the flame) and then vaporizes it (turning it into a gas around the wax-soaked wick) will cause it to burn.
The reason this happens has to do with access to oxygen. For fire to burn, three elements are required all at once, AKA the “fire triangle”: Fuel, oxygen, and heat. If we have all three – we’ll have fire, but if even one is missing, then no fire.
There is plenty of oxygen in the air, but when you have a solid block of wax, the oxygen doesn’t mix with the wax and won’t react with it. There is only a small amount of oxygen near the surface of the wax that could possibly start the combustion, but it simply isn’t enough.
The same is true for flour and steel. Only when you blow the flour to form a cloud, or turn steel into thin strands of wool, or burn wax into gas, then enough oxygen can react with them and fuel a fire. That is because the oxygen, no longer restricted to the outer layer of material, has access to almost every grain of flour, strand of steel wool, or particle of vaporized wax. When materials are properly distributed in the air, all you need is some heat (from an external source) to get a fire going that can maintain itself – as it supplies itself more and more heat to continue burning.
In this experiment, we put out the candle and saw that in the first few seconds, after it was extinguished, white smoke was rising from the candle. That’s because in those first few seconds, the wick and the wax are still quite warm, so there is a trail of hot wax vapor rising up from the wick. As it rises, the vapors cool, thicken, and solidify, thus forming a “cloud” of small, white wax particles that float in the air; particles that were created from the gaseous wax as it cooled.
In this state, the flammable material (wax) is mixed well with oxygen – a cloud of tiny particles surrounded by oxygen, just like the “flour cloud”. The moment we bring heat close to this cloud, with another candle or a lighter, it lights up and burns through its entire “length,” until reaching the wick and relighting it.
This isn’t the easiest experiment to carry out, because the wax vapors are only produced in the few seconds after the candle is extinguished. The experiment also has to be conducted away from wind, so that the cloud isn’t dispersed: You will need an unbroken trail of white smoke reaching up to the wick. And of course, as with any experiment that involves fire – you must be very careful and perform it only under adult supervision. |
Cluster analysis is a concept that is often found in statistics courses and is present in the daily practice of many fields, including medicine and social science. While cluster analysis can seem like a confusing topic, it is really a basic organizational technique that helps scientists and analysts understand how things may be related to each other. A basic understanding of the underpinnings of this statistical tool makes it less intimidating for students delving into the many fields that require research or data analysis.
Definition of Cluster Analysis
In its simplest form, cluster analysis is a method for making sense of data by organizing pieces of information into groups, called clusters. Data points can be survey responses, images, living organisms, chemical compounds, identity categories, or any other observable type of data that helps professionals explore problems and questions. Clusters can be made up of any number of data points that are related in any number of ways defined by the researcher. Algorithms are often a helpful tool for determining which data points belong within a cluster. Most analysts will select the clustering model that best fits their data and choose an appropriate algorithm based on the model.
Cluster analysis is a broad umbrella for many different methods of statistical analysis that create “clusters” through different organizational means. While there is no set definition for what comprises a cluster, there are several common models for assembling various types of clusters. The selected model will vary depending on the needs of the researcher or the more general tenets of their field of study.
Some of the most common cluster models are hierarchical, density, and distribution. Hierarchical clustering uses an algorithm that connects data points by distance. The idea behind the hierarchical model is that data points nearer to one another are more related than ones farther away. For each set of points, analysts must determine the desired amount of distance required for points to be contained within a single cluster. Density clusters are defined by dense points within the field of data. The sparser areas separating clusters are not grouped within this model. Distribution clustering is most closely related to statistics, and mandates that clusters are determined via the distribution origin of each data point. Points belonging to the same distribution will be grouped together.
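To make these three model families concrete, here is a minimal sketch, assuming the SciPy and scikit-learn libraries are available; the tiny two-dimensional data set and all parameter values are purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster  # hierarchical (distance) model
from sklearn.cluster import DBSCAN                      # density model
from sklearn.mixture import GaussianMixture             # distribution model

# Tiny illustrative data set: two loose groups of 2-D points.
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
                   [5.0, 5.2], [5.1, 4.8], [4.9, 5.0]])

# Hierarchical: link points by distance, then cut the tree at a chosen distance.
tree = linkage(points, method="single")
hier_labels = fcluster(tree, t=2.0, criterion="distance")

# Density: clusters are dense regions; sparse points are labeled -1 (noise).
dens_labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(points)

# Distribution: each cluster is one of two Gaussian components.
dist_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(points)

print(hier_labels, dens_labels, dist_labels)
```

Each call returns one cluster label per point; comparing the three label sets on the same data is a quick way to see how the choice of model changes the grouping.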
Cluster Analysis Professions
Biology, social science and marketing are just a few examples of professional fields that employ cluster analysis. Within biology, for instance, scientists use cluster analysis to group plant organisms within a genus or family that display similar attributes. Social scientists use cluster analysis to determine areas where certain types of crime occur at a higher rate. Population analyses and educational studies are other areas of social science that involve cluster analysis. Columbia University emphasizes the importance of cluster analysis to marketing and business professionals seeking to identify target groups for particular products and services.
Researchers and analysts require tools to find meaningful results from sets of data. Because it can be adapted to such a wide variety of research purposes, cluster analysis is a valuable resource for any professional working with data points. |
More than 2 million people in the United States suffer burns each year, most of which are minor. Some common types of burns are scalds (liquids, grease, steam), fire (flash and flame), direct contact with an extremely hot surface, and sunburn. Between outdoor cooking, holiday celebrations, and recreational activities, summer is, unfortunately, a common time for burns to occur. People of all ages are susceptible, but burns typically impact people in their 20s and children 9 and under more than other age groups.
Outdoor Cooking Safety
- Keep the grill several feet away from other objects, always stay near it, and keep children away.
- Wear short sleeves while grilling and use cooking utensils with long handles.
- Before lighting, check fuel connections for leaks and blockages and after use, shut off propane tank valve.
- Don't start a grill indoors or with the lid closed, never use gasoline as a starter fluid or add starter fluid to hot or warm coals.
- When lighting, keep starter fluid away from charcoal and for propane grills, turn on a long-handled utility lighter before turning on the gas.
- While cooking, keep grease and fat from building up to avoid flare-ups.
- After grilling, store utility lighters inside, dispose of hot coals properly.
Sparkler and Firework Safety
- Fireworks are illegal in many places so follow all local laws; leave fireworks shows to the professionals.
- NEVER allow children to handle or light fireworks.
- Light and hold sparklers one at a time, standing at least 6 feet away from others.
- Avoid wearing loose clothing while holding sparklers and drop used ones in a bucket of water.
Sun Safety
- Take cover under a tree, umbrella or other shade between the hours of 10am and 4pm because this is when UV exposure is highest.
- Use broad-spectrum sunscreen with SPF 30 or higher, and reapply every two hours and after swimming or sweating.
- Keep children under one year out of the sun and don't apply sunscreen to those under six months of age.
- Don't use expired sunscreen or one that is more than three years old.
- Wear clothing to protect skin (look for some with built-in SPF) but keep in mind that wet clothes offer less protection.
- Wear a wide-brimmed hat but if you choose to wear a baseball hat, apply sunscreen to the back of your neck and ears.
- Wear sunglasses that wrap around and block as close to 100% of both UVA and UVB rays as possible (most sunglasses in the U.S. offer this, regardless of cost).
- Be careful of medications that increase photo (light) sensitivity, making you burn more easily.
Campfire Safety
- Use designated fire pits and clear the ground around the area before lighting
- Build the fire downwind
- Never use flammable liquid or leave the fire unattended
- Keep water or fire extinguisher nearby and douse the fire with water when finished
Thermal Burn Safety
- Always feel the surface of a slide or other playground equipment for several seconds before attempting to walk on it or slide down it.
- Metal slides are not always the culprit of thermal burns, which can also happen on plastic or rubber surfaces.
- Always dress children in appropriate clothing for the playground (e.g., shoes, pants).
- Wear shoes instead of going barefoot to prevent asphalt burns.
Treatment for minor burns:
- Apply cool compresses or bathe the burned area.
- Use perfume-free, alcohol-free lotion or aloe to cool and moisturize the burn.
- Wear loose-fitting clothing that doesn't irritate the skin.
- Take over-the-counter pain medicine like ibuprofen (Advil, Motrin) as directed.
- Drink extra fluids.
Never use the following items on a burn:
- Petroleum jelly or ointment
- Harsh soaps
- Over-the-counter benzocaine creams or sprays (may cause allergic reaction)
- Home remedies (toothpaste, etc.)
Seek medical attention if the burn is accompanied by:
- Severe pain, blisters and/or swelling that causes difficulty in breathing.
- Fever over 101° F (38°C).
- If a first- or second-degree burn is larger than 2–3 inches or on the face, major joint, hands, feet, or the genitals.
- If an infant under 1 year old has been sunburned.
Treatment for major burns:
- Stop the burning process.
- Run cool water over burned area.
- Remove all clothing from the burned area.
- Cover with a clean dry cloth.
- Call 911.
CALL 911 IN THE EVENT OF A MAJOR BURN. |
Typically, moray eels grow to a length of about 1.5 metres. The largest known moray eel is the Slender giant moray, which can reach 4 metres in length. Moray eels live in coral reefs and rocky areas, at a depth of about 200m.
These reefs are usually in tropical or subtropical waters. Morays spend most of their time inside deep cracks in rocks. They are carnivores, preying on other fish, cephalopods, mollusks, and crustaceans, and their razor-sharp teeth can pierce your skin, so they can be dangerous. Groupers, other moray eels, and barracudas are amongst their predators. |
In linguistics, the lexicon (or wordstock) of a language is its vocabulary, including its words and expressions. A lexicon is also a synonym of the word thesaurus. More formally, it is a language's inventory of lexemes. Coined in English in 1603, the word "lexicon" derives from the Greek "λεξικόν" (lexicon), neut. of "λεξικός" (lexikos), "of or for words", from "λέξις" (lexis), "speech", "word", and that from "λέγω" (lego), "to say", "to speak".
The lexicon includes the lexemes used to actualize words. Lexemes are formed according to morpho-syntactic rules and express sememes. In this sense, a lexicon organizes the mental vocabulary in a speaker's mind: First, it organizes the vocabulary of a language according to certain principles (for instance, all verbs of motion may be linked in a lexical network) and second, it contains a generative device producing (new) simple and complex words according to certain lexical rules. For example, the suffix '-able' can be added to transitive verbs only, so that we get 'read-able' but not 'cry-able'.
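As a rough illustration of such a generative lexical rule, here is a minimal Python sketch; the tiny word list and its transitivity markings are invented for the example and do not come from any real lexicon.

```python
# Hypothetical mini-lexicon: each verb is marked as transitive or not.
mini_lexicon = {
    "read":  {"transitive": True},
    "wash":  {"transitive": True},
    "cry":   {"transitive": False},
    "sleep": {"transitive": False},
}

def derive_able(verb):
    """Apply the '-able' rule: only transitive verbs accept the suffix."""
    entry = mini_lexicon.get(verb)
    if entry and entry["transitive"]:
        return verb + "-able"
    return None  # the rule blocks forms like '*cry-able' and '*sleep-able'

print([derive_able(v) for v in mini_lexicon])
# ['read-able', 'wash-able', None, None]
```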
Usually a lexicon is a container for words belonging to the same language. Some exceptions may be encountered for languages that are variants, like for instance Brazilian Portuguese compared to European Portuguese, where a lot of words are common and where the differences may be marked word by word.
When linguists study the lexicon, they study such things as what words are, how the vocabulary in a language is structured, how people use and store words, how they learn words, the history and evolution of words (i.e. etymology), types of relationships between words as well as how words were created.
An individual's mental lexicon, lexical knowledge, or lexical concept is that person's knowledge of vocabulary. The role the mental lexicon plays in speech perception and production, as well as questions of how words from the lexicon are accessed, is a major topic in the fields of psycholinguistics and neurolinguistics, where models such as the cohort model have been proposed to explain how words in the lexicon are retrieved.
See also
- Function word
- Lexical access
- Lexical decision
- Lexical markup framework
- Morphology (linguistics)
- Word meaning
- Word recognition
- Aitchison, Jean. Words in the Mind: An Introduction to the Mental Lexicon. Malden, MA: Blackwell, 2003.
- Zuckermann, Ghil'ad (2003). Language Contact and Lexical Enrichment in Israeli Hebrew, Palgrave Macmillan.
- ↑ λεξικός, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
- ↑ λέξις, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
- ↑ λέγω, Henry George Liddell, Robert Scott, An Intermediate Greek-English Lexicon, on Perseus Digital Library
This page uses Creative Commons Licensed content from Wikipedia. |
Answered By: Kimberly Boyd. Last Updated: Feb 10, 2017.
A "citation" is the way you tell your readers that certain material in your work came from another source. It also gives your readers the information necessary to find that source again, including:
- information about the author
- the title of the work
- the name and location of the company that published your copy of the source
- the date your copy was published
- the page numbers of the material you are borrowing
There are two parts to every citation: the in-text citation and the reference entry on your references page at the end of your paper. |
Crying is the only way a baby can communicate. Babies cry to get your attention and to ensure that their needs are met. Crying begins from birth and often increases in intensity and frequency around 6-8 weeks of age.
Generally babies become more settled in their behaviour and temperament around 3-4 months of age. This is because it is around this time that their sleep pattern changes and they are capable of self settling.
All babies cry, with some being more demanding, irritable and unsettled than others by nature.
Why do babies cry?
Coping with a crying, unsettled baby can be very distressing for any parent or carer. Learning and understanding the most common reasons babies cry can help you cope during difficult and stressful times. It becomes a process of elimination, working calmly from top to bottom through a mental list of possible causes, eliminating as you go. Ask yourself, is my baby:
- Over stimulated?
- In pain? |
A Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. Hall effect sensors are used for proximity switching, positioning, speed detection, and current sensing applications.
In a Hall effect sensor, a thin strip of metal has a current applied along it. In the presence of a magnetic field, the electrons in the metal strip are deflected toward one edge, producing a voltage gradient across the short side of the strip (perpendicular to the feed current). Hall effect sensors have an advantage over inductive sensors in that, while inductive sensors respond to a changing magnetic field which induces current in a coil of wire and produces voltage at its output, Hall effect sensors can detect static (non-changing) magnetic fields.
In its simplest form, the sensor operates as an analog transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall plate can be determined. Using groups of sensors, the relative position of the magnet can be deduced.
Frequently, a Hall sensor is combined with threshold detection so that it acts as, and is called, a switch. Commonly seen in industrial applications such as pneumatic cylinders, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. They can also be used in computer keyboards, an application that requires ultra-high reliability.
Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing, tachometers and anti-lock braking systems. They are used in brushless DC electric motors to detect the position of the permanent magnet. In a wheel with two equally spaced magnets, the voltage from the sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disk drives.
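A minimal sketch of the speed-timing arithmetic, assuming we can count the sensor's voltage peaks over a known time window and that two magnets give two pulses per revolution:

```python
def rpm_from_hall_pulses(pulse_count, window_seconds, pulses_per_rev=2):
    """Estimate shaft speed from Hall-sensor pulses counted in a time window."""
    revolutions = pulse_count / pulses_per_rev
    return revolutions / window_seconds * 60.0   # revolutions per minute

# Example: 40 pulses counted in 0.5 s with two magnets on the wheel -> 2400 RPM.
print(rpm_from_hall_pulses(40, 0.5))
```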
A Hall probe contains an indium compound semiconductor crystal such as indium antimonide, mounted on an aluminum backing plate, and encapsulated in the probe head. The plane of the crystal is perpendicular to the probe handle. Connecting leads from the crystal are brought down through the handle to the circuit box.
When the Hall probe is held so that the magnetic field lines pass at right angles through the sensor of the probe, the meter gives a reading of the value of magnetic flux density (B). A current is passed through the crystal which, when placed in a magnetic field, has a "Hall effect" voltage developed across it. The Hall effect is seen when a conductor is passed through a uniform magnetic field. The natural drift of the charge carriers causes the magnetic field to apply a Lorentz force (the force exerted on a charged particle in an electromagnetic field) to these charge carriers. The result is a charge separation, with a buildup of either positive or negative charges on the bottom or on the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field.
A Hall probe should be calibrated against a known value of magnetic field strength. For a solenoid the Hall probe is placed in the center.
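As a worked sketch of that calibration step: the field at the center of a long solenoid is B = μ0·n·I, so a coil with a known winding density and current provides a reference flux density against which the probe reading can be scaled. The numbers below are illustrative only.

```python
import math

MU_0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A

def solenoid_field(turns_per_metre, current_amps):
    """Reference flux density at the centre of a long solenoid (tesla)."""
    return MU_0 * turns_per_metre * current_amps

# Example: 2000 turns/m carrying 1.5 A gives a known field of about 3.77 mT.
B_reference = solenoid_field(2000, 1.5)

# If the uncalibrated probe reads, say, 3.50 mT inside this solenoid,
# the correction factor to apply to later readings is:
probe_reading = 3.50e-3
calibration_factor = B_reference / probe_reading
print(B_reference, calibration_factor)
```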
When a beam of charged particles passes through a magnetic field, forces act on the particles and the beam is deflected from a straight path. The flow of electrons through a conductor forms such a beam of charge carriers. When a conductor is placed in a magnetic field perpendicular to the direction of the electrons, they will be deflected from a straight path. As a consequence, one plane of the conductor will become negatively charged and the opposite side will become positively charged. The voltage between these planes is called the Hall voltage.
When the force on the charged particles from the electric field balances the force produced by the magnetic field, the separation of charges stops. If the current is not changing, then the Hall voltage is a measure of the magnetic flux density. Basically, there are two kinds of Hall effect sensors: linear sensors, whose output voltage depends linearly on the magnetic flux density, and threshold sensors, whose output voltage changes sharply when the magnetic flux density crosses a preset level.
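For the linear kind, the ideal Hall relation V_H = I·B/(n·q·t), with drive current I, carrier density n, elementary charge q and plate thickness t, can be inverted to recover the flux density from the measured voltage. A minimal sketch with illustrative material constants:

```python
Q_E = 1.602e-19        # elementary charge, C

def hall_voltage(current_A, flux_density_T, carrier_density_m3, thickness_m):
    """Ideal Hall voltage across the plate: V_H = I*B / (n*q*t)."""
    return current_A * flux_density_T / (carrier_density_m3 * Q_E * thickness_m)

def flux_density_from_voltage(v_hall, current_A, carrier_density_m3, thickness_m):
    """Invert the linear relation to estimate B from a measured Hall voltage."""
    return v_hall * carrier_density_m3 * Q_E * thickness_m / current_A

# Illustrative numbers only (roughly InSb-like carrier density, 0.2 mm plate):
v = hall_voltage(5e-3, 0.1, 2e22, 2e-4)
print(v, flux_density_from_voltage(v, 5e-3, 2e22, 2e-4))
```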
The key factor determining sensitivity of Hall effect sensors is high electron mobility. As a result, the following materials are especially suitable for Hall effect sensors:
Hall effect sensors are linear transducers. As a result, such sensors require a linear circuit for processing of the sensor's output signal. Such a linear circuit:
In some cases the linear circuit may cancel the offset voltage of Hall effect sensors. Moreover, AC modulation of the driving current may also reduce the influence of this offset voltage.
Hall effect sensors with linear transducers are commonly integrated with digital electronics. This enables advanced corrections to the sensor's characteristics (e.g. temperature coefficient corrections) and digital interfacing to microprocessor systems. In some IC Hall effect sensor solutions a DSP is used, which provides for more choices among processing techniques.
The Hall effect sensor interfaces may include input diagnostics, fault protection for transient conditions, and short/open circuit detection. It may also provide and monitor the current to the Hall effect sensor itself. There are precision IC products available to handle these features.
A Hall effect sensor may operate as an electronic switch.
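A small sketch of how such switch behaviour is typically derived from a linear reading: the field is compared against an 'operate' threshold and a lower 'release' threshold so the output does not chatter around a single trip point. The threshold values here are arbitrary.

```python
class HallSwitch:
    """Turn a linear Hall reading into an on/off output with hysteresis."""

    def __init__(self, operate_mT=3.0, release_mT=1.5):
        self.operate_mT = operate_mT   # field above this switches the output on
        self.release_mT = release_mT   # field must fall below this to switch off
        self.on = False

    def update(self, field_mT):
        if not self.on and field_mT >= self.operate_mT:
            self.on = True
        elif self.on and field_mT <= self.release_mT:
            self.on = False
        return self.on

switch = HallSwitch()
for b in [0.5, 2.0, 3.5, 2.0, 1.0]:      # magnet approaching, then receding
    print(b, switch.update(b))           # stays on until the field drops below 1.5 mT
```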
In the case of linear sensor (for the magnetic field strength measurements), a Hall effect sensor:
Sensing the presence of magnetic objects (connected with the position sensing) is the most common industrial application of Hall effect sensors, especially those operating in the switch mode (on/off mode). The Hall effect sensors are also used in the brushless DC motor to sense the position of the rotor and to switch the transistors in the right sequence.
Hall effect sensors may be utilized for contactless measurements of DC current in current transformers. In such a case the Hall effect sensor is mounted in the gap in magnetic core around the current conductor. As a result, the DC magnetic flux can be measured, and the DC current in the conductor can be calculated.
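A rough sketch of the current-sensing arithmetic: for an ideal high-permeability core with a small air gap, the flux density in the gap is approximately B ≈ μ0·N·I/l_gap, so a measured B can be converted back into the conductor current. The geometry below is purely illustrative.

```python
import math

MU_0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A

def current_from_gap_field(B_gap_T, gap_length_m, turns=1):
    """Estimate conductor current from the flux density measured in the core gap.
    Assumes an ideal high-permeability core, so B_gap ~= MU_0 * N * I / gap."""
    return B_gap_T * gap_length_m / (MU_0 * turns)

# Example: 12.57 mT measured across a 1 mm gap around a single conductor -> ~10 A.
print(current_from_gap_field(12.57e-3, 1e-3))
```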
The Hall sensor is used in some automotive fuel level indicators. The main principle of operation of such indicator is position sensing of a floating element. This can either be done by using a vertical float magnet or a rotating lever sensor.
Developed by Everett A. Vorthmann and Joseph T. Maupin for Micro Switch (a division of Honeywell) in 1969, the switch was known to still be in production until as late as 1990. The key-switches have been tested to a lifetime of over 30 billion keypresses and have dual open-collector outputs for reliability. The Honeywell Hall effect switch is most famously used in the Space-cadet keyboard, a keyboard used on LISP machines. |
People with type 2 diabetes have insulin resistance, which means the body cannot use insulin properly to help glucose get into the cells. In people with type 2 diabetes, insulin doesn’t work well in muscle, fat, and other tissues, so your pancreas (the organ that makes insulin) starts to put out a lot more of it to try and compensate. "This results in high insulin levels in the body,” says Fernando Ovalle, MD, director of the multidisciplinary diabetes clinic at the University of Alabama in Birmingham. This insulin level sends signals to the brain that your body is hungry.
Type 2 diabetes is often treated with oral medication because many people with this type of diabetes make some insulin on their own. The pills people take to control type 2 diabetes do not contain insulin. Instead, medications such as metformin, sulfonylureas, alpha-glucosidase inhibitors and many others are used to make the insulin that the body still produces more effective.
A study by Mayer-Davis et al indicated that between 2002 and 2012, the incidence of type 1 and type 2 diabetes mellitus saw a significant rise among youths in the United States. According to the report, after the figures were adjusted for age, sex, and race or ethnic group, the incidence of type 1 (in patients aged 0-19 years) and type 2 diabetes mellitus (in patients aged 10-19 years) during this period underwent a relative annual increase of 1.8% and 4.8%, respectively. The greatest increases occurred among minority youths.
A population-based, nationwide cohort study in Finland examined the short -and long-term time trends in mortality among patients with early-onset and late-onset type 1 diabetes. The results suggest that in those with early-onset type 1 diabetes (age 0-14 y), survival has improved over time. Survival of those with late-onset type 1 diabetes (15-29 y) has deteriorated since the 1980s, and the ratio of deaths caused by acute complications has increased in this group. Overall, alcohol was noted as an important cause of death in patients with type 1 diabetes; women had higher standardized mortality ratios than did men in both groups.
While this can produce different types of complications, good blood sugar control efforts can help to prevent them. This relies heavily on lifestyle modifications such as weight loss, dietary changes, exercise and, in some cases, medication. But, depending on your age, weight, blood sugar level, and how long you've had diabetes, you may not need a prescription right away. Treatment must be tailored to you and, though finding the perfect combination may take a little time, it can help you live a healthy, normal life with diabetes.
Although the signs of diabetes can begin to show early, sometimes it takes a person a while to recognize the symptoms. This often makes it seem like signs and symptoms of diabetes appear suddenly. That’s why it’s important to pay attention to your body, rather than simply brushing them off. To that end, here are some type 1 and type 2 diabetes symptoms that you may want to watch out for:
- American Diabetes Association
- Joslin Diabetes Center
- Mayo Clinic
- International Diabetes Federation
- Canadian Diabetes Association
- National Institute of Diabetes and Digestive and Kidney Diseases
- Diabetes Daily
- American Heart Association
- Diabetes Forecast
- Diabetic Living
- American Association of Clinical Endocrinologists
- European Association for the Study of Diabetes
Random blood sugar test. A blood sample will be taken at a random time. Blood sugar values are expressed in milligrams per deciliter (mg/dL) or millimoles per liter (mmol/L). Regardless of when you last ate, a random blood sugar level of 200 mg/dL (11.1 mmol/L) or higher suggests diabetes, especially when coupled with any of the signs and symptoms of diabetes, such as frequent urination and extreme thirst.
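For reference, the two units quoted above are related by the molar mass of glucose (about 180 g/mol), which gives a conversion factor of roughly 18 mg/dL per mmol/L. A small sketch of the arithmetic:

```python
MG_DL_PER_MMOL_L = 18.0   # approximate conversion factor for glucose

def mgdl_to_mmoll(mg_dl):
    return mg_dl / MG_DL_PER_MMOL_L

def mmoll_to_mgdl(mmol_l):
    return mmol_l * MG_DL_PER_MMOL_L

print(mgdl_to_mmoll(200))   # ~11.1 mmol/L, the random-test diabetes threshold
print(mmoll_to_mgdl(11.1))  # ~200 mg/dL
```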
Yet carbs are processed differently in the body based on their type: While simple carbs are digested and metabolized quickly, complex carbs take longer to go through this system, resulting in more stable blood sugar. “It comes down to their chemical forms: A simple carbohydrate has a simpler chemical makeup, so it doesn’t take as much for it to be digested, whereas the complex ones take a little longer,” Grieger explains.
Adult and pediatric endocrinologists, specialists in treating hormone imbalances and disorders of the endocrine system, are experts in helping patients with diabetes manage their disease. People with the disease also may be cared for by a number of primary care providers including family or internal medicine practitioners, naturopathic doctors, or nurse practitioners. When complications arise, these patients often consult other specialists, including neurologists, gastroenterologists, ophthalmologists, acupuncturists, surgeons, and cardiologists. Nutritionists, integrative and functional medicine doctors, and physical activity experts such as personal trainers are also important members of a diabetes treatment team. It is important to interview a new health care professional about their experience, expertise, and credentials to make sure they are well qualified to help you.
Although urine can also be tested for the presence of glucose, checking urine is not a good way to monitor treatment or adjust therapy. Urine testing can be misleading because the amount of glucose in the urine may not reflect the current level of glucose in the blood. Blood glucose levels can get very low or reasonably high without any change in the glucose levels in the urine.
Type 2 diabetes used to be called adult-onset diabetes or non-insulin dependent diabetes because it was diagnosed mainly in adults who did not require insulin to manage their condition. However, because more children are starting to be diagnosed with T2D, and insulin is used more frequently to help manage type 2 diabetes, referring to the condition as “adult-onset” or “non-insulin dependent” is no longer accurate.
With gestational diabetes, risks to the unborn baby are even greater than risks to the mother. Risks to the baby include abnormal weight gain before birth, breathing problems at birth, and higher obesity and diabetes risk later in life. Risks to the mother include needing a cesarean section due to an overly large baby, as well as damage to heart, kidney, nerves, and eye.
In the sunshine, molecules in the skin are converted to vitamin D. But people stay indoors more these days, which could lead to vitamin D deficiency. Research shows that if mice are deprived of vitamin D, they are more likely to become diabetic. In people, observational studies have also found a correlation between D deficiency and type 1. "If you don't have enough D, then [your immune system] doesn't function like it should," says Chantal Mathieu, MD, PhD, a professor of experimental medicine and endocrinology at Katholieke Universiteit Leuven in Belgium. "Vitamin D is not the cause of type 1 diabetes. [But] if you already have a risk, you don't want to have vitamin D deficiency on board because that's going to be one of the little pushes that pushes you in the wrong direction."
It isn't always easy to start an exercise regimen, but once you get into a groove, you may be surprised at how much you enjoy it. Find a way to fit activity into your daily routine. Even a few minutes a day goes a long way. The American Diabetes Association recommends that adults with diabetes should perform at least 150 minutes of moderate-intensity aerobic physical activity per week (spread over at least three days with no more than two consecutive days without exercise). You don't have to start with this right away, though. Start with five to 10 minutes per day and go from there. To stay motivated, find a buddy, get a fitness tracker, or use another measurement tool that can help you see your progress.
Type I diabetes, sometimes called juvenile diabetes, begins most commonly in childhood or adolescence. In this form of diabetes, the body produces little or no insulin. It is characterized by a sudden onset and occurs more frequently in populations descended from Northern European countries (Finland, Scotland, Scandinavia) than in those from Southern European countries, the Middle East, or Asia. In the United States, approximately three people in 1,000 develop Type I diabetes. This form also is called insulin-dependent diabetes because people who develop this type need to have daily injections of insulin.
What are the symptoms of diabetes in men? Diabetes is a common lifelong condition that affects the ability of the hormones to manage blood sugar levels. It affects men and women differently. Learn about the signs and symptoms of diabetes in men. This article includes information on how diabetes can affect sex and cause erectile dysfunction. |
What is Directing Function of Management?
Instructing, guiding, supervising and influencing people enabling them to achieve organizational objectives is called directing. In the process of directing, employees are coached to develop communication and are encouraged to accomplish their goals.
Directing is a key element in the process of management. After formulating the plans for accomplishing the pre-determined goals, the organizational structure is prepared and suitable persons are designated to appropriate roles, and the organization commences its operations. However, necessary actions will only initiate after a command in chief provides direction to the higher-level management.
According to Ernest Dale, “Directing is what has to be done and in what manner through dictating the procedures and policies for accomplishing performance standards“.
Effective direction can be ensured only after understanding a few concepts related to the organization. These concepts relate to: (i) the aims, objectives, and plans of the organization as understood by each individual manager; (ii) the organization and its elements; (iii) the policies, procedures and rules under which the organization will operate, and the reasons for them; (iv) the major problems faced by the concern, and particularly what each manager can do to solve them; and (v) complete and up-to-date information on significant factors such as business forecasts and changes in policies, procedures, etc.
Importance of Directing Function of Management
Directing, or the direction function, is usually considered the central point of any organization, around which objectives are achieved. Experts call direction the “life spark of an enterprise”. It is also referred to as the actuating function of management, since direction is necessary to initiate the operations of an enterprise. Being at the center of all enterprise activities, it brings numerous benefits to any organization, which are as follows:
It Initiates Actions
Directing as a function is the primary starting point of work operations for those who execute the plans. Direction initiates action, as executors understand their roles and begin work as per the instructions laid out for them. The directing function is closer to implementation than to planning; it is a post-planning stage.
It Integrates Efforts
By providing direction, superiors coach, influence and instruct their subordinates to work. Direction relates the efforts of different departments to each other and promotes integration amongst them. Persuasive leadership and effective communication play a significant role in fostering this integration. Integration of efforts helps bring overall effectiveness and stability to operations.
Means of Motivation
Effective managers and directors use the directing function as an element of motivation for employees and to improve their work performance. For instance, incentives or compensation, whether monetary or non-monetary, are commonly used as motivating tools; such measures act as a “morale booster” for employees. For any organization, motivated employees are best regarded as the mantra of success.
It Provides Stability
Directing ensures stability and balance in an organization, which is necessary for long-term survival in the market. To achieve this, managers use a judicious blend of the four elements of the direction function: persuasive leadership, effective communication, strict supervision and efficient motivation.
Coping with Changes
A natural human instinct is to resist any change. However, adapting to a changing environment is necessary for growing a business and achieving ambitious goals. It is the directing function that helps the organization cope with changes in the environment, both internal and external. Effective communication also helps in coping with these changes.
A manager’s role is to clearly communicate the nature of any changes and their probable impact on the employees. This helps in easier adaptation and smoother running of the organization. For instance, if a company is shifting from electrical to solar power, this is a significant change in the way the enterprise and its production are powered.
The resulting changes include renewed training of employees on how to work with the new power system. This may not be easily accepted by subordinates, since they may view it as an extra effort on their part. The manager has to come forward and explain the longer-term benefits of the shift and the impact it would have on the reputation of the brand, a shift that employees should be proud of in the outside world.
Efficient Utilization of Resources
Directing should aim at utilizing resources judiciously, avoiding wastage, overlapping effort, unfair practices and loopholes. Mapping the right resource to the right job is also important. Timing should be used critically by any manager: a few jobs need to be taken care of at a very opportune moment, and adequate resources must be aligned for the job accordingly. This helps extract maximum value from those efforts and helps the firm grow organically.
It is quite evident from the above discussion that direction remains the heart of the management process. Its function is to pump guidance and motivation to all parts of the organization and ensure a healthy environment. |
A Graphing Linear Functions Worksheet is a short set of questions on a specific topic. A worksheet can be prepared for any subject, and its topic may serve as a complete lesson or a small sub-topic. Worksheets can be used to revise a subject for assessments, to recapitulate material, and to help students understand the topic more precisely or deepen their knowledge of it.
Objectives of your Graphing Linear Functions Worksheet
A Graphing Linear Functions Worksheet should be child friendly, and its difficulty level should be kept low. The worksheet should be clear in its questioning, avoiding any ambiguity; a question should not have more than one possible answer. A worksheet should serve as a tool to reinforce the content taught to the child, should be pictorial where possible, and should include skills such as drawing, analyzing, describing and reasoning. It should be short, no more than two pages; otherwise it is better called a workbook. A sample pictorial item is sketched below.
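To illustrate the kind of pictorial item such a worksheet might contain, here is a minimal sketch, assuming Python with matplotlib and NumPy, that draws one linear function for a "graph this line" question; the function y = 2x + 1 and the output file name are just examples.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample worksheet item: "Graph the linear function y = 2x + 1 and mark the y-intercept."
x = np.linspace(-5, 5, 11)
y = 2 * x + 1                                  # slope m = 2, y-intercept b = 1

plt.plot(x, y, marker="o")
plt.scatter([0], [1], color="red", zorder=3)   # highlight the y-intercept (0, 1)
plt.axhline(0, color="gray", linewidth=0.5)    # x-axis
plt.axvline(0, color="gray", linewidth=0.5)    # y-axis
plt.title("y = 2x + 1")
plt.savefig("linear_function_item.png")        # hypothetical output file name
```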
Designing a Graphing Linear Functions Worksheet with Ease
Creating a Graphing Linear Functions Worksheet is not an easy task. The worksheet must be short, crisp, simple and child friendly. The skills needed for designing a worksheet, the different types of worksheets, and sample worksheets are explained in detail below. Traditionally, worksheets are prepared in many subjects and can be short or elaborate, with or without pictures. A modern, innovative approach to designing a worksheet is the 3 E's Worksheet method: for the student, the worksheet should be EASY, ENJOYABLE and EFFORTLESS. Such worksheets rekindle the teaching-learning process and take at most about 10 minutes each to complete. Skills covered include application, conceptual understanding, diagramming, labeling and identification of terms.
A Graphing Linear Functions Worksheet Can Be Used for Many Purposes
A Graphing Linear Functions Worksheet can be used by a teacher, tutor or parent to enrich a student's or child's understanding of the content. Worksheets can be used as a testing tool to check scholastic and mental aptitude during admission procedures, or as a feedback activity after a field trip, study tour or educational trip. They can also be used to provide extra practice and to track the development of skills such as reading, comprehension, analysis and illustration. A worksheet helps the student progress in a particular topic.
Benefits of a Graphing Linear Functions Worksheet
A Graphing Linear Functions Worksheet is one of the handiest tools for a teacher. The student only has to fill in the worksheet, since most of the material is already printed, and so can work through it quickly. A worksheet gives the student a basic revision of a topic and improves application skills (for example, the "answer in a word" items in the sample worksheets). Worksheets can test any mode of learning, such as diagrams, extended writing, puzzles, quizzes, paragraph writing, picture reading and experiments. They can be designed specifically for gifted children, to provide input on a topic beyond the textbook, or as a helping hand to improve the understanding of slower learners.
Difficulties of Graphing Linear Functions Worksheet
As the saying goes, every coin has two sides; a Graphing Linear Functions Worksheet has many advantages as well as some disadvantages. A statutory caution applies: never use excessive worksheets. Worksheets may be given as revision after a lesson is taught, or assigned on completion of the lesson to assess the child's understanding. The student can become habituated to writing precise answers and dependent on prompting, and correcting worksheets can become a burden for the teacher. It can also be difficult for students to keep the worksheets organized by topic. All said and done, worksheets are surely aids that help the student effectively, and the benefits outweigh the disadvantages, but one should not disregard the difficulties.
How to Write a Graphing Linear Functions Worksheet
First, divide the chosen topic into smaller, easily manageable parts (rather than taking a whole unit, take lessons, and within lessons a topic or sub-topic). Then outline the parameters: the depth of the topic, the time required for completion, the set of skills to be included and, most importantly, the purpose for which this particular Graphing Linear Functions Worksheet is framed.
The collection of information plays a crucial role in designing the Graphing Linear Functions Worksheet. Data may be gathered from all available resources, such as textbooks from various publishers, journals, newspapers and encyclopedias. The type of worksheet is the next priority; the teacher should stay within the students' level and not add topics or material outside the suggested syllabus. |
How the Plight of Polar Bears Has Shown Us the Dangers of Climate Change
Here's why we need International Polar Bear Day.
The impacts of climate change are evident in every corner of our planet, but in recent years the plight of the polar bear has become especially apparent. For conservationists, Feb. 27 is a chance to highlight the decline of polar bears and call for conservation actions through what has become known as International Polar Bear Day.
The sea ice across the Arctic has been decreasing at a rate of 14% per decade, and this rapid loss deprives polar bears of their natural habitat used for building dens and catching seals, their main source of nutrition.
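As a rough worked example of what a steady 14% loss per decade compounds to (a simplification, since the real decline is not perfectly exponential):

```python
def ice_fraction_remaining(decades, loss_per_decade=0.14):
    """Fraction of sea ice left after a number of decades at a constant decline rate."""
    return (1 - loss_per_decade) ** decades

for d in range(1, 6):
    print(d, round(ice_fraction_remaining(d), 2))
# After 5 decades at 14% per decade, only about 47% of the ice extent would remain.
```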
Take Action: Shout Out Smarter Cities to Help the Environment
As a result, polar bears have been increasingly pushed to the brink of starvation. Alaska and the Northwest Territories have already documented a 40% loss in polar bear population between 2001 and 2010, from 1,500 to 900 bears.
Effects of climate change in 2018 were felt everywhere: Africa, South America, Asia-Pacific, and Southeast Asia all suffered severe droughts; Indonesia and Canada experienced destructive wildfires; 70 tropical cyclones and hurricanes hit the Northern Hemisphere, compared to the long-term average of 53; parts of US suffered great economic damage caused by Hurricanes Florence and Michael.
But despite the fact that the 50 least developed countries in the world contribute only 1% to the overall greenhouse gas emissions, 99% of total fatalities and 90% of economic losses resulting from the effects of climate change directly affect the residents of these countries.
Climate change also puts more than half of plants and animal species across the world at risk. The polar bear and its declining habitat, however, have recently garnered more documentation and gained more momentum in conservation circles.
Over the past few years, viral photos and videos of polar bears highlight the need to address climate change immediately.
In 2017, National Geographic photographers Paul Nicklen and Cristina G. Mittermeier spotted a starving polar bear on Somerset Island. The video quickly went viral and helped start a conversation.
“I can’t say that this bear was starving because of climate change, but I do know that polar bears rely on a platform of sea ice from which to hunt,” Mittermeier wrote in a personal narrative published in the National Geographic magazine one year later.
Mittermeier also emphasized that the disappearing ice leading to loss of habitat, could lead the animals to wander on different lands.
“More bears will get stranded on land, where they can’t pursue the seals, walruses, and whales that are their prey and where they will slowly starve to death,” she said.
Another viral video from Novaya Zemlya, a Russian archipelago, aligns with what Mittermeier pointed out.
The video shows about 52 polar bears breaking into the remote region of Russia, likely searching for food.
Scientists suggested that climate change could be the reason for aggressive behavior displayed by polar bears in a similar invasion at a weather station in the Arctic.
The US Geological Survey warned in 2007 that two-thirds of the global population of polar bears could be wiped out by 2050 because of thinning sea ice.
Whether or not social media has proved to be a useful platform to raise awareness about climate change is still debatable, but it is now increasingly being used by scientists to debunk myths about the alarming global issue. And they believe not all hope has been lost.
Countries around the world are taking steps to help conserve the Arctic and save polar bears. Maybe it’s time we all do. |
- A type of protein made by certain white blood cells in response to a foreign substance (antigen). Each antibody can bind to only a specific antigen. The purpose of this binding is to help destroy the antigen. Antibodies can work in several ways, depending on the nature of the antigen. Some antibodies destroy antigens directly. Others make it easier for white blood cells to destroy the antigen.
Definition from: Physician Data Query via Unified Medical Language System at the National Library of Medicine
- An antibody is a protein component of the immune system that circulates in the blood, recognizes foreign substances like bacteria and viruses, and neutralizes them. After exposure to a foreign substance, called an antigen, antibodies continue to circulate in the blood, providing protection against future exposures to that antigen.
Definition from: Talking Glossary of Genetic Terms from the National Human Genome Research Institute
Related discussion in the Handbook
See also Understanding Medical Terminology. |
UCF researchers use nanoscale patterns to hide images and information in plain sight
Plasmonic structures show patterns only in selected IR bands; they can also be dynamically switched on and off.
To see the intended information encoded into this plasmonic structure, a person must look through an IR lens or camera tuned to the correct IR band. (Image: UCF)
Scientists at the University of Central Florida (UCF; Orlando, FL) have found a way to hide information on materials and only make it visible to a person using the right technology [1]. "We found we can create a surface where we preferentially control absorption of light," says Debashis Chanda, an associate professor of physics, optics, and nanoscience who has developed the technique.
The trick is to put the information on a surface that is patterned with nanoscale structures, which can fool the naked eye by reflecting only a solid color rather than the intended information. To get the intended information, a person must look through an infrared camera tuned to the correct IR band. Not only can information be hidden this way, but the information can also be changed so that the messages invisible to the human eye can be erased and rewritten [2].
To hide images within the IR spectrum while the same area appears as a solid color in the visible spectrum, the researchers created a three-level layered plasmonic system with a polymer layer imprinted with nanoscale holes sandwiched between a gold mirror at the bottom and a gold layer at the top, with holes that match the polymer layer.
Images can be imprinted on top of the plasmonic sandwich; aspects of the holes, such as size and depth, help dictate which IR band the image can be seen in. Without looking through an IR camera tuned to the right band, the top of the device looks like a solid color, such as a yellow square.
The researchers developed a way to erase and display the image in selected IR bands by adding a layer of phase-change material -- vanadium dioxide -- within the plasmonic sandwich that dynamically changes the light reflection from the surface from 100% to 0% and back as the phase change is triggered.
Applications include anticounterfeiting security, infrared tagging (for example, the presence of a designer label could be confirmed with a look through an IR camera), and infrared camouflage. It also has military applications, such as confirming which assets are friendly and which are enemy by tags on their surfaces that are only visible in a specific infrared band or by dynamically changing the information for IR camouflage.
1. Daniel Franklin et al., Light: Science & Applications (2018); https://doi.org/10.1038/s41377-018-0095-9.
2. Alireza Safaei et al., ACS Nano (2018); https://doi.org/10.1021/acsnano.8b06601. |
Talk Off the Map
We talk off the tree map together to brainstorm our sentences about what we like to do alone and what we like to do together. Talking off the map is when we read the tree map from the top down. This helps set the stage for writing and it allows students to hear and verbalize what they will be writing. At this time of year, we use Echo Reading (I say something and students repeat) because their experience with the maps is limited.
I say: Let's read our map, starting at the top and reading down. Touch the top box like I am and say "I like to." Students repeat. Now move your finger down to the left box. (I am modeling on the document camera) I say: "write." (students repeat) Now move your finger down again to the last box. (I am modeling on the document camera) I say: "alone." (students repeat)
I continue in the same fashion as we talk off the map for each word.
Writing Off the Map
I put lines on the same paper as my tree map. It allows the kids to reference the words they need in close proximity to their writing. While I am modeling, it is still helpful for students to have their map and writing lines available to them simultaneously. Transferring information from one paper/place to another is still difficult for my students at this time of year.
I begin: Now we are going to use our tree map information to write two sentences. We are going to write one for what we like to do alone and one sentence for what we like to do together.
We begin by writing our first words from the top of the Tree Map, "I like to" I model writing that on my first line and then students write.
Next, we choose a word from the left side. I say: I am going to chose the word 'write' because I like to write alone. Watch me as I copy that word exactly from my middle box to my sentence. I model writing that word with my sentence on the line.
I direct: Now you choose one 'alone' word you like to do and write it after your "I like to" words, just like I did! As students are doing that, I monitor and assist where necessary.
I say: Now we need to write our last word "alone" so our reader knows that this sentence is about what we like to do ALONE. Everyone show me the word 'alone' on your map with your finger. Where does it say 'alone?' I walk around and see if students can correctly identify the word at the bottom of the tree map.
I continue: Watch me as I write that word at the end of my sentence and last I will put a period to indicate that my sentence is complete. I model.
I direct: Now you write the word alone and a period after it to finish your first sentence. As students are choosing and writing, I monitor and assist where necessary.
When students are finished, I call their attention back to the screen and my paper on the document camera. I follow the same procedure for 'together.'
When students are finished with their second sentence, they raise their hand so they can read their sentences to me. Once they have read their sentences to me, they illustrate their writing.
Reading Our Writing
I always have students read their writing back to me. We do this every day, so students are familiar with the procedure. I have them read back to me so that I can see how they are applying sight word knowledge, letter/sound and blending knowledge and tracking. This particular writing piece also allows me to see if they understand the return sweep.
If students are struggling, I have them echo me and I help them to track by using hand over hand and moving their finger along as we read.
Independently, students will place action pictures (which represent things done alone or together) in a circle map titled “Together We’re Better”.
Two non-examples should be placed outside the circle, but inside the frame of reference.
I direct: We have been working with circle maps. Today I want you to look at each picture. If it is an example of something you like to do together, place it in the circle map. If it is not something you like to do together, place it outside of the circle map. Are there any questions?
As students are cutting and gluing, I walk around the room to monitor and clarify where necessary.
As students finish, they raise their hand. I go to their desk and ask: What is something you like to do together at school? Tell me in a sentence. If they can tell me that in a sentence, I build on that and challenge them further: Who do you (play, write, sing, etc...) with at school? Why do you like to ___ together at school with ___?
This verbal exchange gives me an idea if they understand the concept of working together and if they are making the real world connection to it. It also gives me an idea if they can or cannot convey thoughts verbally and/or answer a question, which is stressed in the Speaking and Listening standards.
After everyone is finished, we watch a puppet show of The Pilgrims and Wampanoags as a unit celebration!
People often take to the streets and riot when food prices soar and the threat of potential starvation starts to loom large. Such was the case in 2008, when the price of rice shot through the roof. Rioting took place around the world -- from Egypt to Haiti to Bangladesh -- as food security evaporated across much of the developing world. Because of the ability of richer nations to protect their populations' food supply, poorer nations are often all too aware of what happens when stocks of food start to dwindle and the price of what's left makes it impossible for many to obtain.
When the situation becomes truly dire -- perhaps a drought has disrupted crop production for several growing seasons or a violent regime has armed the border, blocking food imports -- then food security issues can transform from a chronic shortage to an acute hardship, and famine can descend.
Children and the elderly are most susceptible to the trauma of famine, and malnutrition in general. About 6 million children fall victim to hunger each year; that's an average of 17,000 a day [source: CNN]. Both children and the elderly lack the stamina that healthy adults possess, although the latter population will start to suffer as well as time goes on. Disease goes hand-in-hand with famine because hungry people's bodies are less able to fight off infections. If food isn't eventually obtained, famine victims will waste away, a process that is often accelerated by illness.
When the drought or war or other disaster that caused the famine (or just the famine itself) forces victims to flee their homeland, the conditions can be even more challenging, as refugee populations are often pushed into marginal lands that aren't ideally suited for agriculture. If such a situation occurs, humanitarian aid groups like UNICEF try to swoop in with emergency supplies to help tide over the refugees until a more permanent solution can be devised.
View of Purbrook in Medonte Township, Upper Canada, 1836
Following the Constitutional Act of 1791, the colony of Quebec was divided to create Upper Canada (present-day Ontario) and Lower Canada (present-day Québec). Military and civilian settlers submitted petitions to the Governor to obtain Crown land. Sons and daughters of Loyalists were also entitled to free lands.
Land Boards were created in 1789 to oversee land matters, to facilitate settlement in the four districts of Hesse, Nassau, Luneburg and Mecklenburg, and to grant certificates of location to the settlers in these districts.
The Land Boards were abolished in 1794 when the land granting process was centralized through the Executive Council. Therefore, petitions relating to Ontario Loyalists prior to 1791 are to be found in the Land Boards of Upper Canada, 1765-1804 or in the Land Petitions of Lower Canada, 1764-1841.
The petitions date predominantly from 1783 to 1841.
Each applicant for a grant or lease was required to submit a written petition. He or she also had to supply the necessary supporting documentation such as certificates from a local magistrate confirming his or her age, good character, loyalty and identity, or a discharge certificate from the Army or Navy. In many cases, the documents were returned to the applicant, so they are not included with the land petition. The petitioner paid a small fee for processing the petition up to the point of granting the land.
The records of the land granting process focused on four essential steps:
- allocation of specific lots to the petitioners;
- surveying of the land to establish precise boundaries;
- performance of the settlement duties (clearing and cultivating a certain acreage, erecting a dwelling of minimum size); and
- issuance of the deed.
The key to a successful petition was to identify oneself without any doubt and to justify any special entitlement. Therefore, the petitions will often contain an applicant's story detailing services, losses and suffering during the American Revolutionary War or the War of 1812. They may also contain discharge certificates, letters of introduction from prominent individuals in Britain, reports by the Surveyor General or the Attorney General on technical and legal matters, and some lists of settlers by region.
The petitions were received at the Executive Council Office. They were presented and read before a meeting of the Land Committee of the Executive Council, and a decision was recommended by the Councillors to the Lieutenant Governor. The clerk of the Council compiled the Minute Books from his notes of Council and Committee proceedings.
The Clerk assigned an alpha-numeric reference to the petition and entered the reference into the Land Book margin next to the appropriate Minute. The letter is based on the initial letter of the petitioner's surname and the petition number represents the order in which the petition appears in the Land Book: e.g. V 5 means the fifth petitioner whose name began with V for the Land Book in question. The letters I and J (I-J) and U and V (U-V) are often formed into one sequence.
The archival reference includes a bundle number corresponding to the sequence of the Land Books: e.g., V6/5 means bundle V6, petition number 5. The bundle numbers start from 1 again in 1841. A connection can be made between the Land Books and the individual petitions, and vice versa.
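For readers who work with these references in a spreadsheet or script, the short sketch below shows how a reference of this form can be assembled and split apart. The function names and the exact string handling are illustrative assumptions for this example only, not part of the archives' own systems.

```python
# Illustrative sketch only: the helper names and string format are assumptions,
# not an official tool of the archives.

def make_reference(surname: str, bundle: int, petition: int) -> str:
    """Build a reference like 'V6/5': the letter comes from the surname's
    initial, followed by the bundle number and the petition's position."""
    letter = surname[0].upper()
    return f"{letter}{bundle}/{petition}"

def parse_reference(ref: str) -> dict:
    """Split a reference like 'V6/5' into its letter, bundle, and petition parts."""
    bundle_part, petition = ref.split("/")
    return {
        "letter": bundle_part[0],        # initial letter of the petitioner's surname
        "bundle": int(bundle_part[1:]),  # sequence of the Land Book
        "petition": int(petition),       # order of the petition within the bundle
    }

print(make_reference("Vanderberg", 6, 5))  # -> V6/5
print(parse_reference("V6/5"))             # -> {'letter': 'V', 'bundle': 6, 'petition': 5}
```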
Miscellaneous bundles bring together other types of petitions, such as:
- petitions dating before the creation of Upper Canada;
- petitions lacking supporting documentation or receipts for fees;
- petitions that were refused or rejected;
- petitions from non-residents or underage or otherwise unqualified petitioners;
- claims deemed fraudulent; and
- duplicate petitions.
This database contains petitions for grants or leases of land and other administrative records for Upper Canada from the following collections:
It provides access to more than 82,000 references to individuals who lived in present-day Ontario between 1763 and 1865.
Indexes by name were originally created on cards for both series. The Upper Canada Land Petitions (RG 1 L 3) were derived from lists of names, not directly from the petitions, so errors or omissions in the lists are repeated on the index cards. Moreover, the spelling of names on petitions varies widely and handwriting is sometimes illegible. The index cards have been microfilmed (reels C-10810 to C-10836 and H-1976 to H-1978). The database was created from the list of names at the beginning of each bundle of petitions, and not from the card index or the actual petitions.
For the Upper Canada Sundries, references for land petitions were taken from the finding aid and an index by name was created on cards. Information appearing on the cards has been added to this database. The records for the Upper Canada Sundries have not been digitized and are only available on microfilm.
The search screen enables you to search by:
- Given Name(s)
For group petitions, subjects can be entered in the surname box (e.g. name of a township or town, militia, Indians, land, schools, church, etc.).
Note that some entries include only an initial for the given names. Sometimes there is no given name on the document. In some cases, it may be more useful to search by surname only. Names can also be written in different ways. The entries reflect the spelling of names as they appear on the lists for each bundle.
When you have entered your search terms, click on "Submit." The number of hits found will be shown at the top of the results screen.
How to Interpret the Results
Your search results will be posted as a results summary list.
Search Results Page
The search results page displays the following fields:
- Given name(s)
How to Obtain Copies
The documents have been digitized and are available online. Make sure to carefully note the microfilm, volume, bundle and page numbers in order to easily find the relevant digitized images.
The records for the Upper Canada Sundries have also been digitized and are available through the Heritage project.
After a cascade of dams dissected the Yangtze River and locals started using modern fishing gear, the fate of aquatic biodiversity was decided: 60% of fish species in the river are threatened.
The International Union for Conservation of Nature and Natural Resources (IUCN) announced the extinction of the Chinese paddlefish and the wild Yangtze sturgeon on June 21 in its updated Red List of threatened species.
The global sturgeon reassessment published by the IUCN on Thursday revealed that 100 percent of the world’s remaining 26 sturgeon species are now at risk of extinction, up from 85 percent in 2009.
The reassessment has also confirmed the extinction of the Chinese paddlefish (Psephurus gladius) and that the Yangtze Sturgeon (Acipenser dabryanus) has moved, from critically endangered, to extinct in the wild.
According to the reassessment, sturgeons have been overfished for their meat and caviar for centuries. Therefore, stronger enforcement of regulations against the illegal sale of sturgeon meat and caviar is critical to stop further declines. Besides, dams affect all sturgeon species migrating to their breeding grounds, while rivers warming due to climate change further disrupts sturgeon reproduction.
Both Chinese paddlefish and Yangtze Sturgeon are representative aquatic species of the Yangtze River Basin.
The Chinese paddlefish was listed as a first-class state protected animal in China in 1989 and was last seen alive in 2003.
Along with the loss of the Chinese paddlefish, another sturgeon species, the Yangtze sturgeon is now classified as “extinct in the wild.” Existing individuals in the river are the result of the release of captive stocks.
Historically, this species experienced unsustainable levels of fishing. Furthermore, mesh sizes of fishing nets have been reduced, thereby capturing young sturgeon, especially during the periods when many juveniles concentrate to feed. The primary traditional fishing season in the main stream of the Yangtze River was between March and August, with more than 30% of the catch processed between April and May. This is also the spawning season for Acipenser dabryanus, so spawning stocks were particularly vulnerable to capture.
The construction of the Gezhouba Dam in 1981 and the Three Gorges Dam in 2003 also caused major adverse effects to the habitat of this species and, prior to the wild population disappearing, these constructions caused the population to be restricted to the upstream river, above the dams. The Xiangjiaba Dam, constructed in 2008, is situated in the middle of this sturgeon's spawning reach and is therefore expected to adversely affect it through habitat fragmentation and associated habitat degradation. More recently, two more dams (Baihetan and Xiluodu) were constructed upriver, changing the temperature and hydrology of the river.
Fishing was banned completely by 2021 in the main stem of the Yangtze River. In 2000, the first national nature reserve was created in the upper Yangtze River. The area of this reserve was extended in 2005 to mitigate the conflict between hydroelectric projects and maintenance of ecosystem functionality. The reserve is now the largest aquatic reserve in China, with a total length of 1,162.6 km, including 436.5 km of the main river.
From 2007 to 2018, approximately 200,000 juveniles of Yangtze Sturgeon were released into the upper Yangtze River for stock rehabilitation. In 2018, around 40 mature individuals were released into the same waters. To date, there has been no evidence that the species is currently reproducing in the wild, but Chinese scientists hope that the fishing ban, ongoing stocking and the national action plan will rebuild a self-sustaining population within the next few decades.
Climate change is a significant change in earth’s overall conditions, which can be measured by major changes in the distribution of weather patterns – such as temperature or precipitation, among other effects – that occur over several decades or longer. Climate change is mainly caused by global warming due to increasing concentrations of greenhouse gases in our atmosphere.
The consequences of climate change vary across the globe. Here in the Northeast we are already seeing significant impacts to the coastline due to rising sea levels. We are also seeing increased precipitation, increased air and ocean temperatures, more flooding, higher storm surge, more intense storms, and more.
The National Oceanic and Atmospheric Administration (NOAA) estimates a sea level rise of 3.05 feet by 2065 in the northeastern U.S. However, some believe sea levels could be rising even faster. For example, sea levels along the northeast coast rose nearly 3.9 inches in just a two-year period (2009-2010) according to a Feb 24, 2015 study from the University of Arizona and NOAA. At this rate, sea levels could be more than 8 feet higher in 50 years.
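As a rough check of that figure (our own back-of-the-envelope arithmetic, not a NOAA projection), extending the 2009-2010 rate linearly over 50 years gives roughly the number cited:

```python
# Back-of-the-envelope linear extrapolation of the 2009-2010 rate cited above.
# This is illustrative arithmetic only, not a NOAA model projection.

rise_inches = 3.9       # observed rise along the northeast coast, 2009-2010
period_years = 2
rate_per_year = rise_inches / period_years   # ~1.95 inches per year

projected_inches = rate_per_year * 50        # extended over 50 years
projected_feet = projected_inches / 12

print(f"{projected_feet:.1f} feet")          # ~8.1 feet
```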
Several studies also suggest precipitation amounts will increase in the future – projected to increase by 20-30% by 2070 to 2100 (Stratz and Hossain, 2014; Kunkel et al., 2013). Specifically in the Northeast, heavy rainfall events have increased (~70% from 1958-2012) and are projected to increase further in the coming years (Melillo et al., 2014).
Risks to Coastal Energy Infrastructure
The climate-related impacts discussed above pose risks to coastal energy infrastructure. In July 2013, the U.S. Department of Energy published a report outlining vulnerabilities of energy facilities to climate trends. The report cited issues such as increasing air/water temperature, increasing intensity of storms, sea level rise, and storm surge all as having potential negative implications. Among such implications is increased risk of physical damage.
Over the years, we have advocated for better assessments of flooding and other coastal risks at energy facilities located on Cape Cod Bay, where there are two large energy facilities: Pilgrim Nuclear Power Station and NRG’s Canal Station (oil and gas).
Given that Pilgrim will shut down by May 31, 2019, it is more important than ever to fully understand the risks associated with coastal hazards. For example…
1) Pilgrim’s nuclear waste storage areas are located too close to shore. Its dry cask nuclear waste storage project is currently sited about 150 feet from the Cape Cod Bay shoreline, and its so-called “low-level” waste storage area is only about 30 feet away from the coastal bank. These areas are vulnerable to storm surge, rising sea levels, flooding, salt water degradation, and other coastal risks – raising concerns about potential accidents and leaks. Nuclear waste could remain at this location for decades, if not hundreds of years – meaning coastal impacts will increasingly become more of a problem over time.
Unless Pilgrim’s dry casks will be transported off-site within a decade, regulators and officials must ensure that Pilgrim’s waste is moved to higher elevation, farther away from Cape Cod Bay and securely protected from natural and man-made hazards, or moved off-site immediately.
2) Climate-related impacts could also undermine successful remediation of contaminants on-site. Upon shutdown, Entergy could opt for long-term “SAFSTOR,” a decommissioning process that would allow Pilgrim to sit idle for up to 60 years before decommissioning is completed. We know that Pilgrim has been releasing radioactive materials and other contaminants deliberately and accidentally into groundwater, surface water, and soils since it began operating in 1972. As sea levels increase, so do groundwater elevations. Contamination present on Pilgrim’s site will, no doubt, continue to migrate toward Cape Cod Bay even after Pilgrim stops generating power. Achieving a fully remediated site will also progressively become more difficult as a result of climate-related issues.
Regulators and elected officials need to ensure that Pilgrim’s pollution is surveyed and cleaned up within a decade of closure in order to protect public and environmental health.
NRG, the owner of Canal Station, will soon build an additional fossil fuel facility (Unit 3) that would have a lifespan of 40 years. NRG has admitted that the new Unit 3 facility will be affected by nearly 3 feet of sea level rise (3 feet above mean sea level based on NOAA data), flooding, storm surge and wave action in the future.
In February 2016, we submitted comments on the company’s petition to build the new unit. Some of our concerns related to coastal hazards are:
- NRG should outline how and when it will check the stability of underground tanks/pipes, and how leaks will be detected. As sea levels rise, so will groundwater levels – impacts from salt water intrusion on underground infrastructure should be addressed.
- NRG should consider future climatic conditions that are projected to increase precipitation and temperatures, particularly in the Northeast; and use the most conservative value of 2.93 feet above mean sea level by 2060 to develop plans to protect the site.
- The state should require Canal Station’s Units 1 & 2, which are outdated, environmentally destructive and have been operating under an expired Clean Water Act NPDES permit since 1994, be decommissioned before an additional unit is approved.
We are also concerned that Massachusetts is becoming overly-reliant on natural gas and NRG’s project is another step in that direction. Our state is already heavily dependent on this source — more than 50% of our in-state electricity generation currently comes from natural gas. Becoming overly-reliant on natural gas creates financial risk, places the economy and consumers at risk from fluctuating gas prices, and weakens efforts to cut emissions. While natural gas electrical generation produces much less carbon emissions than coal or oil, it still produces emissions. Drilling, storage, extraction, and pipeline activities associated with natural gas result in methane leaks. Compared to carbon dioxide, methane is a far more potent greenhouse gas that could escalate climate problems.
Flooding and Sea Level Rise Assessments
As a result of the 2011 Fukushima Daiichi Nuclear Power Plant disaster, the U.S. Nuclear Regulatory Commission (NRC) required all nuclear power plants in the U.S., including Pilgrim, to assess on-site flood hazards. Pilgrim’s flood hazard evaluation report, called the “AREVA report,” was issued in March 2015.
Florida-based Coastal Risk Consulting (CRC) assisted us with evaluating the AREVA report. CRC’s report, Analysis of AREVA Flood Hazard Re-Evaluation Report: Pilgrim Nuclear Power Station, Plymouth, MA (“CRC Report”), was published in Dec. 2015 and outlines how the AREVA report underestimates and omits important risk factors, uses outdated data, and does not consider future risk estimates for rainfall and sea level rise. For example:
- “Local intense precipitation” is found in Entergy’s AREVA Report as a primary hazard of concern that could inundate the site with several feet of rainwater. Despite this, CRC found that this mechanism is underestimated in Entergy’s report since it uses outdated precipitation data and does not consider future climatic conditions that are projected to increase precipitation amounts during heavy rainfall events (think of the October 2015 events in South Carolina).
- While the storm surge analysis in Entergy’s AREVA Report was robust, sea level rise over the next 50 years is understated since it relies heavily on historic sea level rise rates – producing a sea level rise more than 2.5 feet lower than current projections.
- Groundwater, subsidence, and erosion are not considered in Pilgrim’s flood assessment; further underestimating risks (especially related to extreme storm events).
- Pilgrim’s flood assessment focuses solely on past risk conditions and does not include scenarios that address updated projections for future risk, specifically with regard to climate change. The CRC report shows that the Pilgrim site will be inundated with non-storm tidal flooding by mid-century and that a surge from a category 4 hurricane could already flood the site today.
We also commissioned site maps from Northeastern Geospatial Research Professionals (NGRP). NGRP recently updated the maps in Feb. 2016.
These maps confirm elevation inaccuracies in Entergy’s site plans for Pilgrim’s infrastructure and the nuclear waste storage areas. This shows that there is more accurate elevation information that the NRC should consider when determining flood risks at Pilgrim, and that Entergy’s claims that Pilgrim is safe from flood impacts are based on some inaccurate information. For example:
- Entergy’s site maps prior to 2014 use outdated data from 1968 (mean sea level today is >6 in. higher than 1968) to develop site plans for infrastructure (e.g., dry cask storage facility); those plans do not reflect current NAVD88 topographical elevations and do not provide an accurate basis for evaluating risks of sea level rise and other coastal impacts.
- The height of the breakwater jetties and other elevations in NGRP’s maps appear significantly lower than those shown in Entergy’s plans, and are uneven, demonstrating that the site is not as protected from flooding and sea level rise as Entergy reports.
- There are discrepancies ranging from +4 in. to -15 ft. when comparing Entergy’s plans to more current elevation information because Entergy uses mixed and outdated standards of measurements for vertical elevations and water levels.
We will continue to raise these concerns and pressure regulators to require changes to ensure Pilgrim’s infrastructure and nuclear waste storage areas are protected from climate-related coastal impacts.
- NASA Climate Page. Current news and data streams about global warming and climate change.
- NOAA Climate Page. Data, tools, and information to help people understand and prepare for climate variability and change.
- Earth Nullschool. Current wind, weather, ocean, and pollution conditions, as forecast by supercomputers, on an interactive animated map. Updated every three hours.
- Is Nuclear Power Clean Energy? CCBW Factsheet (Mar. 2016)
- OF NUCLEAR INTEREST: Replacing Pilgrim with Renewables, Conservation. Dec 2015.
- National Geographic: As Sea Level Rises, Are Coastal Nuclear Plants Ready? Dec. 2015.
- Short Answers to Hard Questions About Climate Change. Nov. 2015. New York Times, J. Gillis.
- JRWA/CCBW’s letter to the Boston Globe. Oct 2015. Re: Article claiming Pilgrim is carbon-free.
- NRDC. Nov 2013. Energy Experts Respond to Scientists’ Letter Advocating Nuclear Power.
- Nuclear Facts: Nuclear is not Carbon Free.
- Nov 2013. Of Nuclear Interest: Nuclear Power in Our Changing Climate.
- Huffington Post. May 2014. Nuclear Energy Is Not a Solution for Global Warming.
- NIRS. Nuclear Power and Climate Change.
1. China created the world’s first paper money.
Nearly 700 years before Sweden issued the first European banknotes in 1661, China released the first generally circulating currency. In fact, usage of paper notes dates back even earlier, to the 7th-century Tang Dynasty. For centuries copper coins had been China’s primary currency. In order to carry large amounts of cash, people hefted around an ever-increasing number of these coins–not the easiest, or safest, thing to do over long distances. In an attempt to lighten their load, merchants began to deposit these coins with each other and were issued paper certificates for the coins’ value. The paper was certainly lighter. So light, in fact, that it is believed to have earned the nickname “flying money,” for its tendency to blow away in a stiff wind. The use of paper money remained in place for the next 200 years, until a copper shortage and inflation from overproduction of the bills forced merchants and Song Dynasty government officials alike to issue and accept paper notes backed by gold reserves—the first legal tender in the world.
2. The Inca built a great empire—without the use of money at all.
Unlike the neighboring Aztecs or Mayas, who used goods such as beans and textiles to buy and sell products, there was no concept of “money” among the Inca. So, how did they manage to create the largest—and wealthiest—empire in South America? Through a highly regimented system known as the “Mit’a.” From the age of 15, Incan males were required to provide physical labor to the state for a set number of days, sometimes as much as two-thirds of the year. They built public buildings and palaces, as well as an extensive system of roads (14,000 miles in all), which linked the empire together and allowed for its ongoing expansion. In return, the government provided all the basic necessities of life; food, clothing, tools, housing, etc. No money changed hands. Indeed, even if there had been money, there was simply nowhere for an Incan to spend it—no shops, no markets, no malls. That’s not to say that Incan society didn’t value the massive piles of gold and silver sitting beneath their lands. In fact, the Inca used these precious metals as part of their religious worship, considering gold the “sweat of the sun,” and silver the “tears of the moon.”
3. Medieval merchants developed an early version of the credit card.
In an era when currency was often unavailable (and few people were literate), the tally stick, a forerunner of today’s high-tech credit cards, became increasingly popular in Europe. In this early version of financial record keeping, notches were made on a wooden stick to indicate the amount lent—and owed. The sticks were then split down the middle; the creditor kept one half and the debtor the other. When a payment was made, the sticks were paired up, and the payment was marked on the stick. The tally stick system also had another built-in benefit: It was nearly impossible to counterfeit, as the shape, size and grain of the wooden halves had to match up perfectly. Tally sticks were used in much of Europe, but probably nowhere as extensively as in England. For more than 700 years, tally sticks were used to collect taxes from local citizens, until the system was finally abandoned in 1826. Eight years later, when the British parliament finally decided to get rid of the thousands of leftover tally sticks being kept in storage, they decided to burn them in an underground furnace that heated the House of Lords, resulting in a massive fire that destroyed most of the complex—the worst fire to hit London since the Great Fire of 1666.
4. Czarist Russia created a tax payable only in animal fur.
The arrival of Russian hunters and trappers in what was then the remote wilderness of Siberia in the 1600s kicked off a “fur rush” that many historians have compared to the later California gold rush in its intensity. At the height of the Russian fur trade, these pelts had become so valuable that they were called “soft gold” and accepted as hard currency throughout the empire. By some estimates, they accounted for more than 10 percent of Russia’s total revenue. Eager to reap the financial rewards of the trade, Russia’s czarist government began to regulate the price of the pelts. By the early 17th century, in an attempt to keep up with the massive worldwide demand, they went one step further, imposing a new tax on thousands of Siberian peasants. The “yasak” was an annual tribute, payable solely in fur, required of every male over the age of 18.
5. Paul Revere played a key role in the creation of early American currency.
Revere, famed for his 1775 “midnight ride” to warn American colonists of an impending British invasion, was actually far more famous in his day for his work as an engraver and as one of the colonies’ premier silversmiths. Just months after his exploits near Concord, it was Revere who was tasked with designing the engraving plates for the first Continental currency, or Continentals, produced by Massachusetts to fund the war. By the end of the American Revolution, these early paper notes had become worthless, and one of the first projects undertaken by the U.S. government following the ratification of the Constitution was the passage of the Coinage Act, establishing the U.S. Mint and regulating coin production. The first regularly circulating coins in American history were delivered in March 1793, consisting of exactly 11,178 one-cent pieces—or $111.78—and made of rolled copper provided, in part, by Paul Revere.
6. The first gold rush in American history took place in North Carolina, not California.
In 1799, the 12-year-old son of a Cabarrus County farmer named John Reed discovered a gold nugget weighing an estimated 17 pounds, so large that his family used it as a doorstop. When more gold was discovered in neighboring counties, it kicked off the first prospecting boom in American history, drawing thousands of people to the area, many of them newly arrived immigrants. By the early 19th century, more than 30,000 North Carolinians were mining for gold, making it the second largest profession in the state after agriculture. The prospect of financial reward was so high that professional mining companies soon entered the scene, bringing with them workers and engineers with years of experience extracting precious metals from South American mines. For more than 30 years, all gold used in U.S. coins was mined in North Carolina, and a U.S. Mint was opened in the city of Charlotte in 1837. However, decades of mining eventually depleted the region’s reserves, and by the 1860s, the Carolina Gold Boom had ended.
7. Counterfeiting was rampant during the American Civil War.
Money tampering has been around nearly as long as money itself has existed. Early coins were shaved around the edges, with the perpetrator pocketing the excess precious metals. Rome, among other ancient civilizations, made counterfeiting a crime punishable by death. The U.S. government struggled with the issue from its inception, going so far as to hire an ex-counterfeiter to design some of its first coins. Despite these efforts, the problem continued, likely reaching its apex during the American Civil War. With dozens of different notes and coins being issued by state, local and federal governments on both sides, it was nearly impossible to detect the real from the fake. It’s been estimated that at least one-third (and possibly half) of all money then in circulation was fraudulent. In fact, the U.S. Secret Service was created in 1865—not to protect the president—but to combat counterfeiting. The term “greenback,” a now-common term for money, also traces its origins to the war. The phrase was derived from the intricate green ink designs used on the reverse side of Civil War-era banknotes, which the U.S. Treasury Department hoped would prevent counterfeiting.
8. West Point Mint was “the Fort Knox of silver” and has a whole lot of gold.
When most people think of vast amounts of precious metals tucked away in secure locations, it’s Fort Knox that comes to mind. Few people know that a tiny facility in New York State once rivaled Knox in the wealth department, and was home to the largest concentration of silver in the United States. Opened in 1937 and originally known as the West Point Bullion Depository, the Mint is located just miles from the U.S. Military Academy at West Point. There are currently more than 54 million ounces of gold in “deep storage” at the facility, with an estimated value of more than $80 billion dollars, making West Point the second largest gold depository after Fort Knox. Though it did not achieve official status as a U.S. Mint until 1988, it had begun striking pennies and gold medallions decades earlier. Today, it issues coins struck with the “W” mint mark in gold, silver and platinum, including the only U.S. coins issued to commemorate the September 11 attacks.
Most people fall somewhere on a continuum of communication styles. At one end of the continuum, they are passive, while on the other end fall those who are aggressive.
Kids sometimes tend to hang out at one end or the other. Sometimes they are Cold, and hang out on the passive side getting trampled on, and then become Hot and swing to aggressiveness when they’ve had enough.
In the middle is assertiveness, which is where we want our students to be – to just BeCool.
Passive Communication: Students who are passive don’t stand up for themselves. They are Cold and give up when they confront a difficult situation. Some call them doormats, because people walk all over them.
Aggressive Communication: Students who are aggressive don’t control their anger. They use angry words and physical violence to confront difficult issues. They are Hot and blow up.
Assertive Communication: Students who are assertive fall in the middle. They don’t let others walk all over them, nor do they blow up. They’re Cool as addressed in the BeCool series. They stand up for themselves and others.
Assertiveness = Strength: Many people tend to see others who are passive as weak, but nice, and those who are aggressive as strong, but mean.
Really though, passivity is not kind to ourselves. And aggressiveness isn’t strength, but lack of control.
True kindness comes from those who are assertive, who are able to be kind to themselves and others. And true strength is assertiveness.
The Consequences of Passive and Aggressive Communication: When children are too passive, their self-esteem suffers. They don’t get their needs met. They may become targets for bullies and struggle to make friends.
Children who are too aggressive often become the bullies. They too have difficulty making friends (after all, who wants to be friends with someone who is mean?) Their self-esteem also suffers as they are often getting in trouble for their behavior.
Self Control: Self-Control is a big component of how passive or aggressive a student is. Being passive is a failure of self-control; students who are passive just give up. Aggressiveness is also a failure of self-control. Students who are aggressive haven’t learned to control their anger and lash out at others. Assertive students can control their emotions, identify their needs, and communicate them clearly and directly, a skill that many adults have yet to master!
The best way to teach students how to manage conflicts with both peers and adults is with the BeCool paradigm. Students learn that the Cold (passive) and Hot (aggressive) confrontation styles lead to negative consequences. This VideoModeling curriculum contrasts these negative consequences with the positive results of being Cool (assertive).
Here are some further tips to teach assertiveness to your students.
• Teachers Lead the Way: In fact, passive and aggressive communication styles are prevalent not only among children but also among adults. As with everything else, children learn what they see modeled for them, so the first step in building assertiveness in children is modeling it for them. According to Kristin Stuart Valdes of Edutopia, teaching assertiveness starts with teaching basic communication skills and showing students how it’s done.
Teachers can’t model assertiveness for their students if they aren’t assertive themselves. Those that are too passive have a hard time handling issues that arise with parents or administration and have difficulty managing a classroom. Teachers with aggressive communication styles are often seen as ‘mean’ and have difficulty building relationships with students, parents, and colleagues because they are intimidating. When teachers adopt an assertive communication style they set an example for their students. If you are too passive or too aggressive as a teacher, recognize this as a growth area for. You can use the BeCool paradigm for yourself, not just your students!
• The Power of ‘NO!’: Saying no is a life skill. Most 2-year-olds have no problem saying no, but passive kids have learned to say yes. Role-play, read stories, and give students opportunities to practice saying no in safe situations.
• Boundaries: What are boundaries? Boundaries are like a fence with a gate. They keep out what we don’t want, but let in what we do want. Teach kids to listen to their gut feelings and to make decisions based on what is best for them, not on pleasing people. Teach kids to set healthy limits with those around them. Use the Circles Curriculum, proven effective by Harvard, to teach students the abstract rules of social boundaries in a concrete way.
• Self-Care: We hear about self-care a lot for parents and teachers, but it is something kids need to know about too. They need to learn to listen to their bodies. To sleep when they are tired, eat something healthy when they are hungry, and calm down when they are upset. Teach kids that they must fill their own cup and then they can help to fill others’.
• I-Messages and Identifying Feelings: Check out our post here on SEL and how to incorporate it into the classroom. Students need to learn to identify their feelings. Sometimes students struggle to identify any emotion other than anger, especially if they have an aggressive communication style. Help them learn to recognize their emotions and use ‘I feel’ messages to communicate their feelings.
• Take Time: When we’re angry, our first reaction is often not our best. Teach students to take time to think about how they want to respond (respond, not react) to a difficult situation. They don’t have to solve a problem right away. Impulsiveness is often what gets kids in trouble. They either instinctively withdraw and give in, or lash out. Teach them to take a moment to ‘cool down’ and think through their problem and how they want to handle it.
The child who is ‘left behind’ most is the one who leaves school without transition readiness.
Do Instructional Activities - Discovery
Description: Discovery activities lead learners to make discoveries. Use discovery activities for exploratory learning, to reveal principles, trends, and relationships, and to inspire curiosity about a topic. Discovery activities include virtual laboratories, case studies, and role playing activities (Horton, p. 125).
Types of Discovery Activities
Virtual Labs and Field Trips. Virtual labs and field trips include the testing and evaluation of information through experiments and examination (UMUC, 2011). Learners can try all kinds of experiments without the risk of damaging equipment or injuring themselves and others in a virtual lab. They can also conduct experiments not possible in even the most generously funded real laboratory (p. 128).
- Challenge learner’s assumptions. Design experiments to challenge what learners believe to be true.
- Prescribe experiments. Do not just give learners a laboratory and assume they will make up their own experiments. Assign experiments to perform.
- Reuse your virtual laboratory. Developing a simulated laboratory is a lot of work. Consider using the same laboratory in multiple activities or courses.
- Use virtual laboratories to prepare students for real-world laboratories by beginning with simple, limited representations of the real lab (Horton, p. 128-129).
Approaches to teaching labs resource
- Suggestion: Second Life is a resource for virtual labs and field trips where numerous worlds have already been designed for students to explore. Also, Second Life allows students to collaborate online. (Refer to Section Three: Online Learning Tools/Second Life for more information and resources on Second Life).
Example of a Virtual Genetics Lab
Case Studies. Case studies involve the evaluation of systems by observing and analyzing simulated situations or processes. Case studies provide relevant, meaningful experiences in which learners can discover and abstract useful concepts and principles (Carnegie Mellon, “Case Studies,” para. 1). Case studies are effective discovery activities when learners must actively apply analytical and problem-solving skills to the events cited in the case study. “Cases are the building blocks of problem-solving learning environments,” and can also be categorized as a Connect activity (Jonassen, 2011, p. 184). Case studies are especially well suited for “teaching judgment skills required to cope with ambiguous situations commonly faced in real life” (Horton, p. 131).
- Provide a rich mixture of case materials such as reports, contracts, instruction manuals, drawings, blueprints, spreadsheets, charts, graphs, diagrams, video or audio interviews.
- Guide study of the case by prompting discovery of critical principles of the case study. Mayer (as cited by Reiser et al.) found “people learn better with guided discovery methods in which the instructor imposes some structure on the task than with pure discovery methods in which students are free to interact as they please” (p. 321). Provide learners with:
- What the case study shows.
- What to notice.
- Questions to answer. They direct learners’ searches and control what discoveries they are likely to make.
- What to think about. Ask questions that guide learners to think about how the case relates to the subject of the lesson or course (Horton, p. 134).
Role Playing. Just as children learn how to be adults by playing at being adults, adult learners can learn by playing the role of someone else, which requires the learner to view events from a different perspective. The instructor must state the goal and assign each student a role to accomplish that goal. Learners must research their roles and collaborate through online discussion forums to play out their roles (Horton, p. 135).
- Introduce the scenario fully
- Assign roles related to the subject or use generic roles
- Match the role to the personality and skills of the learner
- Require learners to use their assigned role names in messages (pp. 138-140).
On July 20, 1969, Margaret Hamilton’s computer code allowed Neil Armstrong to become the first human on the moon. Sure, Armstrong will forever be remembered in history as the first man to walk on the moon, but it took a woman to land him there. As a result of her work on the Apollo mission, Hamilton became a pioneer of software engineering, a term she actually coined herself.
At 24 years old with an undergrad degree in mathematics, Hamilton taught high school classes and then took a job as a computer programmer at MIT to support her husband through Harvard Law. Then, on August 10, 1961, NASA issued its first major contract for the Apollo program with MIT to develop the guidance and navigation system for the Apollo spacecraft. Hamilton led the software engineering division to develop the building blocks of software engineering. At the time, Hamilton and her team were pioneers on a new frontier. Or, as she explained: “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West.”
By mid-1968, Hamilton led a team of 400 people who worked on Apollo’s software. Hamilton was so dedicated to the project that she would come to the lab on weekends and evenings to continue programming.
Hamilton’s hard work paid off on July 20, 1969. Just minutes before Armstrong and Buzz Aldrin were about to make their historic moon landing, alarms started ringing. The computer that was running the Apollo 11 lunar module was about to make a shift that could have aborted the entire mission. Instead of completing the history-making moon landing, the astronauts would have had no other choice but to return to Earth. Luckily, Hamilton and her team had programmed a different set of instructions that took control of the computer and the mission was able to continue.
Almost 50 years after the historic landing, Hamilton was finally recognized for her crucial role in the Apollo 11 mission. On November 22, 2016, President Obama awarded her the Presidential Medal of Freedom, saying Hamilton was part of “that generation of unsung women who helped send humankind into space.”
The Thirteen Colonies were British colonies in North America founded between 1607 (Virginia) and 1732 (Georgia). Although Great Britain held several other colonies in North America and the West Indies, the colonies referred to as the “thirteen” are those that rebelled against British rule in 1775 and proclaimed their independence on July 4, 1776. They subsequently constituted the first thirteen states of the United States of America.
Virginia was the first permanent English settlement in America. The colonists who established Jamestown on May 13, 1607, named Virginia in honor of Elizabeth I (1533–1603), the “Virgin Queen” of England. The successful settlement was sponsored by the London Company, a joint-stock venture chartered by King James I (1566–1625) in 1606. Captain John Smith (c. 1580–1631) led the colony.
In 1624 James I revoked Virginia's charter, after which it became a royal colony, which it remained until 1776. Virginia was the first colony to begin the move for independence from England in 1776, and it was a major player in the American Revolution (1775–83). It became the tenth state in the Union on June 25, 1788.
Religious persecution drove a group of English Puritans , who wished to separate from the Church of England, to the New World. These Pilgrims were blown off course in their ship, the Mayflower , and landed on Cape Cod in 1620. They settled in an abandoned village, which they named Plymouth .
In 1629 a nonseparatist Puritan group settled to the north in the Massachusetts Bay colony. The group was headed by the patriarch John Winthrop (1588–1649). Along with other leaders, Winthrop intended to make the colony an exemplary Christian society. Massachusetts went on to become the sixth state of the Union on February 6, 1788.
New Hampshire gained a separate identity as a royal colony in 1679 when the British government declared that it was not part of the Massachusetts Bay colony. Still, Massachusetts overshadowed New Hampshire throughout the colonial period. The boundary between them was not settled until 1740.
New Hampshire was the only colony to experience almost no military activity during the American Revolution, and it was the first to declare its independence. New Hampshire was the ninth state to enter the Union on June 21, 1788.
Unlike many other colonies, Maryland was established with an almost feudal system in which the land was considered the property of the English lord who governed it. The territory was given as a proprietorship by England's King Charles I (1600–1649) to George Calvert (c. 1580–1632). Lord Calvert later left the land to his son, Cecilius (1605–1675), who is better known as Lord Baltimore. He named the region Maryland after the queen consort of Charles I, Henrietta Maria (1609–1669) of France. The colony of Maryland was fully under Baltimore's control.
Early Dutch settlers in Connecticut were dislodged by the large migration of English Puritans who came to the colony between 1630 and 1642. The Puritans established settlements at Windsor (1633), Wethersfield (1634), and Hartford (1636). In 1639 these three communities joined together to form the Connecticut colony, choosing to be governed by the Fundamental Orders, a relatively democratic framework for which the Reverend Thomas Hooker (c. 1586–1647) was largely responsible.
After a number of years of bitter border disputes, Connecticut received legal recognition as a colony by England in 1662. A relatively autonomous colony and strong supporter of the American Revolution, Connecticut became the fifth state of the Union on January 9, 1788.
In 1636 the English clergyman Roger Williams (c. 1603–1683) established a colony at Providence seeking religious freedom for a group of nonconformists from the Massachusetts Bay colony. Others followed, settling Portsmouth (1638), Newport (1639), and Warwick (1642). In 1644 Williams journeyed to England, where he secured a legislative grant uniting the four original towns into a single colony, the Providence Plantations. Williams secured a charter for Rhode Island and the Providence Plantations from King Charles II (1630–1685) in 1663, which guaranteed religious freedom and substantial local autonomy.
Stephen Hopkins (1707–1785) signed the Declaration of Independence as a delegate from Rhode Island, which became the thirteenth state on May 29, 1790.
The colony of Delaware belonged to three different countries during the seventeenth century. Permanent settlements were made by the Swedes in 1638 (at Wilmington, under the leadership of a Dutchman, Peter Minuit [1580–1638]) and by the Dutch in 1651 (at New Castle). The Dutch conquered the Swedes in 1655, and the English conquered the Dutch in 1664. The English king's brother James (1633–1701), the duke of York (who later became James II, king of England), ceded the colony to the English proprietor William Penn (1644–1718), who kept Delaware closely tied to his family and to his beloved Pennsylvania until 1776.
John Dickinson (1732–1808), a delegate from Delaware, signed both the Articles of Confederation and the Constitution. On December 7, 1787, Delaware became the first state to ratify the federal Constitution.
The Italian explorer Giovanni da Verrazano (c. 1485–1528) discovered the North Carolina coast in 1524. The English courtier Sir Walter Raleigh (1554–1618) sponsored the famous “lost colony” at Roanoke , and in 1629 King Charles I began the settlement in earnest of the colony he called, after himself, “Carolana.” It was set up as a proprietorship. The colony of South Carolina split off from North Carolina in 1719.
In 1729 the proprietors relinquished their rights for money and land, and North Carolina became a royal colony. North Carolina's leaders hesitated before joining the Union, waiting until November 21, 1789, to ratify the U.S. Constitution. The delay helped stimulate the movement for the adoption of a Bill of Rights . North Carolina became the twelfth state.
The English established the first permanent settlement in South Carolina in 1670 under the supervision of the eight lord proprietors who were granted “Carolana” by King Charles II. The colonists settled at Albemarle Point on the Ashley River, and in 1680 they moved across the river to the present site of Charleston.
The original grant had made South Carolina a very large colony, but eventually the separate provinces of North Carolina and Georgia were established, making South Carolina small. The colonists overthrew the proprietors in 1719, and South Carolina voluntarily became a royal colony in 1729. South Carolina took an active part in the American Revolution and became the eighth state on May 23, 1788.
England assumed control of New Jersey after King Charles II granted a region from the Connecticut River to the Delaware River to his brother James, the duke of York. James deeded part of the land to his friends, Baron John Berkeley (1602–1678) and Sir George Carteret (c. 1610–1680), making New Jersey a proprietorship on June 23, 1664. It was later divided into two separate parts, East Jersey and West Jersey, only to be reunited in 1702 by Queen Anne (1665–1714). A royal governor was appointed in 1738. New Jersey played a pivotal role in the Revolutionary War and became the third state on December 18, 1787.
As a colony, New York had a checkered history. Originally founded as the Dutch colony of New Amsterdam in 1624, British forces conquered it in 1664. King Charles II of England gave the land to his brother James, the duke of York, who renamed the colony New York.
The presence of both Dutch and English colonists in the area created conflicts that haunted New York well into the eighteenth century. By the time of the American Revolution, however, these conflicts lessened, and new conflicts between patriots (Americans who broke from British rule) and Tories (Americans who were loyal to England; also known as Loyalists) replaced them. Because the British army controlled New York City during most of the war, the city became a haven for Loyalists. New York became the eleventh state on July 26, 1788.
William Penn, the Quaker proprietor of Pennsylvania who espoused pacifism, tolerance, and equality, was given broad powers to make laws and to run the colony as he saw fit. Penn, however, gave up his lawmaking powers and set up a form of representative government. Many immigrants came to this tolerant colony.
Pennsylvania's most famous patriot resident was the statesman, scientist, and philosopher Benjamin Franklin (1706–1790). The Declaration of Independence, which Franklin signed, was declared from Philadelphia. Pennsylvania was the second state to join the Union, on December 12, 1787.
The colony of Georgia was founded in 1732 by James Oglethorpe (1696–1785), a soldier, politician, and philanthropist who had been granted a charter to settle the territory by Great Britain. Named after King George II, Georgia was the last of the thirteen British colonies established in the United States.
Georgians were among the first colonists to sign the Declaration of Independence. Following the American Revolution Georgia was the fourth state overall and the first southern state to ratify the federal Constitution on January 2, 1788. |
The concept of a computer did not materialize overnight. Just as the growth and development of mature biological species took place in fits and starts over the ages, the computer also took thousands of years to mature.
Ancient people used stones for counting, made scratches on a wall or tied knots in a rope to record information, but all of these were manual computing techniques. Attempts to develop faster computing devices continued, and the first achievement was the abacus, the pioneering computing device made by man. Let us take a look at the development of the computer through its various stages.
Around 3000 years before the birth of Jesus Christ, the Mesopotamians quite unknowingly laid the foundation of the computer era. They developed the earliest form of bead-and-wire counting machine, which subsequently came to be known as the abacus. The Chinese improved upon the abacus so that they could count and calculate faster.
Napier's 'Logs' and 'Bones'
John Napier developed the idea of the logarithm. He used 'logs' to transform multiplication problems into addition problems. Napier's logs later became the basis for a well-known invention: the computing device known as the slide rule. Napier also devised a set of numbering rods known as Napier's Bones, with which he could perform both multiplication and division.
The idea of the logarithm, developed in 1614, notably reduced the tedium of repetitive calculations.
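The principle is easy to demonstrate; here is a minimal sketch in Python (the operands are made-up numbers):

import math

# Napier's insight: log(a * b) = log(a) + log(b), so a multiplication
# can be replaced by adding logarithms and looking up the antilog.
a, b = 347.0, 29.0                        # hypothetical operands

log_sum = math.log10(a) + math.log10(b)   # "add the logs"
product = 10 ** log_sum                   # antilog gives the product back

print(product)   # ~10063.0
print(a * b)     # 10063.0, for comparison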
Pascal's Adding machine
Blaise Pascal, a French mathematician, invented a machine in 1642 made up of gears which was used for adding numbers quickly. This machine, named the adding machine, was capable of addition and subtraction and worked on a clockwork principle. The adding machine consisted of numbered, toothed wheels, each with a unique place value; the rotation of the wheels controlled the addition and subtraction operations. The machine was also capable of transferring carries automatically.
Gottfried Leibnitz, a German mathematician, improved upon the adding machine and constructed a new machine in 1671 that was able to perform multiplication and division as well. This machine performed multiplication through repeated addition of numbers. Leibnitz's machine used stepped cylinders, each with nine teeth of varying lengths, instead of the wheels used by Pascal.
Joseph Jacquard began manufacturing punched cards at the end of the eighteenth century and used them to control looms in 1801. The entire weaving process was thus automatic and under the control of a program. With the historic invention of punched cards, the era of storing and retrieving information began, and it greatly influenced later inventions and advancements.
Babbage's Difference Engine
Charles Babbage, a professor of mathematics, developed a machine called the Difference Engine in the year 1822. This machine was expected to calculate logarithmic tables to a high degree of precision. The Difference Engine was designed to compute various mathematical functions: it evaluated polynomials by the method of finite differences, and its operation was an automatic, multistep process.
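To see the idea behind the Difference Engine, here is a small Python sketch (the polynomial is an assumed example): once the first row of differences is seeded, every further value of the polynomial is produced by additions alone.

# Assumed example polynomial: p(x) = 2x^2 + 3x + 1
def p(x):
    return 2 * x**2 + 3 * x + 1

value = p(0)                          # starting value: 1
d1 = p(1) - p(0)                      # first difference: 5
d2 = (p(2) - p(1)) - (p(1) - p(0))    # second difference: 4 (constant for a quadratic)

for x in range(6):
    print(x, value)                   # matches p(x): 1, 6, 15, 28, 45, 66
    value += d1                       # only additions from here on
    d1 += d2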
In 1887, an American named Herman Hollerith fabricated what Charles Babbage had dreamt of. He built the first electromechanical punched-card tabulator, which used punched cards for input, output and instructions. This machine was used by the American Census Department to compile its 1890 census data and completed the compilation in about 3 years, a task which had earlier taken around 10 years.
Mark-I
Prof. Howard Aiken in the U.S.A. constructed, in 1943, an electromechanical computer named Mark-I which could multiply two 10-digit numbers in 5 seconds, a record at that time. Mark-I was the first machine which could perform according to pre-programmed instructions automatically, without any manual interference. It was the first operational general-purpose computer.
There are three basic networking devices: a hub, a switch, and a router. They operate at the Physical, Data Link, and Network layers respectively. Nodes connected to a hub are in the same network and in the same collision domain. Nodes connected to a switch are in the same network, but not in the same collision domain. Nodes connected to a router are in different networks and in different collision domains. So, a switched network will protect you from collisions but not from network attacks.
Collision means that packets sent from two different nodes may cancel or interfere with each other. For example, if Andy sends a packet to Cornius at the same time that Bill sends a packet to Danny, the two packets may physically collide with each other. In other words, the nodes share the same medium.
From a security perspective, being protected from collisions means that you are protected from casual packet sniffing (though there are ways around this), and you can enjoy better bandwidth.
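A compact way to keep the distinctions straight is a small lookup table; the sketch below uses standard OSI layer names and reflects only what is stated above.

# Summary of the three devices (sketch; OSI layer names assumed).
devices = {
    "hub":    {"layer": "Physical (1)",  "splits_collision_domains": False, "splits_networks": False},
    "switch": {"layer": "Data Link (2)", "splits_collision_domains": True,  "splits_networks": False},
    "router": {"layer": "Network (3)",   "splits_collision_domains": True,  "splits_networks": True},
}

for name, props in devices.items():
    print(name, props)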
The first federal environmental act was the establishment of Yellowstone National Park in March 1872 in the territories of Montana and Wyoming. Instead of promoting the land for development, Congress and President Ulysses S. Grant declared that it should be "as a public park or pleasuring ground for the benefit and enjoyment of the people." As the first such preserve in the world, Yellowstone inaugurated an international national park movement that currently includes some 1,200 parks or preserves in 100 countries, including 391 in the United States.
The Scottish naturalist John Muir became an early advocate for preservation after his travels and scientific work convinced him that some natural areas need protection from human exploitation. Muir founded the Sierra Club in 1892 to that end and urged President Theodore Roosevelt to join the cause. Roosevelt, himself known as an ardent outdoorsman, eventually dedicated more than 150 million acres to national parks and forests, and founded the U.S. Forest Service, which manages forests for water and timber resources while protecting them for wildlife and recreation. The first chief of the Forest Service, Gifford Pinchot, promoted a "wise use" strategy of wilderness management that proposed, in contrast to Muir, that nature could be safely commercialized.
Another early American environmentalist, Aldo Leopold, called for a "land ethic" that recognized the value of the natural world as beyond financial. In 1924, due to Leopold's efforts as a Forest Service employee, the Gila National Forest in New Mexico became the world's first designated wilderness. This designation allows travel only by foot or horseback and bans any commercial activity except grazing in order to protect the usefulness of the wilderness for cleaning air and reducing climate change, as well as providing clean water, wildlife habitat, and natural recreational experiences.
Interest in environmental issues escalated again in the late 20th century due to the increasingly visible effects of human behavior on the environment. "Going green" is now an international movement addressing land and water conservation, air and water pollution, solid waste disposal, global warming, and biodiversity.
(The New York Times, ‘Smarter by Sunday – 52 Weekends of Essential Knowledge for the Curious Mind’)
Thinkfinity Lesson Plans
Subject: Arts, Language Arts, Social Studies
Title: The Meaning Behind the Mask
Description: In this lesson, from EDSITEment, students explore the cultural significance of masks, discuss the use of masks in stories, and then investigate the role masks play in ceremonies and on special occasions in various African cultures. After students have studied these masks, they are then given an opportunity to choose a familiar story and make simple masks to perform the story.
Thinkfinity Partner: EDSITEment
Grade Span: K, 1, 2
Help:Vector graphics tutorial
- 1 Introduction
- 2 Downloading Inkscape
- 3 Opening Inkscape for the first time
- 4 Shape tools
- 5 Path drawing tools
- 6 Other tools
- 7 Step by step drawing a picture
- 8 Saving your work for Wikipedia
- 9 See also
Welcome to this vector graphics tutorial! This tutorial is aimed at absolute beginners who are interested in getting started with vector graphics. One important point is that you can improve your SVG skills by testing and testing again: mastery comes with experience, experience comes with lots of practise.
What are Vector Graphics?
So what are vector graphics? Well let's start by looking at the alternative to vector graphics, bitmap graphics. With bitmap graphics the image is divided up into a grid of pixels. The computer holds information about those pixels, such as their colour and where they are in the image and from this information the computer can "draw" the image. Note there is no obvious way of seeing what the image will be until it is drawn. Vector graphics work in a completely different way. They define the image mathematically. The files contain instructions that state "draw a circle" or "draw a curve". It is (at least for very simple cases) possible to read these instructions and imagine what the image should look like.
Because vector graphics work in this way, they are ideally suited to the kinds of drawings that require simple shapes that can be mathematically described. Diagrams, logos, clipart, house plans and maps are all suitable drawings. Photographs are not.
Here are some example vector images to show you the kind of image that can be drawn with a vector graphic program.
Various file formats are used for vector graphics; the most common is Macromedia/Adobe Flash. However, Wikimedia prefers scalable vector graphics or SVG. SVG is an XML markup language for describing two-dimensional vector graphics. It is an open standard created by the World Wide Web Consortium. The editor we will be using in this tutorial is Inkscape.
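To get a feel for what those "instructions" look like, here is a minimal Python sketch that writes out a tiny SVG file containing a single circle (the markup shown is only an illustrative example):

# A minimal SVG document: the drawing "instructions" are readable XML
# elements such as <circle>. (Example markup only.)
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">\n'
    '  <circle cx="50" cy="50" r="40" fill="red" stroke="black" stroke-width="2.5"/>\n'
    '</svg>\n'
)

with open("example.svg", "w") as f:
    f.write(svg)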
We will use Inkscape for a couple of reasons. Firstly, it is free software. Secondly, it uses SVG natively. Many Linux distributions come with Inkscape already installed; if not, you can use your package manager to install it. For Windows and Mac, you need to go to the Inkscape website http://www.inkscape.org/ and download the installer. Just follow the instructions on the website.
Opening Inkscape for the first time
Now that you have installed Inkscape, let's open it up and take a look at the interface.
The layout is reasonably uncluttered. This is because Inkscape cleverly hides tools until you need them. There are tools down the left-hand side and at the top. There is information along the bottom.
The tools down the left hand side are used for creating drawings. They are the most important tools and we will look at all of them in this tutorial. The tools along the top are mostly used for modifying objects in a drawing. We will look at some, but not all of them in this basic tutorial.
The Rectangle drawing tool
Let us start our exploration of the side toolbar by looking at the rectangle tool.
Select the blue rectangle tool (you can tell it is selected because a button appears around it), then click and drag on the canvas to draw a rectangle. Do not worry if it is a different colour. We will worry about colours later.
Notice the control points at the corners? The square shaped ones will allow you to change the shape and size of the rectangle. The circular one will allow you to round the corners.
Notice also the info bar at the top? This changes with each tool selected. In the case of the rectangle tool it tells you the dimensions of the rectangle, and the rounding of the corners. You can change the numbers manually. This is handy if you need to draw a rectangle with exact dimensions, such as a flag.
Another thing to notice is the information at the bottom of the screen (next to the layers button). With most programs you can ignore this type of information; with Inkscape you should get into the habit of reading it as it does provide useful information or helpful tips on how to use the tool.
The Ctrl key
If you need to create a perfect square, hold down the Ctrl key on the keyboard while you click and drag. This key limits the rectangle formed to either a perfect square or a rectangle with integer ratio sides. The Ctrl key is useful for modifying many other operations. For example, if you hold it while moving an object, it will restrict the directions to horizontal or vertical only. Not sure what it will do in a particular circumstance? Hold it down and look at the information panel at the bottom of the screen and Inkscape will tell you.
The Ellipse, Polygon/Star and Spiral tools
These tools work in exactly the same way as the rectangle tool. Using the control key will restrict the ellipse tool to drawing perfect circles. Let's look at the polygon or star tool.
You can change the number of corners, the spoke ratio, roundness of the corners and the randomness manually by entering in the numbers on the info bar. You can also change the spoke ratio by dragging the control points on the polygon itself. Set the numbers the same as is shown above - we will need this rounded triangle shape later on in the tutorial.
Path drawing tools
The next three buttons as we go downwards are used for drawing paths. A path is a mathematical curve that is specified by a number of points that the path must curve through. Taking them out of order, let's look at the middle one, the bezier curve tool.
Bezier Curve tool
Above is an example of a bezier curve. You can see two nodes in this view. These are the start and end nodes of the curve. But there are other nodes that you cannot see. These determine how the line curves between the end nodes. In order to see those nodes you need to click on the node button.
Clicking on the node tool reveals another node in the middle of the curve.
If you then click on the middle node you will see the bezier handles appear.
These handles allow you to change the shape of the curves between the nodes. Notice that a list of node tools appear at the top. You can use these to change the nodes. We will not go into detail about these tools in this beginners tutorial.
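For the curious, the mathematics behind those handles is the cubic Bezier formula: the two end nodes and the two handle points determine every point of the segment. A small Python sketch with made-up coordinates:

# Cubic Bezier: end nodes p0, p3 and handle points p1, p2.
def bezier(p0, p1, p2, p3, t):
    x = ((1 - t)**3 * p0[0] + 3 * (1 - t)**2 * t * p1[0]
         + 3 * (1 - t) * t**2 * p2[0] + t**3 * p3[0])
    y = ((1 - t)**3 * p0[1] + 3 * (1 - t)**2 * t * p1[1]
         + 3 * (1 - t) * t**2 * p2[1] + t**3 * p3[1])
    return x, y

p0, p1, p2, p3 = (0, 0), (30, 80), (70, 80), (100, 0)   # hypothetical points
curve = [bezier(p0, p1, p2, p3, t / 10) for t in range(11)]
print(curve)    # moving p1 or p2 reshapes the whole segment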
The node tool can be used on all the objects created with the other tools to reveal their nodes.
More on the drawing tools
The other two drawing tools also create paths. The top tool of the three is the scribble tool. Use it like a pencil. The computer will calculate all the nodes and beziers for you. Closed paths can be created by drawing a loop. (With the bezier tool, click on the start node to close the curve).
The last tool in the group is the calligraphy tool. It allows you to do calligraphic writing. The pen creates closed loops in a realistic pen nib like way. Because of this many graphic artists like to draw with this pen all the time.
The Selection tool
This tool allows you to select objects, resize them and move them about. If you click twice on an object with the selection tool, the handles change and you are able to rotate an object.
The Zoom tool
This tool looks like a magnifying glass with a + sign in it. Drag the tool over an area to zoom in. Shift and click to zoom out again.
The Text tool
This tool looks like a letter A. Click and drag an area where you want your text to go, then start typing! If the letters are too small, click on the select button, then drag the handles to make it bigger. There are a lot of things that you can do with text that are beyond the scope of this beginners tutorial.
The Connector tool
Use this to draw a connection between two objects. For example, a drawing and a label.
The nice thing about using this tool is that if you decide to move the objects about on the canvas, the connector still maintains the connection.
The Gradient tool
Click and drag on an object and you will create a gradient from full colour to full transparency.
Colour Sampler tool
Use this tool to sample a colour on the screen.
Step by step drawing a picture
There are of course lots of other tools. We haven't even mentioned two tools along the top toolbar, and some of these are essential for a beginner to know. The best way to actually learn about these is in the course of using them. So we are now going to proceed to draw a simple picture using Inkscape. The image we are going to draw is a No Drinking / No Alcohol sign similar to this No Smoking sign. It will consist of a bottle of wine with a wineglass and a red No Entry symbol over them.
Setting up the page
The first thing we need to do is set up the workspace.
- We need a square canvas so go to File-->New-->CD Cover.
- We are going to create two layers. One layer for the bottle and wine and one for the red barred circle thing. Doing this will not affect the picture at all; it just makes it easier to work.
- On the menu bar go to Layer-->Rename Layer, then call the current layer Wine.
- Now go Layer-->Add Layer, then call this new layer No!
We will draw the circle in the layer we called No!. The first thing we will do is lock the Wine layer so that it cannot be accidentally edited.
Look at the bottom of the window and find the layer tool. It should read No!. Click on the downwards pointing triangle and select Wine. It will change to bold once you select it. Now do you see the black open padlock icon just to the left of the layer tool? Click on that to lock the layer. Now we cannot edit this layer for the time being. Click on the layer tool and select No! again.
We are now ready to start drawing.
Drawing the barred circle
Click on the ellipse tool and hold down the Ctrl key so that you draw a perfect circle. Make it reasonably large. We can change the size later.
Changing the colour
We need to set the colour to red and the outline to black. We do this with the fill and stroke tool. The button is located on the top toolbar and looks like this . Clicking on it will bring up the fill and stroke box, where you can edit the fill colour and transparency plus the stroke colour, weight, style and transparency.
On the above screenshot, the fill and stroke button is on the top toolbar next to a letter A. You can change the colour using several methods, but the easiest one to choose is wheel. Set the fill colour to bright red and make sure the A channel is set to 255 and the master opacity is set fully on at 1.000. These control the transparency, but we want a fully opaque image. Set the stroke colour to black and the stroke width (it's under the style tab) to 2.5 pixels.
Creating a ring shape
We are going to create a ring shape by making another circle inside the first one and then subtracting that shape from first using Inkscape's "combine paths" tools. To use the tools, the circle needs to be converted from an object to a path.
Go to the menu bar and select:
- Path-->Object to Path
Nothing will appear visually to have happened to the circle, but you will know it is now a path because the info at the bottom of the window will say so.
Now we will copy and paste a second circle.
- Edit-->Copy, then Edit-->Paste in Place
Now we have two circle shaped paths, one on top of the other. Click on the top one, we are going to make it smaller. There are several ways of doing this but we are going to use the inset tool (in order to show you how to use it). This tool cannot be used on shapes, only on paths, which is why we had to convert the circle to a path.
- Path-->Inset
You will see the top circle get a tiny bit smaller. In this tutorial so far, keyboard shortcuts have been avoided. However in this case, we will need to repeat the process several times and therefore using the shortcut will save a lot of time, so:
- Press Ctrl and ( together repeatedly until the inner circle is the correct looking size.
Select both circles by dragging the select tool over them both, or by holding down the Shift key and clicking on each of them. Now go to the menu bar and select:
- Path-->Difference
This combines the two paths into one path by subtracting the smaller circle (which is on top) from the larger circle.
Creating a bar over the ring
Draw a red rectangle the same width as the circle and height to match the thickness of the ring. Select both the circle and the rectangle by dragging the select tool over them both, then choose:
- Object-->Align and Distribute
Align the rectangle so that it is in the center of the ring.
We are going to add the two paths together to form a single path. Make sure both objects are selected, then go to:
- Path-->Union
Finally we will rotate our object by 45 degrees clockwise.
- Object-->Transform, then select the Rotate tab and enter -45 into the Degrees box.
The final barred ring will look like this:
Drawing the wine glass
The first thing we need to do is lock the No! layer, then click on the picture of an eye next to the padlock button at the bottom of the window to hide that layer. Now unlock the Wine layer.
- Draw a rounded triangle with the star tool.
- Resize the triangle to make a base of the wine glass.
- Now draw a rectangle for the stem and an ellipse for the bowl.
- Select all three objects and use:
- Object-->Align to align them all vertically.
Combine the paths of all three, then set the stroke width to 1, fill colour to grey and the A (alpha) transparency to around a third (play around with it until you are happy).
Now we will cut off the top of the glass. Draw a rectangle, large enough to cover the top half of the bowl.
To cut off the top of the ellipse, choose:
- Path-->Difference
You now have a wine glass.
Want to add some wine? Draw an ellipse and cut most of it off using the rectangle trick above. Select fill and stroke but choose gradient fill. Edit the gradient fill and choose a wine colour for one end of the gradient and white for the other. (We are going to create a sheen on this wine). Select reflected for the repeat. Finally use the node tool to see the nodes of the gradient, then drag the white node to the right and down a little to create the required gradient.
Saving your work for Wikipedia
If you haven't done so already:
- Edit-->Select All, and maybe Group using the group icon.
Using high magnification, check that all your transitions from adjoining layers line up. Check your text objects are well separated, as they may be rendered differently by the end user's browser.
Now you need to minimise the saved image size so that it isn't surrounded by a lot of white space on the Wiki page. Then enter the Save dialogue:
- File-->Document Properties-->Page Tab-->Fit Page to Selection Button (click)
- File-->Save As
Life is made a lot easier if you spend a few moments adding your working folder to the drop list on the left. Don't close Inkscape, you still need it.
The file must be uploaded to Commons or Wikipedia, using the Upload File dialogue on the left. Again, life is simpler if you keep all the files in one folder - the working folder. When you have completed the standard boxes and saved the file, it will come up on a page of its own.
This needs to be checked against your original still open in Inkscape. Check the text objects - almost certainly the font will have changed. If it is not acceptable, you will need to make changes in Inkscape, then re-save the file under the same name - this can be an iterative process.
Add categories so your image may be found.
- Note: When saving, select "Plain svg" rather than the default "Inkscape svg". This has been suggested as a way to get a more stable image.
- Note: If you wish to include a .svg in a standard HTML web page, use the <embed> tag rather than an <img> tag, as browsers consider an .svg as a piece of XML rather than an image. |
The Berlin Tunnel
During the Cold War, monitoring the Soviet Union and its influence worldwide was the top priority for the CIA. In the 1950s, before reconnaissance satellites and other sophisticated collection systems were operational, wiretaps were one of the important technical means for collecting intelligence about Soviet military capabilities. The challenge was where and how to best conduct such wiretap operations.
Berlin was the center of a vast communications network from France to deep within Russia and Eastern Europe. At the time, almost all Soviet military telephone and telegraph traffic between Moscow, Warsaw, and Bucharest was routed through Berlin over land lines strung overhead from poles and buried underground. In a joint effort, the CIA and British Secret Intelligence Service (MI-6) assessed that tapping into underground communication lines in the Soviet sector of Berlin offered a good source for Soviet and East German intelligence. Tunneling from West Berlin to the underground cables in nearby East Berlin was judged to be feasible and hidden from visual surveillance.
Director of Central Intelligence Allen Dulles approved the covert tunneling and tapping operation in January 1954. Work began the following month using a US Air Force radar site and warehouse in West Berlin as cover for the construction.
Construction took a year. Tunnelers removed 3,100 tons of soil (enough to fill 20 average American living rooms) and used 125 tons of steel plate and 1,000 cubic yards of grout. The finished tunnel was 1,476 feet long. British technicians installed the taps, and collection began in May 1955.
Unknown at the time to the CIA and MI-6, the KGB—the Soviet Union’s premier intelligence agency—had been aware of the project from its start. George Blake, a KGB mole inside MI-6, had apprised the Soviets about the secret operation during its planning stages. But to protect Blake, the KGB allowed the operation to continue until April 1956 when they “accidentally discovered” the tunnel while supposedly repairing faulty underground cables—without putting Blake at risk. The Soviets planned the discovery in hopes of winning a propaganda victory by publicizing the operation. But their plan backfired when, instead of condemning the operation, most press coverage marveled at the audacity and technical ingenuity of the operation.
The taps produced enormous amounts of data for almost a year:
- 50,000 reels of tape
- 443,000 fully transcribed conversations
- 40,000 hours of telephone conversations
- 6,000,000 hours of teletype traffic
- 1,750 intelligence reports.
Following the tunnel’s shutdown, processing of this immense volume of data took more than two years to complete.
Subsequent studies determined that the Soviets had not attempted to feed false information over the tapped lines—the intelligence that had been collected was genuine. Despite the KGB’s foreknowledge, CIA ruled this most ambitious operation a success—it yielded valuable intelligence for US policymakers and war fighters, including:
- Detailed order of battle and information on activities of Soviet and Warsaw Pact forces
- Identification of people working on Soviet atomic energy projects
- Early warning of the Soviet’s establishment of an East German army
- The poor condition of East German railways
- Resentment between Soviets and East Germans
- Great tension in Poland
- Soviet inaction regarding a military invasion of Western Europe. |
The evolution of similar traits in different species, a process known as convergent evolution, is widespread not only at the physical level, but also at the genetic level, according to new research led by scientists at Queen Mary University of London and published in Nature this week.
The scientists investigated the genomic basis for echolocation, one of the most well-known examples of convergent evolution to examine the frequency of the process at a genomic level.
Echolocation is a complex physical trait that involves the production, reception and auditory processing of ultrasonic pulses for detecting unseen obstacles or tracking down prey, and has evolved separately in different groups of bats and cetaceans (including dolphins).
The scientists carried out one of the largest genome-wide surveys of its type to discover the extent to which convergent evolution of a physical feature involves the same genes.
They compared genomic sequences of 22 mammals, including the genomes of bats and dolphins, which independently evolved echolocation, and found genetic signatures consistent with convergence in nearly 200 different genomic regions concentrated in several 'hearing genes'.
To perform the analysis, the team had to sift through millions of letters of genetic code using a computer program developed to calculate the probability of convergent changes occurring by chance, so they could reliably identify 'odd-man-out' genes.
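As a rough illustration of the kind of null model involved (this is not the team's actual program), one can ask how often two independent lineages would hit the same amino-acid replacement at a site purely by chance:

import random

# Toy null model: two lineages independently substitute an ancestral
# amino acid; how often do they pick the same replacement by chance?
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TRIALS = 100_000

def random_substitution(ancestral):
    return random.choice([a for a in AMINO_ACIDS if a != ancestral])

matches = sum(
    random_substitution(anc) == random_substitution(anc)
    for anc in (random.choice(AMINO_ACIDS) for _ in range(TRIALS))
)
print(matches / TRIALS)   # about 1/19, i.e. roughly 0.05 per substituted site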
They used a supercomputer at Queen Mary's School of Physics and Astronomy (GridPP High Throughput Cluster) to carry out the survey.
Consistent with an involvement in echolocation, signs of convergence among bats and the bottlenose dolphin were seen in many genes previously implicated in hearing or deafness.
"We had expected to find identical changes in maybe a dozen or so genes but to see nearly 200 is incredible," explains Dr Joe Parker, first author on the paper.
"We know natural selection is a potent driver of gene sequence evolution, but identifying so many examples where it produces nearly identical results in the genetic sequences of totally unrelated animals is astonishing."
Dr. Georgia Tsagkogeorga, who undertook the assembly of the new genome data for this study, added: "We found that molecular signals of convergence were widespread, and were seen in many genes across the genome. It greatly adds to our understanding of genome evolution."
Group leader, Dr Stephen Rossiter, said: "These results could be the tip of the iceberg. As the genomes of more species are sequenced and studied, we may well see other striking cases of convergent adaptations being driven by identical genetic changes."
'Genome-wide signatures of convergent evolution in echolocating mammals' is published in the journal Nature on 04 September 2013. The article is available from this link: dx.doi.org/10.1038/nature12511 |
Recent summer melts have left lots of the ocean exposed to sunlight.
In early 2011, the US and Europe froze, even as Greenland and Alaska experienced unusual periods of warmth. This year, the US and Europe were baking as the winter drew to an end, even as cold air hovered over Central Europe and Asia. In the Northern Hemisphere, extreme winter weather tends to be associated with the negative phase of the Arctic Oscillation, a wind pattern that dominates the polar region. And a consensus is building that changes in the Arctic may have permanently placed the Oscillation in the negative mode, leading to stable changes in the winters of the Northern Hemisphere. Cornell professor Charles H. Greene has just published a review of this idea, and we talked with him about what the warming Arctic might mean for the US and Europe.
Greene's paper describes a key determinant of the Northern Hemisphere's winter weather: the Arctic Oscillation. When that is in its positive phase, a strong set of winds called the Polar Vortex forms. These winds help trap Arctic air masses at the pole, keeping the cold out of the mid-latitudes. This also allows the jet stream to take a more direct route around the globe, moderating the weather.
But over the last few years, the Oscillation has been strongly negative; in fact, in 2010, we saw a record for the most strongly negative period we'd ever recorded. During this phase, the winds of the Polar Vortex weaken, allowing the cold Arctic air to intrude or mix into the air at lower latitudes. As a result of this, Greene told Ars two things happen to the jet stream: it gets substantially weaker, and it tends to meander widely from north to south as it traverses the globe. This can lead to the severe chills the US and Europe have experienced over the past several winters, but the meandering jet stream can also draw warmer southern air north, as happened in the US this spring. |
From eons to seconds, proteins exploit the same forces
Nature’s artistic and engineering skills are evident in proteins, life’s robust molecular machines. Scientists at Rice Univ. have now employed their unique theories to show how the interplay between evolution and physics developed these skills.
A Rice team led by biophysicists Peter Wolynes and José Onuchic used computer models to show that the energy landscapes that describe how nature selects viable protein sequences over evolutionary timescales employ essentially the same forces as those that allow proteins to fold in less than a second. For proteins, energy landscapes serve as maps that show the number of possible forms they may take as they fold.
The researchers calculated and compared the folding of natural proteins from front to back (based on genomic sequences that form over eons) and back to front (based on the structures of proteins that form in microseconds). The results offer a look at how nature selects useful, stable proteins.
In addition to showing how evolution works, their study aims to give scientists better ways to predict the structures of proteins, which is critical for understanding disease and for drug design.
The research reported in the Proceedings of the National Academy of Sciences shows that when both of the Rice team’s theoretical approaches—one evolutionary, the other physics-based—are applied to specific proteins, they lead to the same conclusions for what the researchers call the selection temperature that measures how much the energy landscape of proteins has guided evolution. In every case, the selection temperature is lower than the temperature at which proteins actually fold; this shows the importance of the landscape’s shape for evolution.
The low selection temperature indicates that as functional proteins evolve, they are constrained to have “funnel-shaped” energy landscapes, the scientists wrote.
Folding theories developed by Onuchic and Wolynes nearly two decades ago already suggested this connection between evolution and physics. Proteins that start as linear chains of amino acids programmed by genes fold into their three-dimensional native states in the blink of an eye because they have evolved to obey the principle of minimal frustration. According to this principle, the folding process is guided by interactions found in the final, stable form.
Wolynes used this fundamental law to conceptualize folding in a new way. The top of his folding funnel represents all of the possible ways a protein can fold. As individual stages of the protein come together, the number of possibilities decreases and the funnel narrows and eventually reaches its functional native state.
A funnel’s rugged landscape is different for every protein. It shows smooth slopes as well as outcroppings where parts of a protein may pause while others catch up, and also traps that could cause a protein to misfold.
“The funnel shows that the protein tries things that are mostly positive rather than wasting time with dead ends,” Wolynes said. “That turns out to resolve what was called Levinthal’s paradox.” The paradox said even a relatively short protein of 100 amino acids, or residues, that tries to fold in every possible way would take longer than the age of the universe to complete the process.
That may be true for random sequences, but clearly not for evolved proteins, or we wouldn’t be here. “A random sequence would go down a wrong path and have to undo it, go down another wrong path, and have to undo it,” said Wolynes, who in his original paper compared the process to a drunken golfer wandering aimlessly around a golf course. “There would be no overall guidance to the right solution.”
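A back-of-the-envelope version of Levinthal's estimate makes the point; every number below is an illustrative assumption, not a measured value.

residues = 100
conformations_per_residue = 3      # assumed rotational states per residue
sampling_rate = 1e13               # assumed conformations sampled per second
age_of_universe_s = 4.3e17         # roughly 13.8 billion years in seconds

total = conformations_per_residue ** residues        # ~5e47 conformations
search_time = total / sampling_rate                  # seconds for a blind search
print(search_time / age_of_universe_s)               # ~1e17 ages of the universe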
So the funnel is a useful map of how functional proteins reach their destinations. “The only way to explain the funnel’s existence is to say that sequences are not random, but that they’re the result of evolution. The key idea of the energy landscape (depicted by the funnel) only makes sense in the light of evolution,” he said.
While Onuchic and Wolynes have been advancing their theories for decades, only recently has it become possible to test their implications for evolution using two very different approaches they developed on the shoulders of their previous work.
One of the algorithms they employ at Rice’s Center for Theoretical Biological Physics (CTBP) is called the Associative Memory, Water-Mediated, Structure and Energy Model (AWSEM). Researchers use AWSEM to reverse-engineer the folding of proteins whose structures have been captured by the century-old (but highly time-consuming) process of x-ray crystallography.
The other model, direct coupling analysis (DCA), takes the opposite path. It begins with the genetic roots of a sequence to build a map of how the resulting protein folds. Only with recent advances in gene sequencing has a sufficiently large and growing library of such information become available to test evolution quantitatively.
“Now we have enough data from both sides,” Wolynes said. “We can finally confirm that the folding physics we see in our structure models matches the funnels from the evolutionary models.”
The researchers chose eight protein families for which they had both genomic information (more than 4,500 sequences each) and at least one structural example to implement their two-track analysis. They used DCA to create a single statistical model for each family of genomic sequences.
The key is the selection temperature, which Onuchic explained is an abstract metric drawn from a protein’s actual folding (high) and glass transition (low) temperatures. “When proteins fold, they are searching a physical space, but when proteins evolve they move through a sequence space, where the search consists of changing the sequence of amino acids,” he said.
“If the selection temperature is too high in the sequence space, the search will give every possible sequence. But most of those wouldn’t fold right. The low selection temperature tells us how important folding has been for evolution.”
“If the selection temperature and the folding temperature were the same, it would tell us that proteins merely have to be thermodynamically stable,” Wolynes said. “But when the selection temperature is lower than the folding temperature, the landscape actually has to be funneled.”
“If proteins evolved to search for funnel-like sequences, the signature of this evolution will be seen projected on the sequences that we observe,” Onuchic said. The close match between the sequence data and energetic structure analyses clearly show such a signature, he said, “and the importance of that is enormous.”
“Basically, we now have two completely different sources of information, genomic and physical, that tell us how protein folding works,” he said. Knowing how evolution did it should make it much faster for people to design proteins “because we can make a change in sequence and test its effect on folding very quickly,” he said.
“Even if you don’t fully solve a specific design problem, you can narrow it down to where experiments become much more practical,” Onuchic said.
“Each of these methods has proved very useful and powerful when used in isolation, and we are just starting to learn what can be achieved when they are used together,” said Nicholas Schafer, a Rice postdoctoral researcher and co-author. “I’m excited to be participating in what I think will be an explosion of research and applications centered around these kinds of ideas and techniques.”
Source: Rice Univ. |
The First Fleet: the process of colonisation
Britain transported its criminals from its overcrowded jails to the British colonies in the Americas, until the American Revolution (which lasted from 1775 to 1783). After the Revolution, the United States refused to accept prisoners, so Britain had to find another place to send them. Joseph Banks suggested Botany Bay, and this was accepted. Settlement of Australia would not only be a place to send prisoners but would keep rival powers, such as France, away from Australia. Captain Arthur Phillip was chosen to command the convict fleet, as he had experience transporting African slaves. The fleet, known as the First Fleet, set sail for Botany Bay on 13 May 1787.
The First Fleet
The First Fleet consisted of 11 ships and about 1500 people in all. There were over 700 convicts, 290 marines, 400 sailors and some women and children. On the way, the fleet stopped at Tenerife (Canary Islands), Rio de Janeiro, and the Cape of Good Hope to pick up food, animals, plants and other supplies before heading to Botany Bay. The fleet landed at Botany Bay between 18 and 20 January 1788.
It was the middle of summer, so there was little fresh water or fertile soil at Botany Bay. Captain Phillip decided to take some crew and sail north to find a better location. They found the clear waters of a protected harbour that Phillip named Sydney after the British Home Secretary, Lord Sydney. On 26 January 1788 (Australia Day), Captain Arthur Phillip and a group of officers and marines landed in Sydney Cove and raised the Union Jack (the British flag) to proclaim New South Wales as a British colony.
Establishing a colony
On 27 January 1788, the male convicts began to arrive and started to clear the trees, put up tents, unload stores and animals, and sow vegetable seeds and corn. On 6 February 1788, the female convicts arrived from Botany Bay and the colony was established.
Captain Phillip became the governor of the colony and began to establish permanent structures and farms. Huts, storehouses, a hospital and a church were built and a brick residence was constructed for the governor, called Government House. In November of 1788 a new settlement was founded at Parramatta, where the soil was more fertile. Another settlement was soon established at Toongabbie. Norfolk Island was also settled so that timber and flax (to make sails) from the island could be used in the new colony.
The first years were very hard and the colony almost failed. The first harvest came to nothing and food had to be strictly rationed. Governor Phillip sent HMS Sirius to the Cape of Good Hope for more supplies. In June 1790, the Second Fleet arrived with more convicts and food supplies, and in 1791 the Third Fleet arrived. Food was still in short supply, but by 1792 the colony was well-established. Trading ships were starting to visit Sydney and the whaling industry had begun. Sheep were being imported to grow wool, and released convicts were taking up farming. The colony of New South Wales was starting to grow.
The first settlers and the Indigenous peoples
The region around Sydney Cove was not uninhabited or unoccupied, as the British had declared. Its land belonged to the Eora and Dharug peoples. When the Union Jack was raised on 26 January 1788, all Indigenous land had been declared British territory. In addition, all Indigenous people had been made British subjects and would be expected to obey the laws of Great Britain. This was despite the fact that Indigenous people had their own laws, considered the land an essential part of their lives, and had their own families, clans and language groups.
The arrival of the British was the start of a process which resulted in Indigenous groups losing their land, their hunting grounds and their way of life. Contact with the British brought diseases such as smallpox that Indigenous peoples had never known before. These diseases killed thousands and thousands of Indigenous people. There was also competition between the British and Indigenous peoples for clean water and food. The British settlers cut down trees, destroyed sacred sites, stole weapons and rapidly extended their control of the land.
The British settlement of Australia has become known as the European invasion of Australia. In the following chapters the effects of the British colonisation on the Indigenous peoples will be explored. |
Emulating Charles Dickens's writing style has been attempted by many, for only rarely can a writer combine lengthy descriptive narrative, witty prose and extreme poverty. "A Toad's Narrative: The Unique Realism of Charles Dickens," The Journal of Narrative Technique 13.2 (1983): 59-73. Charles Dickens was a public man and a famous man; the narrative style is slightly more ironic than earlier, but neither the narrator.
In order to respond to this question I will turn to Charles Dickens's novel David Copperfield, long considered Dickens's fictionalized account of his own passage. Learn a little more about the writing style of Charles Dickens to gain a better understanding; Dickens alludes to many London landmarks in his narrative. The narrative style is complex: Dickens can switch from a childish description to an adult comment and back again within a sentence. The opening chapters are.
In the epistolary novel the narrative is conveyed entirely by an exchange of letters (e.g. Walter Scott, Ivanhoe; Charles Dickens, A Tale of Two Cities). This chapter deals with the significance of narrative technique in fiction; the focus, for example, is on Charles Dickens's David Copperfield and the use of in medias res. There are narrative techniques in English literature that are used by Dickens in Hard Times. Charles Dickens was born 200 years ago today, and The Pickwick Papers lives on thanks to its stylistic and narrative innovation.
Bleak House and Victorian Art and Illustration: Charles Dickens's Visual Narrative Style, Donald H. Ericksen. In a letter to John Forster, his friend and biographer. A secondary school revision resource for GCSE English Literature about the context of Charles Dickens's Great Expectations. I wanted to take a closer look at some of the techniques Dickens uses to build his narrative; his approach to characterisation is simple, but. Dickens uses the first-person point of view in Great Expectations; this helps us to see things from Pip's perspective, to relate more easily to the events he relates. An examination of Dickens's narrative technique in Great Expectations, Oliver Twist, and David Copperfield.
Charles Dickens uses narrative techniques to help readers understand the characters and important themes in his novel Great Expectations, published in 1860. Therefore we are able to employ corpus linguistic techniques systematically, to show that literary narrative fiction can be defined not by event but by character; the character of Mr Dick in Charles Dickens's David Copperfield (1850) is one example. A Christmas Carol is divided into five chapters, and Dickens called each chapter a 'stave'.
Britannica Classics: Early Victorian England and Charles Dickens. A strong narrative impulse and a prose style that, if here overdependent on a few devices. This book takes a fresh look at childhood in Dickens's works and in Victorian England, and at how the rise of scientific inquiry shaped his narrative techniques and aesthetic imagination. Narrative techniques of Charles Dickens in Oliver Twist and David Copperfield: "Whatever I have tried to do in life, I have tried with all my heart."
A narrative technique is any of several specific methods the creator of a narrative uses to tell a story. An example of this is in the first chapter of Great Expectations by Charles Dickens: a man who had been soaked in water, and smothered in mud. A list of important facts about Charles Dickens's Oliver Twist; with hypocritical or morally objectionable characters, the narrative voice is often ironic. Speaking in strictly functional terms, the dual narrative in Charles Dickens's version of the story is a technique still used nowadays by journalists.
Faculty of Philology, Department of English Language and Literature, diploma thesis: Charles Dickens. Charles Dickens, an English writer, used realism in works such as A Tale of Two Cities. In contrast to the idealism of the Romantics, realism became a common writing style of the period. My approach to realism has mainly to do with narrative method, though it must encompass the abundant use of exaggeration and nonrealist techniques, perhaps most notably in Charles Dickens's Our Mutual Friend and Victor Hugo's novels.
1) two assumptions of the biological approach
- assumption 1: behaviour can be explained in terms of the brain.
- for example, the cerebral cortex covers the surface of the brain and is divided into four lobes: the frontal lobe, occipital lobe, parietal lobe and temporal lobe.
- different regions of the brain are localised/specialised for different functions.
- e.g. the frontal lobe is responsible for thinking and fine motor movement
- the occipital lobe is responsible for sight and vision
- assumption 2: behaviour can be explained in terms of hormones.
- hormones are biochemical substances that are produced in one part of the body
- e.g. pituitary glands and adrenal glands
- examples of hormones are testosterone, which is a male hormone, and oestrogen, which is a female hormone.
2) two assumptions of the behaviourist approach
- assumption 1: behaviour can be explained in terms of operant conditioning
- operant conditioning: new behaviour can be learnt through reinforcement, either positive or negative
- Skinner demonstrated this with his rats and pigeons (the Skinner box).
- he made them repeat their behaviour through being rewarded with… |
Shifts in perceptions of time and space occurred during World War II when Nazi Germany implemented the Enigma machine to relay all sorts of communications within its military. An Enigma machine is operated through circuits of rotating wheels, which scramble each typed letter onto a different contact on the other side; three to four wheels of letters are generally used. German secret services used Enigma to transmit instructions (and quality bantz?) in the field, at sea and in the air (source). The speed and distance at which the Germans were able to communicate vital information revolutionised the second world war and the communications that followed it.
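A drastically simplified, single-rotor sketch in Python gives the flavour of the scrambling (the real machine used three to four rotors plus a reflector and plugboard, and the wiring below is an arbitrary example):

import string

ALPHABET = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"   # example wiring, one contact per letter

def encode(message, offset=0):
    out = []
    for ch in message.upper():
        if ch in ALPHABET:
            out.append(ROTOR[(ALPHABET.index(ch) + offset) % 26])
            offset += 1                # the wheel steps after every letter
        else:
            out.append(ch)
    return "".join(out)

print(encode("ATTACK AT DAWN"))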
The Germans were convinced their Enigma code was impossible to break. However Alan Turing, nowadays known as the father of modern computing, led a team of mathematicians at Bletchley Park during the second world war (source). They devised a machine designed to break the Enigma code, which it eventually succeeded in doing and thus helped bring the war to an end. Turing's team were able to intercept messages which were intended to stay between German military groups, allowing the Allies to plan battles and deploy troops as necessary. Whilst still in college, Turing had devised the idea of a machine with a capacity to think for itself equal to a human's. He returned to this idea after the war and began development of what has been credited as the first digital computer (source).
Decades after his suicide in 1954 (two years after being chemically castrated for the crime of homosexuality), Turing has been credited as the father of modern computing and cryptography, which has no doubt led to the creation of the cyberspace we live in today.
Today, a mere sixty years after Turing’s death, the world has morphed into ‘a global nervous system‘; each node (for example a computer) is a nerve on a complex body which is capable of transmitting signals at a rapid speed and quantity. Geographical borders have been effectively obliterated today, and ideologies of time and space have been redefined.
Have a go at this enigma machine simulation here. |
DAY/TIME: MTH / 7:30 – 12:00 DATE: 06 / 19 /14
TITLE: INTRODUCTION TO EXPERIMENTATION
Introduction to experimentation aims to familiarize the students with some of the logic of research. The materials used were pencil and paper, and a stop watch with a second hand. The procedure of the experiment was as follows: the experimenter (E) instructed the subject (S) to write the alphabet backward (from Z to A) as rapidly as possible. There were 5 trials of 30 seconds each with a one-minute rest between trials. After the first trial, S reported orally the number of letters written and estimated the number expected in the second trial. After the second, third and fourth trials, S reported the number estimated, the number achieved and the number estimated for the next trial. After the fifth trial only the estimated and achieved scores were reported. The subject was female, 18 years old and a BS Psychology major. It was found that the participant's achieved score rose and fell across trials, while the group mean revealed that the majority of the respondents got a perfect achieved score in the fifth trial. It was concluded that practice, conditioning and focus influence the learning processes of an individual, and that the subject maintained the chunks of memory used in writing the alphabet in a backward manner.
An experiment is an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis. Controlled experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. A child may carry out basic experiments to understand the nature of gravity, while teams of scientists may take years of systematic investigation to advance the understanding of a phenomenon. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists that hope to discover information about subatomic particles). In the scientific method, an experiment is an empirical method that arbitrates between competing models or hypotheses. Experimentation is also used to test existing theories or new hypotheses in order to support them or disprove them. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. Similarly, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control the possible confounding factors: any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific control and/or, in randomized experiments, through random assignment. In engineering and other physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields will focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon. In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition where one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study will...
Speed reading is any of several techniques used to improve one's ability to read quickly. Speed reading methods include chunking and eliminating subvocalization. The many available speed reading training programs include books, videos, software, and seminars.
Psychologists and educational specialists working on visual acuity used a tachistoscope to conclude that, with training, an average person could identify minute images flashed on the screen for only one five-hundredth of a second (2 ms). Though the images used were of airplanes, the results had implications for reading.
It was not until the late 1950s that a portable, reliable and convenient device would be developed as a tool for increasing reading speed. Evelyn Wood, a researcher and schoolteacher, was committed to understanding why some people were naturally faster at reading and tried to force herself to read very quickly. In 1958, while brushing off the pages of a book she had thrown down in despair, she discovered that the sweeping motion of her hand across the page caught the attention of her eyes, and helped them move more smoothly across the page. She then used the hand as a pacer. Wood first taught the method at the University of Utah, before launching it to the public as Evelyn Wood's Reading Dynamics in Washington, D.C. in 1959.
Skimming is a process of speed reading that involves visually searching the sentences of a page for clues to meaning. For some people, this comes naturally, but is usually acquired by practice. Skimming is usually seen more in adults than in children. It is conducted at a higher rate (700 words per minute and above) than normal reading for comprehension (around 200-230 wpm), and results in lower comprehension rates, especially with information-rich reading material.
Meta guiding is the visual guiding of the eye using a finger or pointer, such as a pen, in order for the eye to move faster along the length of a passage of text. It involves drawing invisible shapes on a page of text in order to broaden the visual span for speed reading. For example, an audience of customers at a speed reading seminar will be instructed to use a finger or pen to make these shapes on a page and told that this will speed up their visual cortex, increase their visual span to take in the whole line, and even imprint the information into their subconscious for later retrieval. It has also been claimed to reduce subvocalization, thereby speeding up reading. Because this encourages the eye to skim over the text, it can reduce comprehension and memory, and lead to missing important details of the text. An emphasis on viewing each word, albeit briefly, is required for this method to be effective.
Computer programs are available to help instruct speed reading students. Some programs present the data as a serial stream, since the brain handles text more efficiently by breaking it into such a stream before parsing and interpreting it. The 2000 National Reading Panel (NRP) report (p. 3-1) seems to support such a mechanism.
To increase speed, some older programs required readers to view the center of the screen while the lines of text around it grew longer. They also presented several objects (instead of text) moving line by line or bouncing around the screen; users had to follow the object(s) with only their eyes. A number of researchers criticize using objects instead of words as an effective training method, claiming that the only way to read faster is to read actual text. Many of the newer speed reading programs use built-in text, and they primarily guide users through the lines of an on-screen book at defined speeds. Often the text is highlighted to indicate where users should focus their eyes; they are not expected to read by pronouncing the words, but instead to read by viewing the words as complete images. The exercises are also intended to train readers to eliminate subvocalization, though it has not been proven that this will increase reading speed.
Effect on comprehension
Skimming alone should not be used when complete comprehension of the text is the objective. Skimming is mainly used when researching and getting an overall idea of the text. Nonetheless, when time is limited, skimming or skipping over text can aid comprehension. Duggan & Payne (2009) compared skimming with reading normally, given only enough time to read normally through half of a text. They found that the main points of the full text were better understood after skimming (which could view the full text) than after normal reading (which only read half the text). There was no difference between the groups in their understanding of less important information from the text.
In contrast, other findings suggest that speed reading courses which teach techniques that largely constitute skimming of written text result in a lower comprehension rate (below 50% comprehension on standardized comprehension tests) (Carver 1992).
Claims of speed readers
The World Championship Speed Reading Competition stresses reading comprehension as critical. The top contestants typically read around 1,000 to 2,000 words per minute with approximately 50% comprehension or above. The world champion, Anne Jones, reached 4,700 words per minute with 67% comprehension. The 10,000 word/min claimants have yet to reach this level.
Much controversy surrounds this point, mainly because a reading comprehension level of 50% is deemed unusable by some educationalists (Carver 1992). Speed reading advocates claim that it is a great success and even state that it is a demonstration of good comprehension for many purposes (Buzan 2000). The trade-off between "speed" and comprehension must be analyzed with respect to the type of reading being done, the risks associated with misunderstanding due to low comprehension, and the benefits associated with getting through the material quickly and gaining information at the actual rate it is obtained.
A critical discussion about speed reading stories appeared in Slate. Among others, the article raises doubts about the origin of John F. Kennedy's allegedly amazing reading speed. Ronald Carver, a professor of education research and psychology, claims that the fastest college graduate readers can only read about 600 words per minute, at most twice as fast as their slowest counterparts. Other critics have suggested that speed reading is actually skimming, not reading.
Wikibooks has more on the topic of: Speed reading
- Fixation (visual)
- Rapid Serial Visual Presentation
- Slow reading
- Vision span
- Words per minute
- Edward C. Godnig, O.D. (2003). "The Tachistoscope Its History & Uses" (PDF). Journal of Behavioral Optometry 14 (2): 40. Retrieved April 13, 2012.
- Frank, Stanley D (1994). The Evelyn Wood Seven-Day Speed Reading and Learning Program. Cambridge University Press. p. 40. ISBN 9781566194020.
- Duggan, GB.; Payne, SJ. (Sep 2009). "Text skimming: the process and effectiveness of foraging through text under time pressure". J Exp Psychol Appl 15 (3): 228–42. doi:10.1037/a0016995. PMID 19751073.
- Carver, R.P. "Reading rate: Theory, research and practical implications.". Journal of Reading 36: 84–95.
- "John F. Kennedy on Leadership".
- "American Experience".
- "The 1,000-Word Dash". Slate. Feb 18, 2000.
- "The Skeptic's Dictionary".
- Allyn & Bacon, (1987) The Psychology of Reading and Language Comprehension. Boston
- Buzan (2000) The Speed Reading Book. BBC Ltd
- Carver, R. P. (1990) Reading Rate: A Comprehensive Review of Research and Theory.
- Carver, R. P. (1992). Reading rate: Theory, research and practical implications. Journal of Reading, 36, 84-95.
- Cunningham, A. E., Stanovich, K. E., & Wilson, M. R. (1990). Cognitive variation in adult college students differing in reading ability. In T. H. Carr & B. A. Levy (Eds.), Reading and its development: Component skills approaches (pp. 129–159). New York: Academic Press.
- Educational Research Institute of America (2006). A Review of the Research on the Instructional Effectiveness of AceReader. Report No. 258.
- Harris and Sipay (1990) How to Increase Reading Ability. Longman
- FTC Report (1998)
- Homa, D (1983) An assessment of two “extraordinary” speed-readers. Bulletin of the Psychonomic Society, 21(2), 123-126.
- McBride, Vearl G. (1973). Damn the School System—Full Speed Ahead!
- National Reading Panel (2000). p. 3-1.
- Nell, V. (1988). The psychology of reading for pleasure. Needs and gratifications. Reading Research Quarterly, 23(1), 6-50
- Perfetti (1995) Reading Ability New York:Oxford University Press
- Schmitz, Wolfgang (2013) Schneller lesen - besser verstehen [Reading faster - understanding better, German], Rowohlt, 8th edition
- Scheele, Paul R (1996) The Photoreading Whole Mind System
- Stancliffe, George D (2003) Speed Reading 4 Kids
- Whitaker (2005) Speed Reading Wikibooks
- Abela (2004) Black Art of Speed Reading
- Zach Davis (2009) PoweReading. Informationswelle nutzen, Zeit sparen, Effektivität steigern (German). Peoplebuilding Verlag.
- "BBC-Improve your skim reading technique". bbc.co.uk. BBC. Retrieved 24 October 2012. |
The good news: NASA has discovered the 10,000th near-Earth object (NEO).
The bad news: At least 100,000 are still out there.
NEOs are asteroids and comets that approach Earth, coming within 28 million miles (45 million kilometers) of our planet during their orbit around the sun. The vast majority of these chunks of space rock and ice are harmless — they just fly right by, minding their own business, in well-defined, well-known orbits.
NEOs also come in a range of sizes, from the pipsqueak few-footers to the rather terrifying whopper, 1036 Ganymed, that measures 25 miles (41 kilometers) across.
And now NASA has discovered the 10,000th NEO — a 1,000 feet (300 meters) wide asteroid affectionately named 2013 MZ5.
“Finding 10,000 near-Earth objects is a significant milestone,” said Lindley Johnson, program executive for NASA’s Near-Earth Object Observations (NEOO) Program at NASA Headquarters. “But there are at least 10 times that many more to be found before we can be assured we will have found any and all that could impact and do significant harm to the citizens of Earth.”
That means there’s at least 100,000 of these (potentially) marauding space rocks still to be tracked down, a feat that NASA is tackling head-on.
The latest asteroid was spotted by the Maui-based Pan-STARRS-1 telescope as part of a NASA-funded, University of Hawaii-managed PanSTARRS survey. 2013 MZ5 is by no means a hazardous asteroid and is not expected to pose any threat to Earth for the foreseeable future.
The discovery of 2013 MZ5 is the latest in a long line of NEO discoveries, most of which have been made by NASA projects over the last 15 years.
“The first near-Earth object was discovered in 1898,” said Don Yeomans, manager of NASA’s Near-Earth Object Program Office at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. “Over the next hundred years, only about 500 had been found. But then, with the advent of NASA’s NEO Observations program in 1998, we’ve been racking them up ever since. And with new, more capable systems coming on line, we are learning even more about where the NEOs are currently in our solar system, and where they will be in the future.”
Although many more space rocks remain to be found, it's believed that the majority of big, potentially hazardous NEOs have been discovered. Of the 10,000 discoveries so far, roughly 1,000 are larger than one kilometer across. From this size and up, should one hit Earth, it would have global consequences for the planet and all life on it. So far, none of these large objects pose a threat. Even better news is that only a few dozen of the largest NEOs remain to be found.
As the NEOs get smaller, they’re harder to detect, meaning the vast majority of undiscovered NEOs are small, but not insignificant, objects. For example, any space rock measuring 30 meters (100 feet) or bigger can cause significant damage to a populated region should it hit. Less than one percent of NEOs 30 meters and smaller have been spotted so far.
In 2005, NASA was directed by Congress to find 90 percent of all NEOs 140 meters (460 feet) or larger. It is believed there are around 15,000 NEOs of that size, 30 percent of which have been discovered so far.
So NEO programs are finding new objects at an average rate of three per day, greatly enhancing our ability to track and identify potentially hazardous NEOs. But as can be seen from the numbers, it's not necessarily the largest, civilization-ending NEOs that should cause concern; it's the smaller, city-killing NEOs that may take us by surprise.
As the asteroid that exploded over Chelyabinsk, Russia, in February showed us, it doesn’t take a huge piece of space rock to cause widespread damage and injury to a populated region. The Chelyabinsk meteor was only 15 meters (50 feet) wide.
Image: Artist’s impression of a small near-Earth asteroid. Credit: Corbis |
The antiquity of cheesemaking is well-accepted, allowing a hilarious mishearing of Jesus’ sermon in Monty Python’s Life of Brian. But how long have we been enjoying this delicious food product?
The answer is, at least 7,000 years, according to research recently published in Nature. Melanie Salque and others at Bristol University had, along with other archeologists, long pondered the use of certain Neolithic stone pots dug up in the 1970s. The pots were full of small holes, and the person who dug them up, Peter Bogucki of Princeton University, had speculated that they were used to separate curds and whey during cheesemaking.
The team at Bristol were able to prove his theory with the use of recently-developed techniques, allowing a chemical analysis of the residue inside the pots. The results showed a clear chemical signature for cow milk.
Cheesemaking allowed the removal of milk sugars that would have upset the stomachs of the lactose-intolerant Neolithic farmers. Cheese also stays edible for months and would have been a valuable food resource for year-round consumption.
Blessed were the cheesemakers indeed. |
You might not care how hard or easy it is to image zebrafish larvae, but you should. Zebrafish larvae are among the most commonly-used laboratory animals, useful for studies of human diseases such as cancer, Parkinson’s disease, Alzheimer’s, diabetes, and amyotrophic lateral sclerosis (ALS). Now, engineers from MIT have developed a system that dramatically streamlines the zebrafish-imaging process. Whereas traditional manual viewing takes about ten minutes per fish, a new system developed by engineers at MIT can get the job done in just 19 seconds.
Zebrafish are so commonly used in laboratories because they are genetically similar to humans, sharing much of the same anatomy and biochemistry. The fish take only seven days to fully develop, with organs visible inside their transparent bodies within just three – mice and rats take much longer to reach maturity, and you can’t see through them. Because the baby fish are so tiny, however, it takes some doing to get them properly positioned under a microscope, which limits their usefulness in certain studies.
The new system developed at MIT pumps fish from a holding area onto a viewing platform, where they are automatically rotated to display the desired body part. The fish remain unharmed throughout the process. The team has already demonstrated the system’s capabilities, by imaging the neurons that project from the larvae’s retinas to their brains.
The MIT engineers have applied for a patent, and are looking into commercializing their system for use in drug trials, where a large number of animals need to be analyzed in quick succession. They are also looking into speeding up the process, and more efficiently processing the data that it produces.
“There is significant need for high-throughput [automated] studies on whole animals, at high resolution,” said Mehmet Fatih Yanik, associate professor of electrical engineering and computer science. “People are currently doing this manually, which is too slow. Ours is the only system that can take a large library of chemicals and screen it on thousands of vertebrates.”
The research was recently published in the journal Nature Methods.
Green coral, Raja Ampat, West Papua, Indonesia. Coral reefs are formed from calcium carbonate secreted by tiny animals called polyps. These colonies of polyps and the reefs they create are among Earth's most diverse ecosystems, providing shelter for a wide variety of fish, mollusks, sponges, and other sea creatures. They are important for tourism and the fishing industry. Corals are highly sensitive to both warming ocean temperatures and ocean acidification brought about by increased atmospheric carbon dioxide. NCAR scientists are studying the effects of warming and acidification on reefs and the marine populations they support.
Image credit: photo by Kathy Krucker |
Q: Can you clarify the different types of Coming of Age materials available?
A: There are two aspects to the Coming of Age program, and they can be used separately or together:
- The Coming of Age in the Holocaust, Coming of Age Now website is a free, interactive curriculum resource designed for middle and high school students to explore Holocaust history and themes of identity and personal responsibility. Featuring first-person accounts of young people who survived the Holocaust, Coming of Age integrates compelling videos, narratives, and primary documents with online discussions and engaging activities. This resource can be used in the classroom or at home.
- The Museum offers a tour based on the Coming of Age curriculum that presents the history of the Holocaust through the eyes of young people who went through it.
Q: What is the Coming of Age curriculum?
A: The Coming of Age curriculum includes twelve stories of Holocaust survivors and one story of an individual who grew up in the Mandate of Palestine during the same period. Each story reflects unique, individual experiences, and as a group, the stories provide a library of resources for learning about the Holocaust through personal narratives. The curriculum was developed specifically for young people and uses age-appropriate concepts and language, prompting thoughtful reflection on responsibility, identity, and community. The website includes the Coming of Age curriculum as well as additional interactive activities.
Q. What is the cost for the materials?
A. There is no charge to use the Coming of Age website, although participants do need to register. The hard copy Coming of Age curriculum costs $125, plus shipping.
Q: Can Coming of Age be part of a Mitzvah project or be used by students studying for their bar or bat mitzvah?
A: This resource is useful for exploring themes about coming of age. It explores the themes of becoming an adult and accepting new personal responsibilities. It is designed to work well in conjunction with bar and bat mitzvah projects.
Q: Can we use Coming of Age as part of public school education about the Holocaust?
A: Yes, the themes are universal. In the Coming of Age website there are summaries of the stories which are useful in choosing the stories that are most appropriate for your students. There are references to aspects of Jewish culture, but all terms are explained and can be easily understood in a public school setting. |
1889 Settling the Poor
In 1889, Jane Addams and Ellen Starr opened Hull House in Chicago, the nation’s first and most influential “settlement house”—a movement that aimed to link successful citizens to the poor, especially immigrants, in relationships of support, mentoring, and friendship. At first, Addams operated Hull House from her inheritance. Later, she received contributions from individuals such as Anita Blaine, Louise Bowen, Mary Smith, and other donors.
By 1907, Hull House had grown to 13 buildings covering most of a city block, with gym, theater, art gallery, boys’ club, cafeteria, residence for working women, libraries, and more; it served thousands of people each week. Among other efforts, Addams ran a labor bureau at Hull House to help residents find jobs, and opened a bank to encourage saving. By 1920, nearly 500 settlement houses existed nationally, and they played an important role in helping America assimilate millions of new arrivals during our decades of heaviest immigration.
Over the years, Addams shifted away from direct instruction and assistance to the poor, and increasingly focused on influencing public policies. She began to question the practice of “middle-class moralists” who urged on the lower classes “the specialized virtues of thrift, industry, and sobriety.” Historian Joel Schwartz describes this shift as “tragic” because it “discouraged poor people from practicing precisely the behaviors that are most likely to allow them to escape their poverty.” As dependency and the welfare state grew, the personal service to the poor that settlement houses had provided declined, and Hull House, after decades of powerful service, finally shut its doors.
- Jane Addams, Twenty Years at Hull-House (Empire, 2013),
- Jean Bethke Elshtain (ed.), The Jane Addams Reader (Basic Books, 2001) |
A back-of-envelope calculation can give some idea.
If the Deccan Plate moved through some 60° of latitude during its rapid Mesozoic rafting from Antarctica to its position at the K/T boundary, it would have traversed almost 7,000 km from, say, 100 Ma to 65 Ma. Reports on the recent tsunami suggest that it was generated by an 11 m slip along a 1,200 km boundary. Averaging out 10 m slips over the 35 Ma between 100 Ma and 65 Ma would give movement capable of generating tsunami of similar size every 50 years. Movement of the Deccan Plate over the past 65 Ma has been significantly slower and it has travelled a further 4,500 km. This would average 450,000 ten-metre slips in the past 65 Ma, or one slip every 150 years. However, tectonic plates are not entirely rigid and do not move en masse, and movements are likely to be clustered rather than evenly spread. It seems inevitable that the relative movements between the Asian and Deccan Plates would have progressively reduced since the Deccan and Asian contact, and the distribution of magnitude and frequency would be nothing like this averaging out, but it does give some indication of the potential importance of tsunami as an evolutionary selective force on coastal biotas.
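The arithmetic behind this back-of-envelope estimate can be laid out explicitly. The short sketch below simply restates the figures quoted above (7,000 km over 35 Ma, 4,500 km over 65 Ma, 10 m slips); the choice of Python and the rounding are mine, and the result is an order-of-magnitude average, not a tectonic model.

```python
# Rough average interval between ~10 m slip events, restating the figures above.

SLIP_PER_EVENT_M = 10.0  # slip comparable to the 2004 Indian Ocean earthquake

def average_slip_interval_years(distance_km: float, duration_ma: float) -> float:
    """Years between slips if `distance_km` of plate movement over `duration_ma`
    million years were made up entirely of 10 m slip events."""
    n_slips = (distance_km * 1_000.0) / SLIP_PER_EVENT_M
    return (duration_ma * 1e6) / n_slips

# Rapid Mesozoic rafting: ~7,000 km between 100 Ma and 65 Ma
print(average_slip_interval_years(7_000, 35))  # ~50 years per slip
# Slower movement since contact: ~4,500 km over the past 65 Ma
print(average_slip_interval_years(4_500, 65))  # ~144 years per slip (about 150)
```

As the paragraph itself stresses, real slip events cluster rather than spreading evenly, so these averages indicate only the order of magnitude.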
Although there was little awareness of tsunami threats in Sri Lanka before 26th December 2004, there are literary references in mediaeval Sinhalese texts to tidal flooding decimating the coastline and settlements. The coastline from Kelaniya to Mannar on the west coast was affected during one flooding event in the 2nd Century AD.
To understand current tectonic events and the historical origins of Sri Lanka's snail fauna requires a step back in time to the Mesozoic origins of the Deccan or Indian tectonic plate. Palaeomagnetic records support a history of the Deccan Plate in which it separated from the southern continent of Gondwana at about 130Ma and, after breaking away from Madagascar around 90Ma, rafted across the Tethys Ocean to make land contact with Asia at about 30Ma. The ongoing collision of the Deccan and Eurasian plates has been described as the most profound tectonic event in the past 100Ma. The leading submarine plate margin may have made contact with Asia at the 65Ma K/T boundary and one current hypothesis is that the massive energy generated by this collision of continents could have given rise to the devastating volcanism and lava flows that formed the Deccan Traps. Although the Chicxulub meteorite impact has dominated explanations for the K/T extinctions for the past few decades, a strong case remains for arguing that the Deccan Traps were primarily responsible for K/T extinctions.
As the southernmost part of the Mesozoic island Deccan Plate it could be that the land area currently composing Sri Lanka was part of a refuge from the devastating impact of the trap lava as it flowed across most of the Deccan Plate land mass. In fact coastal margin reconstructions for the K/T boundary show a much smaller, more isolated island on an area currently occupied by a part of southern Sri Lanka. The Gondwanan fauna in contact with the traps would have been obliterated and, when land contact was made with Asia, recolonisation across the traps would have been in competition with a northern fauna. This may be why Sri Lanka has several snail groups thought to have ancient origins in Gondwana that are poorly represented in or completely absent from India. This scenario is complicated by evidence that other fragments of Gondwana may have been assimilated into the Deccan Plate during its rapid movement north but the timing for this is not clear. An additional complication from the European Mesozoic fossil record is that our supposed Gondwanan taxa appear to have been part of what was a Pangaean fauna. We have some considerable way to go before South Asian faunal origins are unraveled.
Sri Lanka is an integral part of the Deccan Plate currently separated from India by the shallow and narrow Palk Straits. It has repeatedly been connected to the mainland, most recently about 10,000 years ago. More significant than the current sea channel in isolating and shaping Sri Lanka's distinctive, highly diverse and endemic land snail fauna has probably been the fact that the rainforests in the south-west of the island are isolated from the seasonal rainforests of India's Western Ghats. This climate pattern with extensive arid zones in northern Sri Lanka and southern India appears to have a longer history than Sri Lanka's current island state.
Deccan Trap flows and early formation of the Himalaya possibly represented the peak of post-collision tectonic activity. However, fossil evidence indicates that, among other effects, major tectonic events during rafting of the Deccan Plate generated massive flooding that washed terrestrial species out to sea. Thus massive inundation of the sea onto coastal areas of the Deccan Plate predated Asian contact and, as we have just been made brutally aware, such flooding is not confined to the distant past.
The land area of what is now Sri Lanka was much smaller at 65 Ma and it was further from the continental landmass at the time that it could have acted as a biotic refuge from the Deccan Trap lava flows. Details of the complexity and extent of landform and orogenic activity between the Deccan Plate and Asia are largely unknown and no attempt has been made to include them in this figure. Estimates suggest that crustal shortening of the northern leading edge of the Deccan Plate following contact with the Eurasian Plate was in the order of 1,500 km. There is evidence that the Tibetan area was a separate continental plate that had earlier fused with Asia. Further evidence suggests that several additional island plate fragments, possibly of Gondwanan origin, had fused with the Deccan Plate during its northern passage, a movement that was at a far greater rate than any current plate movements. The massive subduction of ocean floor between the Deccan and Eurasian Plates and the associated volcanic and orogenic activity was replaced as a mechanism for mountain building by compression between the two plates only from the end of the Mesozoic. The lateral displacement of plates through over 100 Ma would have generated numerous tsunami of enormous magnitude that seem likely to have been sufficiently frequent to act as a powerful selective force on the evolution of coastal biotas.
[Figure reproduced from Naggs & Raheem, in press. Records of the Western Australian Museum Supplement No 68. World map at K/T boundary from Scotese (2002); Deccan Plate after McLean (1985)].
An idiot's guide to Deccan traps can be found at http://en.wikipedia.org/wiki/Deccan_Traps |
Before this assignment: Students use TCI readings to write a group journal about the Aztecs' daily life. They then combine their journals to create a booklet (see Aztec Journals posted).
In this assignment, students act out one of the journal entries in front of the class. They have a worksheet to complete after viewing the skits. Look up Aztec Journal Skits if you like this idea. It's fun and engaging! |
What is Cognitivism?
Social Learning Theory (Bandura)
Bandura’s Social Learning Theory posits that people learn from one another, via observation, imitation, and modeling.
Social Learning Theory:
People learn through observing others’ behavior, attitudes, and outcomes of those behaviors. “Most human behavior is learned observationally through modeling: from observing others, one forms an idea of how new behaviors are performed, and on later occasions this coded information serves as a guide for action.” (Bandura). Social learning theory explains human behavior in terms of continuous reciprocal interaction between cognitive, behavioral, and environmental influences.
Necessary conditions for effective modeling:
1. Attention — various factors increase or decrease the amount of attention paid. Includes distinctiveness, affective valence, prevalence, complexity, functional value. One’s characteristics (e.g. sensory capacities, arousal level, perceptual set, past reinforcement) affect attention.
2. Retention — remembering what you paid attention to. Includes symbolic coding, mental images, cognitive organization, symbolic rehearsal, motor rehearsal
3. Reproduction — reproducing the image. Including physical capabilities, and self-observation of reproduction.
4. Motivation — having a good reason to imitate. Includes motives such as past (i.e. traditional behaviorism), promised (imagined incentives) and vicarious (seeing and recalling the reinforced model)
Bandura believed in “reciprocal determinism”, that is, the world and a person’s behavior cause each other. While behaviorism essentially states that one’s environment causes one’s behavior, Bandura, who was studying adolescent aggression, found this too simplistic, and so in addition he suggested that behavior causes environment as well. Later, Bandura came to consider personality as an interaction between three components: the environment, behavior, and one’s psychological processes (one’s ability to entertain images in the mind and language).
Social learning theory has sometimes been called a bridge between behaviorist and cognitive learning theories because it encompasses attention, memory, and motivation.
Stage Theory of Cognitive Development (Piaget)
Piaget’s Stage Theory of Cognitive Development is a description of cognitive development as four distinct stages in children: sensorimotor, preoperational, concrete, and formal.
Piaget’s Stage Theory of Cognitive Development:
Swiss biologist and psychologist Jean Piaget (1896-1980) observed his children (and their process of making sense of the world around them) and eventually developed a four-stage model of how the mind processes new information encountered. He posited that children progress through 4 stages and that they all do so in the same order. These four stages are:
* Sensorimotor stage (Birth to 2 years old). The infant builds an understanding of himself or herself and reality (and how things work) through interactions with the environment. It is able to differentiate between itself and other objects. Learning takes place via assimilation (the organization of information and absorbing it into existing schema) and accommodation (when an object cannot be assimilated, the schemata have to be modified to include the object).
* Preoperational stage (ages 2 to 7). The child is not yet able to conceptualize abstractly and needs concrete physical situations. Objects are classified in simple ways, especially by important features.
* Concrete operations (ages 7 to 11). As physical experience accumulates, accommodation is increased. The child begins to think abstractly and conceptualize, creating logical structures that explain his or her physical experiences.
* Formal operations (beginning at ages 11 to 15). Cognition reaches its final form. By this stage, the person no longer requires concrete objects to make rational judgements. He or she is capable of deductive and hypothetical reasoning. His or her ability for abstract thinking is very similar to an adult.
What is the special Meaning for eLearning?
In cognitivism the learner is seen as an active and self-directed human being. It treats learning as an individual process rather than a purely objective one. Therefore, the learning environment should be set up in a way that makes individual, learner-oriented learning possible.
As learning is based on active cognition and experience, e-learning offers a good range of tools to make that work. Listening exercises and web searches are active learning tools through which individual learning is reinforced.
Which eLearning Tools make use of Cognitivism, and in what way exactly?
Altenburger, A. (2005), Internetgestuetztes Computer Supported Cooperative Learning, URL: http://deposit.d-nb.de/cgi-bin/dokserv?idn=97591894x&dok_var=d1&dok_ext=pdf&filename=97591894x.pdf, S. 32ff.
Thissen, D./Steuber, H. (2001) (2001), Didaktische Anforderungen an die internetbasierte Wissensvermittlung. In: Kraemer, W; Mueller, M. (Hrsg.): Corporate Universities und E-learning. Personalentwicklung und lebenslanges Lernen. Wiesbaden: Gabler, S. 316f.
Other Learning Theories: Behaviorism (Pavlov, Skinner and Instructionalism) |
The Universe is big, but how big is it?
All the planets, galaxies, stars including you on our very own home planet Earth, make up the Universe as we know it. But exactly how big is this Universe? To understand the cosmology of this majestic Universe, we can always rely on Comparison!
Suppose you are visiting the Himalayas: this is how much of the world you would be able to see from up there!
Himalayas Facts : The Himalayas are the collection of the highest mountain peaks in Asia. The range includes more than 100 peaks rising over 7,200 meters above sea level! The true meaning of being at the top of the world!
Is Jupiter the biggest planet?
- Let us suppose we are able to isolate the planets Earth and Jupiter and place them next to the Sun.
- This tiny speck is what our Earth looks like in front of the Sun.
Facts about Jupiter : Jupiter is the largest planet of our solar system. It is 69,911 kilometers in radius, about 11 times the radius of Earth. Over 1,300 Earths would fit inside Jupiter.
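The "over 1,300 Earths" figure follows from the fact that volume scales with the cube of the radius. The minimal check below uses the Jupiter radius quoted above; Earth's mean radius of roughly 6,371 km is an assumption added here for illustration.

```python
# How many Earth volumes fit inside Jupiter? Volume scales as radius cubed.
r_jupiter_km = 69_911   # from the text above
r_earth_km = 6_371      # assumed mean radius of Earth

volume_ratio = (r_jupiter_km / r_earth_km) ** 3
print(round(volume_ratio))  # ~1,321, consistent with "over 1,300 Earths"
```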
The Milky Way Galaxy
Imagine you are swimming in a never-ending ocean. You see a small lighthouse and climb onto it to take a rest. Now think of this large never-ending ocean as our home galaxy, the Milky Way. The tiny lighthouse that you just climbed onto is our entire solar system!
Facts about Milky Way Galaxy : The Milky Way, our galactic home, is a collection of various stars and planets and appears a bit milky in the night sky, hence its name. Our solar system resides in one of the arms of this spiral galaxy, which has approximately 400 billion stars!
The Local Supercluster
It contains a total of about 10^15 (a million billion) times the mass of the Sun. It is a large group of smaller galaxies, including the Milky Way. The two biggest galaxies in our local neighbourhood are:
- The Milky Way
- The Andromeda galaxy
Laniakea Supercluster Facts : It contains galaxy groups like the Local Group, the one in which the Milky Way exists!
- The size of the observable Universe is on the order of 10^26 meters. Measured in light-years, its radius comes out to be about 46 billion light-years (see the quick conversion below).
- According to the theory of cosmic expansion, the Universe keeps on expanding, and this value keeps on changing over time.
The Universe Circle Fact : Distance between Earth and Sun : 150 million kilometers.
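To see how the metre and light-year figures above fit together, here is a quick conversion sketch. The length of a light-year (about 9.46 × 10^15 m) is a standard value assumed here for illustration.

```python
# Converting the observable Universe's radius from light-years to metres.
LIGHT_YEAR_M = 9.461e15   # metres in one light-year (assumed standard value)
radius_ly = 46e9          # ~46 billion light-years, as quoted above

radius_m = radius_ly * LIGHT_YEAR_M
print(f"{radius_m:.1e}")  # ~4.4e+26 m, i.e. on the order of 10**26 metres
```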
This is just the description of the Universe that can be observed by us using the given technology.
We are so tiny that we cannot even be depicted graphically on the same plane! Tiny, yet capable of doing great things!
For more interesting Geography articles and videos, visit our Geography for Kids category. |
Astronomers have discovered the first evidence that white dwarfs can slow down their rate of ageing by burning hydrogen on their surface.
The finding challenges the prevalent view of white dwarfs – stars that have burned up all of the hydrogen they once used as nuclear fuel – as inert, slowly cooling stars.
Jianxing Chen of the Alma Mater Studiorum, University of Bologna, and the Italian National Institute for Astrophysics, who led this research, said: “We have found the first observational evidence that white dwarfs can still undergo stable thermonuclear activity.
“This was quite a surprise, as it is at odds with what is commonly believed.”
White dwarfs are the slowly cooling stars that have cast off their outer layers during the last stages of their lives.
They are common in the cosmos, and roughly 98% of all the stars in the universe will ultimately end up as white dwarfs, including the sun.
Studying the cooling stages helps astronomers understand not only white dwarfs, but also their earlier stages as well.
To investigate the physics underpinning white dwarf evolution, astronomers compared cooling white dwarfs in two massive collections of stars – the globular clusters M3 and M13.
Using Hubble’s Wide Field Camera 3, the team observed M3 and M13 at near-ultraviolet wavelengths, allowing them to compare more than 700 white dwarfs in the two clusters. They found that M3 contains standard white dwarfs, which are simply cooling stellar cores, while M13 contains two populations of white dwarfs: standard white dwarfs and those which have managed to hold on to an outer envelope of hydrogen, allowing them to burn for longer and therefore cool more slowly.
Comparing their results with computer simulations, the researchers were able to show that roughly 70% of the white dwarfs in M13 are burning hydrogen on their surfaces, slowing down the rate at which they are cooling.
They suggest the discovery could have consequences for how astronomers measure the ages of stars in the Milky Way.
Previously the evolution of white dwarfs has been modelled as a predictable cooling process.
This relatively straightforward relationship between age and temperature has led astronomers to use the white dwarf cooling rate as a natural clock to determine the ages of star clusters, particularly globular and open clusters.
However, white dwarfs burning hydrogen could cause these age estimates to be inaccurate by as much as one billion years.
Francesco Ferraro of the Alma Mater Studiorum, University of Bologna, and the Italian National Institute for Astrophysics, who co-ordinated the study, said: “Our discovery challenges the definition of white dwarfs as we consider a new perspective on the way in which stars get old.
“We are now investigating other clusters similar to M13 to further constrain the conditions which drive stars to maintain the thin hydrogen envelope which allows them to age slowly.”
The findings are published in Nature Astronomy. |
Creating good listening conditions: For education settings
“We would never teach reading in a classroom without lights. Why then would we teach in ‘acoustic darkness’? Speaking to a class, especially of younger children, in a room with poor acoustics, is akin to turning out the light.”
- Professor John Erdreich, Scientific Counsel in Acoustics
This information will help school managers (and managers in other education settings) to create good listening conditions for pupils.
Research shows that good listening conditions in schools will:
- improve learning and retention of information for all children, and especially those who are deaf, have a temporary hearing loss (glue ear, for example) or other additional learning needs
- improve behavior in the classroom
- reduce teacher absences
- make sure that deaf children get the most out of their hearing
Taking steps to improve the acoustics in your school will help demonstrate to parents that you are making reasonable adjustments under equality legislation to meet the needs of disabled pupils.
We have a collection of resources that will help assess and improve listening conditions in learning environments.
You can download the resources below:
- Managing listening conditions checklist
- Preliminary noise survey
- Pupil survey
- Presentation for Teachers of the Deaf
You can also find out more about what schools can do to improve listening conditions here.
Why are good listening conditions so important?
There’s a strong link between good acoustics and achievement for all pupils. For example, a study of 142 schools in England showed that there was a direct link between the level of classroom noise and pupils’ Key Stage 2 Maths results.
Many children experience temporary hearing loss, such as glue ear, at an age when listening to develop spoken language is critical. It’s estimated that eight out of ten children will have had at least one episode of glue ear by the time they are 10 years old.
Evidence shows that poor classroom acoustics can create a negative learning environment for many pupils, especially those who are deaf, have learning difficulties, or pupils who have English as an additional language.
Alan Steer’s report, Learning Behaviour, argues that the surroundings in which children work and learn have a major impact on behaviour. He states that: “Architects and contractors should pay special attention to acoustics and lighting in classrooms to support pupil participation in lessons”.
Research shows that teachers have more cases of throat problems than any other professional group, and this can be made worse by having to project their voices over classroom noise. Eighty-six percent of teachers reported that classroom noise caused them problems and 80% reported vocal strain and throat problems. Forty-nine percent had to strain their voices to be heard, and overall teachers were 32% more likely than other professions to have voice problems that caused them to take time off work.
Hearing aids and cochlear implants can’t cut out background noise. In fact, they make all noises in a classroom louder (not just the teacher’s voice) meaning that a deaf child may struggle to hear their teacher if there is a lot of background noise.
Many schools have soundfield systems, but their effectiveness depends on the acoustics of the room.
Why do children find it challenging to listen in class?
“People can fill in the blanks of missed information only if they have that information already stored in their brain’s ‘data bank’ from where they can retrieve it. Because they do not have those data banks, children need a sharper auditory signal than adults do. Thus, while a classroom might sound fine to an adult, it may be woefully inadequate for typical children who are neurologically undeveloped or have not had decades of language and life experience. All this means that children require a quieter environment and a louder signal than adults do in order to learn.” Carol Flexer, Professor of Audiology
Children working in an exciting, interactive classroom will inevitably make noise. But there are other noises which don’t have a positive impact on a learning environment, such as noise coming from other rooms and from outside the school building. These noises can have an impact on a child’s ability to listen in class.
For children to understand what a teacher is saying, the teacher’s voice needs to be louder than any background noise. If a classroom is noisy (that is, has what is sometimes called a low signal-to-noise ratio), most teachers will have difficulty speaking loudly enough for pupils to understand.
Reverberation happens when the sound from a source has stopped, but reflected sound continues in the room. If surfaces have a low absorbency and are reflective, like concrete, then the sound may bounce around the room. It then arrives at the child’s ear at different times, blurring the sound and making it difficult to listen and understand the message. The longer the reverberation time the more blurred the message, and the greater the impact on learning.
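For readers who want a feel for the numbers, acousticians commonly estimate reverberation time with Sabine's formula, which links a room's volume to the amount of sound-absorbing material in it. The sketch below is illustrative only: the formula is a standard acoustics rule of thumb rather than part of this guidance, and the classroom dimensions and absorption figures are assumed.

```python
# Illustrative Sabine estimate of reverberation time (T60), the time in
# seconds for sound to decay by 60 dB after the source stops.

def reverberation_time_s(volume_m3: float, absorption_m2_sabins: float) -> float:
    """Sabine's approximation: T60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_m2_sabins

volume = 7 * 7 * 3        # hypothetical 7 m x 7 m x 3 m classroom = 147 m^3
hard_room = 30            # little absorbent material (m^2 sabins, assumed)
treated_room = 60         # after adding absorbent ceiling/wall panels (assumed)

print(round(reverberation_time_s(volume, hard_room), 2))     # ~0.79 s
print(round(reverberation_time_s(volume, treated_room), 2))  # ~0.39 s
```

The second figure illustrates why adding absorbent surfaces such as ceiling tiles, soft wall panels or carpet is the usual way to shorten reverberation time in an existing classroom.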
The younger the child, the greater the need for a good listening environment. Pupils with limited language ability will also rely on good listening conditions to be able to follow what a teacher is saying.
Listen to these sound clips to experience how a child with a high frequency loss might hear in a noisy classroom: www.ndcs.org.uk/simulation.
Equality Act 2010: Duty to make reasonable adjustments
The Equality Act 2010 applies to all education providers and local authorities in England, Scotland and Wales. The Act requires schools and local authorities to make reasonable adjustments so that disabled pupils are not put at a substantial disadvantage in accessing the curriculum, teaching and learning. The Act also requires education providers to proactively anticipate the needs of disabled pupils. Some of the measures we’ve described in this booklet would constitute reasonable adjustments for deaf pupils.
Northern Ireland is not covered by the Equality Act but has its own anti-discrimination legislation: the Disability Discrimination Act 1995 and the Special Educational Needs and Disability Order (NI) 2005. Although the legislation is different, many of the principles are the same.
Planning duties for schools and local authorities
Local authorities in all four countries of the UK must produce strategies to make education more accessible for disabled pupils.
These strategies should aim to:
- increase the extent to which disabled pupils can participate in the curriculum
- improve the physical environment of schools so that pupils can take better advantage of education, benefits and facilities
- improve the availability of accessible information for disabled pupils.
They should set out what the local authority will contribute to the schools it maintains and what is expected from those schools.
Legislation on special educational needs/additional support for learning
In all four countries of the UK, disabled pupils with higher levels of need may have a statutory plan (such as statement of special educational needs, an Education, Health and Care plan or a coordinated support plan). These statutory plans may set out what improvements are needed to help the pupil achieve the best possible educational outcomes.
The Department for Education’s guide: Acoustic Design of Schools: Performance standards (2014) sets out expectations and explains the steps that local authorities and schools need to take to comply with the School Premises Regulations (2012).
You can find more detailed guidance in Acoustics of Schools: A design guide (2015), which has been produced by the Association of Noise Consultants (ANC) and the Institute of Acoustics (IOA). It provides some of the more technical information that was previously in Building Bulletin 93.
Building regulations are devolved in Wales and Building Bulletin 93 continues to be used there. Schools built and refurbished under the 21st Century Schools programme must take part in a pre-completion test to show that they comply with acoustic standards in Building Bulletin 93. If the building fails to meet the acoustic standard, remedial steps should be taken to ensure compliance, along with further testing to demonstrate that the school now meets the standards.
In Scotland, statutory requirements for school environmental conditions are outlined in the School Premises (General Requirements and Standards) (Scotland) Regulations 1967. Regulation 24 states that: “Every part of the school building shall have acoustic conditions and insulation against disturbance by noise appropriate to the use for which the part of the building is designed.” Adherence to Building Bulletin 93 is widely seen as a way to show compliance with this regulation.
The Scottish Government’s guidance, School Design: Optimising the Internal Environment – Building our future, Scotland’s school estate (2007) is intended to help local authorities to develop design brief documents for a range of environmental conditions in schools, including acoustics. Both Building Bulletin 93 and Building Bulletin 101 are referred to in this document as “the starting point for design guidance”. While there are no specific regulatory requirements, there are areas of effective practice where Building Bulletin 93 has been fully implemented in new school builds.
An amended version of Building Bulletin 93 was introduced in Northern Ireland in 2007. New build schools in Northern Ireland have to be tested acoustically to ensure that the requirements in Building Bulletin 93 have been met. If the requirements aren’t met, schools have to take action. The Department of Education will not fund these measures so it’s vital that acoustics are right from the start. |
A solvent is a liquid substance that lets other substances dissolve in it. Water is the universal solvent and is the least expensive and most widely available. But other solvents are preferred based on the need, like ethanol, oils, petroleum products, etc.
They find many applications in the formulation of food, drugs, cosmetics (lipsticks) and also in research.
Common Examples of Solvents include
- Carbon disulfide
- Carbon tetrachloride
- Formic acid
- Acetic Acid
- Trifluoroacetic acid
- Dimethyl sulfoxide
- Dibutyl phthalate
- Petroleum ether
Solvents are chemical compounds that are physically liquids at room temperature. Besides these, even gases can act as solvents when required.
In industry, these solvents are used fundamentally for the extraction, purification and also molding of substances into different shapes.
There are different types of solvents that are routinely used
Different types of solvents
Solvents can be briefly classified based on their chemical nature and behavior.
A. Based upon Polarity:
In general, most solvents have polarity in their internal chemistry.
This polarity is due to the concentration of opposite charges on one of the atoms or elements inside a solvent molecule.
It alters the structure of the solute molecules such that they get dissolved, often by forming ions.
When a solute is mixed into a solvent, the solvent molecules dissolve the solute by pulling the solute molecules apart using forces like hydrogen bonding, van der Waals forces, etc.
Examples: Sodium chloride has a NaCl molecule, which breaks into Na+ and Cl- ions when dissolved in water.
1. Polar solvents: These are solvents having a dielectric constant of more than 15. They can dissolve salts and other ionizable solutes. Polar solvents examples include water, alcohol. Polar solutes like the salts dissolve in polar solvents.
2. Non-polar solvents: These solvents are nonpolar and have dielectric constants less than 15. They cannot form intermolecular bonds by use of hydrogen bonding, van der Waals forces, etc. Hence they cannot dissolve polar compounds. Nonpolar solvent examples include benzene and CCl4; a short classification sketch follows below.
Fats and oils are soluble in non-polar solvents. Hence to remove lipids from an extract, petroleum ether is used in the industry.
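The dielectric-constant rule of thumb described above can be expressed as a tiny classification routine. The cutoff of 15 comes from the text; the dielectric constants listed are approximate room-temperature values added here for illustration.

```python
# Classifying solvents as polar or non-polar using the dielectric-constant
# threshold (> 15 = polar) described above. Constants are approximate values.
APPROX_DIELECTRIC_CONSTANT = {
    "water": 80.0,
    "ethanol": 24.5,
    "benzene": 2.3,
    "carbon tetrachloride": 2.2,
}

def classify(solvent: str) -> str:
    epsilon = APPROX_DIELECTRIC_CONSTANT[solvent]
    return "polar" if epsilon > 15 else "non-polar"

for name in APPROX_DIELECTRIC_CONSTANT:
    print(f"{name}: {classify(name)}")
# water: polar, ethanol: polar, benzene: non-polar, carbon tetrachloride: non-polar
```

The output matches the examples given in the text: water and alcohol come out polar, while benzene and CCl4 come out non-polar.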
B. Based on Chemical nature:
1. Aprotic solvents: (No protons). These solvents are nonreactive and chemically inert. They neither take protons nor give protons. Ex: benzene (C6H6). Chloroform (CHCl3).
2. Amphiprotic solvents: These solvents can provide and take up protons on reaction. They have a neutral pH. Ex: Water, alcohol.
3. Protogenic solvents (proton+genesis = give): These are the solvents acidic by nature. They can donate a proton and hence called “protogenic.” Ex: HCL, H2SO4, perchloric acid.
3. Protophilic solvents: These are the solvents that take up protons. They are basic by nature and are mostly alkalies. Ex: NaOH, KOH, etc.
These and protophilic solvents can be again classified as leveling agents and differentiating agents.
A strong acid or base is a leveling agent, as it can donate protons to even a weak base, or accept protons from even a weak acid, respectively.
Weak acids and weak bases cannot do so; they can only give a proton to a strong base or take up a proton from a strong acid, respectively. Hence, due to this differentiation, they are called differentiating agents.
C) Based on chemistry:
Solvents are also classified based on their center of chemistry due to the presence of some particular elements. These unique elements in solvents bring a total change in their physical and chemical properties.
Inorganic solvents: Solvents without carbon are called inorganic solvents. Ex: water, NaOH, HCl
Organic solvents: Solvents having carbon are called organic solvents. Ex: alcohols (CH3OH), hydrocarbon solvents like benzene.
Halogenated solvents: Solvents having halogens are called halogenated solvents. Halogens are elements found in the 17th group of the periodic table.
Deuterated solvents: These solvents have deuterium, a hydrogen isotope, in their molecular structure. They are preferred in experiments where hydrogen has to be avoided. For example, in nuclear magnetic resonance spectroscopy, solvents containing ordinary hydrogen can interfere with the analysis. Hence, solvents substituted with deuterium instead of hydrogen atoms are preferred. Their examples include
Deuterated water (D2O), Deuterated methanol (CD3OD), Deuterated acetic acid (CD3COOD), Deuterated trifluoroacetic acid (CF3COOD), etc.
Based on their behavior and properties, solvents are selected for purposes like acid-base titration, complexometry, extraction procedures, solubilization, chromatography, spectrophotometry, etc.
The above classification may seem clear-cut, but consider sugar. Sugar (C12H22O11) molecules are organic by nature due to the presence of carbon, yet interestingly, sugar is insoluble in organic solvents like benzene. This is because sugar molecules are polar and require polar solvents to dissolve. Hence we see that sugar dissolves well in plain water, which is inorganic but polar.
So, among the types of solvents available, one should consider both the chemical nature and the polarity when choosing a solvent to dissolve a given solute.
In 1854, German pathologist Rudolf Virchow described his microscopy observations of small round deposits present within the nervous system. The staining techniques used at the time suggested that these deposits were in fact starch, also known as amylum. Thus, Virchow used the term ‘amyloid‘ to describe the deposits. We now know that amyloid is actually formed from proteins, not carbohydrates, but the name amyloid remains.
Amyloid is formed from proteins that aggregate into fibrous deposits to form plaques. These plaques grow around cells and disrupt organ and tissue function, leading to a variety of diseases. Most of the best-known diseases linked to amyloid are neurodegenerative diseases like Alzheimer’s, but amyloid has also been linked to other diseases of ageing including diabetes, cancer and heart disease, and can affect multiple organs simultaneously. Diseases caused by amyloid formation can be referred to collectively as amyloidosis.
Amyloid is formed from proteins – large, complex molecules that are built from hundreds or thousands of smaller units called amino acids. These amino acids are joined together to form a protein chain, which then folds upon itself to produce a three-dimensional structure. The arrangement of amino acids within the protein determines where and how the protein will fold, and the final shape of the protein determines what its function within the body will be. The video below depicts a protein folding as it is being built from its constituent amino acids by a ribosome.
Normal, healthy proteins will not aggregate to form amyloid. Unfortunately, with millions upon millions of copies of a given protein being made during our lifetimes, the folding process doesn’t always occur correctly. When a protein misfolds and assumes the wrong shape, it may not properly fulfil its function, and may even become harmful. To minimise this occurrence, our cells have evolved a complex quality control system to correct or remove misfolded proteins. However, these systems become less effective as we age, allowing more and more errors to slip through the net. Once-healthy proteins can also misfold or partially unfold at some time after they have been made and released from a cell.
37 human amyloid proteins have so far been confirmed to be capable of causing disease. Well known examples include amyloid β in Alzheimer’s disease, α-synuclein in Parkinson’s disease, and huntingtin in Huntington’s disease.
In addition to these neurodegenerative diseases, amyloidosis can also cause or contribute to other diseases. In diabetes, for example, beta cells in the pancreas must ramp up their production of the hormone insulin in an attempt to maintain control of blood sugar. This puts strain on the beta cells’ ‘protein factories’ and can cause proinsulin (a precursor to insulin) to misfold. The resulting amyloid damages the beta cells and contributes further to the progression of diabetes. Another example is transthyretin cardiomyopathy, in which an amyloid protein called transthyretin builds up in the walls of the heart, leading to stiffening and heart failure.
Unfortunately, we still don’t have a definite answer as to why amyloid causes disease. In some cases, amyloid may physically disrupt the tissue and thus cause damage to the organ. However, the link between amyloid and disease is not always as straightforward as one might assume. For example, some individuals with amyloid can live to an advanced age without developing neurodegenerative disease, despite showing levels of amyloid plaque similar to those dying with dementia.
One possible explanation for this is that it is not always the amyloid plaque itself that causes the majority of the damage, but rather the intermediate aggregates involved in the plaque’s formation. Thus, an individual could slowly accumulate a large amount of amyloid plaque over time, all the while maintaining relatively low levels of intermediate structures and avoiding significant tissue damage.
Another process that is likely to play a key role in the damage caused by amyloid is inflammation. The inflammatory response is the immune system’s way of rapidly dealing with a foreign pathogen, and can be triggered by any source of damage such as that caused by amyloid. However, when the inflammatory cells are unable to remove the cause of the damage, inflammation continues indefinitely. Chronic inflammation is seriously disruptive for the tissue in question, and also results in a vicious cycle, since inflammation can accelerate amyloid formation.
Some level of amyloid formation may be unavoidable, but the evidence suggests that amyloid-related diseases are not necessarily an inevitable part of ageing. The extent to which environmental factors, compared with genetic factors, affect amyloid formation and disease risk varies from one disease to another. Generally, however, amyloid pathology can be improved through all of the lifestyle practices that we associate with good health, particularly regular exercise and a healthy diet low in sugar, alcohol and processed meats, while avoiding smoking.
There are other more minor lifestyle adjustments that may further reduce risk of amyloid-associated diseases. Antioxidants capable of reducing levels of chronic inflammation, such as flavonoids found in nearly all fruits and vegetables, may reduce the risk of some amyloid-associated diseases. It may be possible to reduce amyloid formation by consuming foods with above average flavonoid content, although the overall benefits of a healthier diet and more exercise are likely to be more consequential.
There is also some evidence that caffeine may reduce amyloid deposition in the brain and thus protect against Alzheimer’s disease and some other forms of amyloidosis. Coffee and certain teas are of interest as they contain both caffeine and the aforementioned flavonoids. For example, some studies suggest that lifetime coffee consumption lowers amyloid deposition and reduces the risk of Alzheimer’s, while a recent small study suggested that green tea might slow transthyretin amyloidosis in the heart.
Perhaps the most extreme measure one could take with the aim of reducing amyloidosis is to practice dietary restriction such as calorie restriction or fasting. Experiments in mice suggest that calorie restriction can reduce multiple types of amyloidosis, and there is good reason to believe that this may also work in humans, since calorie restriction has been associated with reduced risk of age-related diseases including Alzheimer’s.
“Amyloid” — Historical Aspects: https://www.intechopen.com/books/amyloidosis/-amyloid-historical-aspects
The Role of Inflammation in Amyloid Diseases: https://www.intechopen.com/books/amyloid-diseases/the-role-of-inflammation-in-amyloid-diseases
Protein Misfolding and Degenerative Diseases: https://www.nature.com/scitable/topicpage/protein-misfolding-and-degenerative-diseases-14434929/
Targeting Amyloid Aggregation: An Overview of Strategies and Mechanisms: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6164555/
Proinsulin misfolding and endoplasmic reticulum stress during the development and progression of diabetes: https://doi.org/10.1016/j.mam.2015.01.001
Caloric restriction reduces the systemic progression of mouse AApoAII amyloidosis: https://dx.doi.org/10.1371%2Fjournal.pone.0172402
Caloric restriction attenuates amyloid deposition in middle-aged APP/ PS1 mice: https://dx.doi.org/10.1016%2Fj.neulet.2009.08.038
Green tea halts progression of cardiac transthyretin amyloidosis: an observational report: https://doi.org/10.1007/s00392-012-0463-z
Caloric restriction: beneficial effects on brain aging and Alzheimer’s disease: https://doi.org/10.1007/s00335-016-9647-6
Oligomeric Intermediates in Amyloid Formation: Structure Determination and Mechanisms of Toxicity: https://doi.org/10.1016/j.jmb.2012.01.006
Spanish explorer Diego de Nicuesa founded the European settlement of Nombre de Dios, near the Isthmus of Panama, in 1510, followed by the founding of San Sebastian de Uraba, in present-day Colombia, South America, by the explorer Alonso de Ojeda. The San Sebastian de Uraba colony was later transferred by Balboa to Darien, near the Isthmus of Panama, and renamed Santa Maria de la Antigua del Darien.
Diego de Nicuesa
- European Colonization of the Americas – New World Encyclopedia
Europeon Nation’s Control over South America 1700 to the twentieth century. The start of the European Colonization of the Americas is typically dated to 1492, although there was at least one earlier colonization effort. The first conquests were made by the Spanish and the Portuguese.
- European colonization of the Americas – Wikipedia
As the sponsor of Christopher Columbus’s voyages, Spain was the first European power to settle and colonize the largest areas, from North America and the Caribbean to the southern tip of South America.
- Timeline of the European colonization of North America – Wikipedia
Late fifteenth century – 1492: Columbus reaches The Bahamas, Cuba and Hispaniola. 1492: the colony of La Isabela is established on the island of Hispaniola. 1496: Santo Domingo, the first permanent European settlement, is built. 1498: La Isabela is abandoned by the Spanish.
- History of South America – Wikipedia
- British colonization of the Americas – Wikipedia
British colonization of the Americas began in 1607 in Jamestown, Virginia, and reached its … European colonization …. Province of South Carolina, first permanent English settlement in 1670, became a separate colony in 1710–12. Province of … |
iPhoneOgraphy – 30 Dec 2016 (Day 365/366)
Owls are birds from the order Strigiformes, which includes about 200 species of mostly solitary and nocturnal birds of prey typified by an upright stance, a large, broad head, binocular vision, binaural hearing, sharp talons, and feathers adapted for silent flight. Exceptions include the diurnal northern hawk-owl and the gregarious burrowing owl.
Owls hunt mostly small mammals, insects, and other birds, although a few species specialize in hunting fish. They are found in all regions of the Earth except Antarctica and some remote islands.
Owls are divided into two families: the true owls or typical owls, the Strigidae; and the barn-owls, the Tytonidae.
Owls possess large, forward-facing eyes and ear-holes, a hawk-like beak, a flat face, and usually a conspicuous circle of feathers, a facial disc, around each eye. The feathers making up this disc can be adjusted to sharply focus sounds from varying distances onto the owls’ asymmetrically placed ear cavities. Most birds of prey have eyes on the sides of their heads, but the stereoscopic nature of the owl’s forward-facing eyes permits the greater sense of depth perception necessary for low-light hunting. Although owls have binocular vision, their large eyes are fixed in their sockets – as are those of most other birds – so they must turn their entire heads to change views. As owls are farsighted, they are unable to clearly see anything within a few centimeters of their eyes. Caught prey can be felt by owls with the use of filoplumes – hairlike feathers on the beak and feet that act as “feelers”. Their far vision, particularly in low light, is exceptionally good.
Owls can rotate their heads and necks as much as 270°. Owls have 14 neck vertebrae compared to seven in humans, which makes their necks more flexible. They also have adaptations to their circulatory systems, permitting rotation without cutting off blood to the brain: the foramina in their vertebrae through which the vertebral arteries pass are about 10 times the diameter of the artery, instead of about the same size as the artery as in humans; the vertebral arteries enter the cervical vertebrae higher than in other birds, giving the vessels some slack, and the carotid arteries unite in a very large anastomosis or junction, the largest of any bird’s, preventing blood supply from being cut off while they rotate their necks. Other anastomoses between the carotid and vertebral arteries support this effect.
The smallest owl – weighing as little as 31 g (1 oz) and measuring some 13.5 cm (5 in) – is the elf owl (Micrathene whitneyi). Around the same diminutive length, although slightly heavier, are the lesser known long-whiskered owlet (Xenoglaux loweryi) and Tamaulipas pygmy owl (Glaucidium sanchezi). The largest owl by length is the great grey owl (Strix nebulosa), which measures around 70 cm (28 in) on average and can attain a length of 84 cm (33 in). However, the heaviest (and largest winged) owls are two similarly sized eagle owls; the Eurasian eagle-owl (Bubo bubo) and Blakiston’s fish owl (B. blakistoni). These two species, which are on average about 2.53 cm (1.00 in) shorter in length than the great grey, can both attain a wingspan of 2 m (6.6 ft) and a weight of 4.5 kg (10 lb) in the largest females.
Different species of owls produce different sounds; this distribution of calls aids owls in finding mates or announcing their presence to potential competitors, and also aids ornithologists and birders in locating these birds and distinguishing species. As noted above, their facial discs help owls to funnel the sound of prey to their ears. In many species, these discs are placed asymmetrically, for better directional location.
Owl plumage is generally cryptic, although several species have facial and head markings, including face masks, ear tufts, and brightly colored irises. These markings are generally more common in species inhabiting open habitats, and are thought to be used in signaling with other owls in low-light conditions.
Sexual dimorphism is a physical difference between males and females of a species. Reverse sexual dimorphism, when females are larger than males, has been observed across multiple owl species. The degree of size dimorphism varies across multiple populations and species, and is measured through various traits, such as wing span and body mass. Overall, female owls tend to be slightly larger than males. The exact explanation for this development in owls is unknown. However, several theories explain the development of sexual dimorphism in owls.
One theory suggests that selection has led males to be smaller because it allows them to be efficient foragers. The ability to obtain more food is advantageous during breeding season. In some species, female owls stay at their nest with their eggs while it is the responsibility of the male to bring back food to the nest. However, if food is scarce, the male first feeds himself before feeding the female. Small birds, which are agile, are an important source of food for owls. Male burrowing owls have been observed to have longer wing chords than females, despite being smaller than females. Furthermore, owls have been observed to be roughly the same size as their prey. This has also been observed in other predatory birds, which suggests that owls with smaller bodies and long wing chords have been selected for because of the increased agility and speed that allows them to catch their prey.
Another popular theory suggests that females have not been selected to be smaller like male owls because of their sexual roles. In many species, female owls may not leave the nest. Therefore, females may have a larger mass to allow them to go for a longer period of time without starving. For example, one hypothesized sexual role is that larger females are more capable of dismembering prey and feeding it to their young, hence female owls are larger than their male counterparts.
A different theory suggests that the size difference between male and females is due to sexual selection: since large females can choose their mate and may violently reject a male’s sexual advances, smaller male owls that have the ability to escape unreceptive females are more likely to have been selected. |
A beginner's guide to Seed Multiplication Process
Seed Multiplication Process
In addition to being one of the oldest professions in the world and providing employment to billions of people globally, agriculture also forms the backbone of the global economy. Recognised as a primary occupation, agriculture has played a pivotal role in the sustenance and growth of life on this earth. By definition, agriculture involves the cultivation of land and the breeding of animals, but it is primarily recognised as the chief provider of food, fibre, herbs, fruits, grains and various other food products. Farming forms an integral part of agriculture and employs more than two billion people around the world, the majority in Asia.
Farming requires some basic components to be accomplished successfully. The major inputs required for farming are: -
- Suitable Land
- Suitable Weather
- Steady supply of Water
- Pesticides and Insecticides
- Farming Equipment
- Harvesting Equipment
- Quality Seeds
Out of all the inputs mentioned above, vegetable seeds form the core of farming all around the world. Though a seed might seem like a small input, the quality of the seed has a major impact on the quality and quantity of all crops. For instance, the quality of F1 Hybrid Cucumber Seeds or F1 Hybrid Okra Seeds used will determine the quality of the crop harvested by a farmer.
In scientific terms, a seed is described as a basic agricultural input consisting of an embryo embedded in a food storage tissue, which is surrounded by a protective seed coat. Seeds play a crucial role in the life of a farmer and have the ability to carry forward the genetic abilities of a crop variety. In order to be assured of a healthy crop, it is important for farmers to be able to access healthy seeds such as F1 Hybrid Chilli Seeds, F1 Hybrid Tomato Seeds, Hot Pepper Seeds, Bottle Gourd Seeds, Bitter Gourd Seeds, Sponge Gourd Seeds etc., which are genetically pure and boast a high germination percentage.
About Seed Multiplication
After a new seed variety has been developed, the next important step is to ensure the multiplication and distribution of high-quality seeds, as the quality and quantity of the agricultural output of an entire country depends on it. This process is carried out by various government as well as private seed manufacturers engaged in hybrid seed production in India.
Seed is one of the cheapest yet most essential agricultural inputs and is indispensable for sustained agriculture. When a variety of seed is released, only a small quantity of the seed is available with the breeder, which is known as the nucleus seed. The breeder undertakes the process of seed multiplication to produce commercially viable quantities of a seed.
A new or improved variety of seed is not released unless and until enough true seeds have been produced to make it viable commercially. Multiplication of a seed involves the following steps: -
1. Nucleus Seed
2. Breeder Seed
3. Foundation Seed
4. Registered Seed
5. Certified Seed
6. Truthful Seed
- Nucleus Seed
Nucleus Seed is described as the initial amount of a pure seed which has been improved upon or has been developed along the parental lines of a hybrid under the expert guidance of an Evolver. As the name suggests, Nucleus seed is entirely pure and is devoid of any physical deformities or impurities. In order to achieve the pure form, it is developed by the evolver in an isolated environment to make sure there is no contamination from physical or genetic factors. It is important to make sure that the vigour of the parental line is retained in the nucleus seed.
Individual plants which show healthy and vigorous growth are selected from the bulk lot of nucleus seed right before the onset of flowering. Various easily observable parameters are recorded for each individual plant. If any of these plants exhibit disease or do not conform to the desired characteristics, they are immediately removed. Every individual plant is harvested separately. Thereafter the seeds are kept in individual bags with proper labelling.
- Breeders Seed
Known as the progeny of nucleus seed, breeder seed production is usually accomplished in a single stage, but if required due to circumstances such as greater demand or a low seed multiplication rate, it can be produced in two stages. The breeder can be an individual, a university, a government or a company. Certain standards have been set for breeder’s seed in terms of moisture content (less than 12%), physical purity (98% or more) and genetic purity (99% or more). After completion of this stage, seeds are stored in clearly labelled bags with golden-yellow tags.
- Foundation Seed
After completion of the breeder stage comes the foundation seed stage. This process can also be completed in two stages, with genetic purity of more than 99.5%. These seeds can be produced on seed multiplication farms belonging to state governments or agricultural universities, or on other government farms, in addition to those of private seed companies. The purity level of foundation seed is lower than that of nucleus or breeder seed. After completion of the process, seeds are stored in bags with white labels.
- Registered Seed
The NSC (National Seeds Corporation) offers guidance and technical advice to progressive farmers as well as registered seed growers to produce registered seed from foundation seeds which are genetically pure. In order to ensure genetic purity, seed certifying agencies conduct regular inspections and field tests. Once completed, the seeds are stored in bags with purple labels.
- Certified Seed
Certified seed is the progeny of foundation or registered seed. It is known as certified seed because it has been certified by rating agencies as suitable to raise a good crop. Progressive farmers play a major role in its production and ensure adherence to the established practices for seed production. These seeds need to be genetically pure and are stored in bags with blue labels.
- Truthful Seed
These seeds are developed and produced by private seed companies and are sold under the label of truthfully labelled seed. In order to produce truthful seeds, the field and seed standards specified by the Seeds Act must be maintained by the companies. Once the process is complete, seeds are stored in bags with green colour tags.
Precautions to be observed during seed multiplication and seed production
Availability of high-quality seeds is essential to ensure an excellent harvest at the end of the harvesting season. Here are some of the major precautions that must be observed during seed multiplication and seed production by Indian vegetable seed companies:
- The process requires heavy investment and a high level of technical skill, as it is a complicated task. So, this process must be undertaken by those entities who are in a position to commit such resources.
- In order to reap complete benefits of new variety of crops, due care must be paid to ensure genetic purity and quality of seeds i.e. conditions must be standardized.
Due to rising awareness regarding benefits of high-quality seeds, Seed industry is also thriving across the world. Many private as well as public sector companies are working round the clock to develop genetically pure high-quality seeds to ensure a healthy and bountiful crop.
Nurture Agro Seeds is a leading agricultural seed company in India, based out of Bangalore. Nurture Agro Seeds has been catering to the demand of its global clientele for superior-quality agricultural inputs. In addition to supplying F1 hybrid seeds such as F1 Hybrid Tomato Seeds, F1 Hybrid Watermelon Seeds, and many more, Nurture Agro Seeds provides world-class seed multiplication and hybrid seed production services to other companies from around the world.
It has established itself as a prominent seed manufacturer in India. Nurture Agro Seeds deals in more than 90 varieties of 20 different vegetable crops such as cherry tomato seed, bitter gourd seed, okra seed, sponge gourd seed etc.
This page collects addition and subtraction homework resources for 1st and 2nd grades (with some material extending to 3rd grade). They cover a lot of bases: addition and subtraction with numbers greater than 1,000, ten-frame fact practice, and code-cracker puzzles in which kids solve one- and two-digit addition and subtraction problems to reveal a mystery word. Worksheets are available with and without regrouping, alongside printable coloring sheets, kindergarten math tests and an addition/subtraction worksheet generator (Cuisenaire-rod style) aligned to the related Common Core standards for mathematics. Timed games let students score as many points as they can before the time runs out, and because the concept of zero pairs does not come easily to many students, engaging lessons on that topic are particularly helpful.
SQL, Structured Query Language, is a programming language designed to manage data stored in relational databases. SQL operates through simple, declarative statements. This keeps data accurate and secure, and helps maintain the integrity of databases, regardless of size.
The SQL language is widely used today across web frameworks and database applications. Knowing SQL gives you the freedom to explore your data, and the power to make better decisions. By learning SQL, you will also learn concepts that apply to nearly every data storage system.
Let’s begin by entering a SQL command.
In the code editor, type:
SELECT * FROM celebs;
You will run all of your SQL commands in this course by pressing the Run button at the bottom of the code editor. |
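If you would like to experiment outside the course environment, the sketch below sets up a small stand-in for the lesson's database. The celebs table's columns are not described at this point in the lesson, so the id, name and age columns (and the sample rows) are assumptions made purely for illustration:

-- Hypothetical stand-in for the lesson's celebs table.
CREATE TABLE celebs (
  id INTEGER,
  name TEXT,
  age INTEGER
);

-- A couple of sample rows so the query has something to return.
INSERT INTO celebs (id, name, age) VALUES (1, 'Justin Bieber', 22);
INSERT INTO celebs (id, name, age) VALUES (2, 'Beyonce Knowles', 33);

-- The statement from this lesson: return every column of every row in celebs.
SELECT * FROM celebs;

Running the final SELECT in any SQLite-compatible tool should list both sample rows, which is essentially what pressing Run in the course editor does against the real lesson database.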
If you’ve ever told your toddler to “wait five minutes,” you know it’s more likely that she’ll suddenly grow an inch than understand what five minutes means!
Understanding time is remarkably complex, and it develops gradually, into early elementary school. It’s a long road to the day when your child actually gets it when you say, “We’ll see Grandma in one week.” But, the foundations of time sense begin early.
The Foundations of Time Sense
Primitive Sense of Time
During the first few months of your baby’s life, it may have felt like you lost your sense of time. Sleep schedules are off. Feedings and diaper changes happen at random intervals. It may seem like there’s no difference between 2 am and 2 pm! Then, slowly, a routine starts to take shape as the concept of time starts to creep back into your life. This is thanks to your baby’s development of a primitive sense of time. In 1972, researchers discovered that early on, babies develop the ability to recognize a temporal interval between events.
Awareness of Sequences
Around age 8 months, babies develop an awareness of event sequences, or an understanding that certain things precede others.
Around age 15 months, little ones can grasp the steps of a routine: First this, then this, finally this.
Concept of Day and Night
Around age 22 months, toddlers’ sense of time takes a giant leap as they develop a cognitive understanding of the concept of day and night.
Basic Understanding of Past and Future
Between ages 34 and 36 months, children grasp, in a general way, that there is a past. But, they may still refer to anything in the past as “yesterday.” They also understand some basic differences related to the close future (such as the differences between now, soon, and later).
Supporting Development of Time Sense
Your toddler’s true time sense won’t emerge until early elementary school, but you can support the foundations in a few key ways:
- Promote routines – Doing similar things in a similar order each day, and teaching multi-step tasks (like hand-washing) helps your child develop an understanding of sequences.
- Use general time-related terms – Because your child can’t grasp what “fifteen minutes” means, you can use terms like before, after, first, next, and then. For example, “We will go to the park after lunch.”
- Explore the “Cognitive” activities in our BabySparks program for fun ways to support time sense foundations!
There’s one thing we know for sure, a toddler’s lack of time sense can feel exasperating! Minutes and hours mean nothing and waiting periods just can’t be measured. It’s all part of the wild ride of toddlerhood! One day in the not-so-distant future, she’ll be marking dates on a calendar and asking for her first watch. Until then, enjoy the timeless ride! |
Arctic Wolves (Canis lupus arctos), also called the Polar Wolf or White Wolf, are a subspecies of the Gray Wolf (Canis lupus).
They inhabit the Canadian Arctic, its islands, parts of Alaska and the northern part of Greenland. These territories are some of the most inhospitable areas in the world, with winter temperatures that rarely go above -30C or -22F. Arctic Wolves have gone through some physical adaptations that make it easier to survive in this extreme cold.
They have a smaller body than their southern cousins, their ears have decreased in size to reduce heat loss and reduce chance of freezing, they have a shorter snout and their legs are shorter to reduce
exposure to frigid air. Their fur has also been adapted to help them survive in the cold. Arctic Wolves have a dual layer coat. The top layer consists of long guard hairs and the under layer is a soft thick
downy layer similar to that found in Eider Ducks. This downy layer traps body heat and protects their skin from frostbite.
Arctic Wolves travel in packs from 2 to 20 depending on how much food is in their territory. Their regular prey is usually Caribou and Muskox, but they will also hunt Arctic Hare,
lemmings, ptarmigans, seals and waterfowl. Each pack has a large territory of up to 1,000 sq. miles and moves around following its prey. Wolves do not waste any of their kills and will eat the bones as well.
Bones contain marrow which is a great source of fats that produce energy during the cold winter months.
Due to the fact that Arctic Wolves live in an area of permafrost, or permanently frozen ground, it is difficult to dig a den like other wolf varieties do. So the Alpha female will look for rock outcroppings, caves or depressions in the ground. They usually have 2–3 pups but can have up to a dozen; it depends on the scarcity of food in the region. In times of lower food availability they will have fewer pups, and in times of high food availability they will have more pups. The pups are born in late May to early June. Like other mammals, the pups are born blind and deaf and weigh about a pound.
They stay in the den for about three weeks and are dependent on their mother and pack for food, protection and warmth. After about three weeks they start venturing out of the den area and start exploring
the world around the den area.
The Arctic Wolf’s greatest sense is its sense of smell. With their sense of smell they gather a tremendous amount of information. They learn to recognize their pack mates, and as pups they learn their mother’s smell, which is very important in the first couple of weeks when they cannot see or hear. Adults use their sense of smell to locate prey that is too far away to see and to mark their territory where it borders other packs’ territories. The Alpha male uses smell to judge the Alpha female’s state during the breeding season.
The next most important sense Arctic Wolves have is their hearing. In the open, wolves can hear things 10 miles away. Their eyesight is also very important to them, and like most mammalian predators their eyes face forward on their head, which gives them 3D vision with a radius of 180 degrees. Three-dimensional vision is very important when trying to judge distances to objects.
An X-ray machine is equipment that diagnoses patients using the various images formed by X-rays passing through the human body. The basic configurations of the X-ray machine beam limiter (collimator) are versions for 125 kV and 150 kV tubes. The beam limiter of an X-ray machine serves to limit the radiation dose emitted by the tube.
While an X-ray machine is working, different pieces of equipment must work together to produce X-rays, control the X-ray beam and image the patient. The spherical tube provides the light source for the X-ray machine, the beam limiter shapes the beam, the high-voltage generator supplies the voltage for the light source, and the high-voltage cables connect the spherical tube and the generator to deliver the high voltage. An X-ray machine needs two high-voltage cables to work.
Therefore, the spherical tube, beam limiter, high-voltage generator and high-voltage cables are indispensable parts of an X-ray machine. Beam limiters also come in three types: manual, electric and automatic, with basic configurations for 125 kV and 150 kV tubes.
Lesson 11 has three parts A, B, C which can be completed in any order.
In this lesson we explain a concept called variable scope. The main idea is that it is possible to have two different variables with the same name whenever they have different scopes. This makes it much easier to write and edit programs, especially large ones. It is also a necessary ingredient in a later topic, recursion.
An example of variable scope
For this example, part of our program will be the following function:
def square(x):
    value = x * x
    return value

This is a function which computes the square of the number x; for example, square(5) is 25. Now suppose we call square from another part of the program, which also uses a value variable for another purpose:

# main body of the program starts here
value = 17
fivesquared = square(5)
print(fivesquared, value)

The question is, what will be printed in this scenario? There are two possibilities, if we don't know how Python works:
- One possibility is that Python recognizes that the two value variables should be "kept separate" somehow. Thus, the call to square will return 25, but the value of value in the main body will stay at 17, and we see the output 25 17.
- The other possibility is that when square executes, it over-writes the existing value of value with 25, and we see the output 25 25.
Let's see what actually happens!
We see that Python's behaviour is consistent with possibility #1. In fact, whenever you call a function, you cannot affect any variables defined outside the function. Instead, any statements that assign values to variables affect only a "local" variable which is "inside" the function call. The reason that Python, and most other languages, work like this is that it makes it much simpler to write long programs with many functions. In particular, when we define a new function, we are always allowed to use any variable names inside the function (including common ones like i) even if they are being used somewhere else for different purposes.
To see this in detail, look at "Step 6 of 8" in the visualizer. Notice that there are two different value variables: one inside of the square function, and one outside (in the "global" area).
The next example uses a global variable x, set with x = "outer", and a function xReplace whose body assigns x = value. The assignment x = value only affects the local version of the variable x inside the function. The global version of x does not change. (So, the function xReplace is useless and never has any effect.)
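The example's code is not fully preserved in this copy of the lesson. A minimal sketch consistent with the surviving fragments (x = "outer" and x = value) might look like the following; the argument passed in and the final print are assumptions added for illustration:

def xReplace(value):
    x = value    # creates a new local variable x; the global x is untouched

x = "outer"
xReplace("inner")
print(x)         # prints: outer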
Another similar concept to scope is a namespace; a namespace is like a scope for a package. So even if you import a package (such as math) which uses variable name x somewhere, it does not overlap with the variable x used by your program, since they lie in different namespaces.
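As a quick illustration of the namespace idea (using the name pi rather than x, purely for demonstration):

import math

pi = "pecan"      # our program's own global variable named pi
print(math.pi)    # 3.141592653589793 -- the math module's pi lives in its own namespace
print(pi)         # pecan -- our variable is unaffected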
Scoping Rules: Seeing Out
There are situations where we want to mix variables from local and global scopes. One common example is where you have some common variable initialized once at the start of the program, but you also want your functions to be able to read the variable.
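The lesson's example code is missing from this copy; a minimal sketch of the kind of program being described — the function name and the string value are assumptions — is:

favouriteTopping = "mushrooms"   # defined once, at global scope

def describePizza():
    # there is no local favouriteTopping, so Python falls back to the global one
    print("A pizza with", favouriteTopping)

describePizza()   # prints: A pizza with mushrooms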
Here, the local scope did not contain a variable named favouriteTopping, but this did not cause an error. Python's rule for evaluating variables is that, if a variable needs to be evaluated which does not exist in the local scope, then it looks for it in the global scope. In our case, it indeed found favouriteTopping in the global scope. (In a more general case, with one function body calling another function, you would have three scopes; Python always checks the "localmost" scope first and goes one step at a time towards the global scope until the variable is found.)
Function arguments are always treated as new local variables, as the following broken code fragment illustrates (it attempts to reset a variable to 0, but fails).
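The fragment itself did not survive in this copy; based on the name resetToZero mentioned below, it presumably looked something like this sketch (the argument and variable names are assumptions):

def resetToZero(n):
    n = 0            # only rebinds the local argument n

counter = 17
resetToZero(counter)
print(counter)       # still prints 17 -- the global variable was not changed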
Like many other things, the normal flow described above works for 99% of all situations. But in that remaining 1% of the time, you really might want to change a global variable from within a function. Python allows you to do this using the global statement. Here's a modification of the earlier xReplace example. Note that we declare the variable as global in the function. This lets you change the global value from within the function.
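The modified example is not reproduced here; a sketch of what it describes (the values passed and printed are assumptions) is:

def xReplace(value):
    global x     # tell Python that x refers to the global-scope variable
    x = value    # now this assignment changes the global x

x = "outer"
xReplace("inner")
print(x)         # prints: inner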
Reiterating: when inside of a function, assigning to a variable name defined at global scope actually creates a separate new local-scope variable with the same name, instead of modifying the global-scope variable (like resetToZero). If you want to update the global-scope variable, you must include the statement global «variable-name» inside of the function (like the fixed xReplace example). Then Python understands that changes to «variable-name» in the function are meant to refer to the existing global-scope variable.
As we mentioned earlier, reading a global variable does not require using the global statement; only writing does. That concludes the lesson!
Eco-friendly gardening can encompass a huge range of forward-thinking techniques that anyone can adopt in their garden in answer to the escalating danger posed by climate change.
These outdoor activities centre around lowering the harmful emissions as a result of everyday gardening activity.
Thinking in a more eco-friendly way when it comes to your garden, means you can not only lower greenhouse gasses but also increase the absorption of carbon dioxide too.
What is Eco-Friendly Gardening?
As per an article by conservation.org, the carbon dioxide concentration in our atmosphere in 2018 was the highest it has been in over 3 million years.
This figure shows us that the importance of reducing our individual carbon footprint is a priority for the long-term success of the planet.
Many methods and tools used during day-to-day activities in the garden are actually harmful to the environment, such as the use of synthetic fertilisers, which contaminate natural soil stores, and inefficient watering systems which use excess water and energy.
Eco-friendly gardening seeks to tackle the leading causes of climate change such as depleting natural water sources, habitat destruction, chemically produced products and wildlife decline.
How Can Eco-Friendly Gardening Reduce CO2 Emissions?
For many, it is taken for granted that any garden by definition is an eco-friendly one, since plants and trees produce oxygen, and reduce carbon in the atmosphere.
However, we know far more about the damaging effect of climate change, so if a gardener is serious about reducing their individual input on the environment, they must first understand what things could be harmful and how they might avoid them.
The most common problems are:
While it is the case that artificial fertiliser uses nitrogen to speed up plant development, the fertiliser itself will have been made using the Haber-Bosch process, which converts methane from natural gas into hydrogen. Carbon dioxide emissions increase as a result of this process, which explains why synthetic fertilisers contribute significantly to annual carbon emissions.
Despite the complicated method used to create synthetic fertiliser, the solution is actually a straightforward one: make your own compost.
Not only does this save you money, but it also gives you somewhere to recycle home and garden waste, which will reduce your impact on the environment.
Natural wetlands, or peat bogs, absorb a tremendous amount of carbon dioxide, and most of the compost you see in garden centres and supermarkets is actually directly responsible for depleting these naturally occurring resources.
This article was written by the Independent’s Martin Hickman in 2010, which just goes to show how long this issue has been on the agenda, and yet even a decade on, nearly 50% of the compost sold in the UK contains peat.
The positive news though, is that there are many alternative options which don’t contain any peat. The most popular of these choices is coir-based compost, which uses waste produced when processing coconut fibre. Coir is a fantastic option due to the low levels of carbon dioxide produced when transporting the product across the globe.
If you’re heating up your greenhouses in the winter, you’re actually wasting a good deal of energy and money. This is because your standard greenhouse isn’t fitted with double-glazing and are designed to allow air filtration to help plants as they develop.
This is basically the same as trying to fill up a bucket with a hole in the bottom. At the very point the hot air is released, it will begin to exit through the filtration points.
To solve this problem, bring all of your seedlings indoors, place them under spectrum lighting and leave the greenhouse empty over winter. By doing this, you’re keeping both your seedlings and home warm simultaneously.
We all know that plants pull in carbon dioxide which is then converted into oxygen, but what you might not know is that carbon is a constant on our planet and has the ability to move and change form with ease.
Burning fossil fuels converts that carbon into harmful emissions. When these emissions reach the earth’s atmosphere, they absorb and re-emit infrared radiation, which in turn warms the atmosphere.
It is thought that carbon sequestration is the long-term solution to this problem. This means planting trees which are able to absorb carbon dioxide during photosynthesis which it will hold on to for the remainder of its days.
If you’re looking to plant trees in your garden, you should adhere to the following advice:
- Plant trees with broad leaves and crowns as this maximises the effect of photosynthesis.
- Choose trees that grow quickly, since these will be able to store considerable amounts of carbon as they develop.
- Trees with long life spans can store carbon for hundreds of years without releasing it. For example, Bowthorpe Oak in Lincolnshire is thought to be over a thousand years old.
- Be sure to plant trees that are adapted to your region and climate, as these will be the most effective in helping to support the plants and animals in that area.
- Select hardy varieties that are most resistant to disease.
Some of the best tree species for this purpose include chestnut, mulberry, oak, poplar, maple and dogwood.
One of the most rewarding and exciting eco-friendly practices is being able to grow your own crops. Not only will you benefit from living on your own produce, but it’s also possible to save two pounds of carbon dioxide for every pound of produce you plant.
This is down to the fact that much commercially grown produce requires agricultural vehicles, petrol-based fertiliser and harmful pesticides. Organic crop growth at home reduces your carbon footprint and saves you money too.
How to Arrange Your Crop Beds
One of the pitfalls of growing your own vegetables is the plethora of pests and diseases that you’ll need to tackle to ensure healthy growth. This means it’s a good idea to plan your growth strategies quite carefully, to prevent cross-contamination of disease and pests.
There are some plants that, when grown together, complement each other and promote growth and vitality. This could mean planting a taller plant to provide shelter for smaller ones, or planting species which prevent pests and disease, for example:
Chives & Tomatoes
The sweet onion aroma produced by chives deters tomato threatening green flies and other predators if grown next to each other.
Rose & Garlic
Rose plants and garlic are perfect bedfellows, and this is because garlic is an effective safeguard from pests and fungal disease.
Spring Onions & Carrots
The aroma from onions is effective in preventing carrot root fly attacks. The carrot scent, conversely, prevents winged insects from making a meal of your onions.
Radish, Dill & Cucumber
By planting dill next to cucumbers, you can attract wasps, hoverflies and other beneficial predators that eat the insects that can destroy your crop. Radishes are also effective at reducing the chance of attracting cucumber beetles.
Reduce, Reuse & Recycle
When demand is at its peak, gardening activity can consume 70% of the UK water supply. This forces suppliers to tap into water reserves, thereby causing environmental damage and increased energy costs.
Due to such considerable demand, organic gardeners must devise their own solutions with a range of reduce, reuse and recycling methods, such as:
About 24,000 litres of rain falls annually in the UK, and by utilising a water butt, you can collect up to 160 litres of fallen rainwater at any one time. By stockpiling this water during the wetter parts of the year, you can play your part in protecting water supplies.
Although it may not look particularly appealing, water collected from sinks, showers and baths is perfectly fine to water your plants with, as long as the water doesn’t contain any chemical products such as disinfectant or bleach.
Even veteran gardeners, when faced with challenging weather conditions, can over or under-water plants, which overuses resources and can impair plant growth.
To ensure you’re not making this error, push a trowel or spade blade into the soil around the plants; if it’s damp, there’s no need to water again.
Generally, plants need about 24 litres of water every ten days. Sandy soils will need more water than heavy soils, and clay-based soil will require frequent high-volume watering.
Good compost has the ability to revitalise and fortify soil structure, which in turn helps to retain water and keep hold of crucial nutrients needed to help plant roots. This will then attract worms and other beneficial insects which will further enrich the compost.
Although many gardeners understand the benefits of compost, many still rely on the synthetic products that we’ve already covered. This stuff doesn’t help nutrient retention and can actually contaminate plants.
This is why organic gardeners make their own compost made from organic matter, since it is free, requires very little investment and is a great way to recycle food waste. |
Politics can be a dirty and unappealing sphere, but it can also be interesting for analyzing. If you like our political science essay example, we recommend you to check out our research paper about Donald Trump.
Who Was the Most Influential Political Figure of 1789-1791 and Why?
The American Revolution contributed to world history with many important political figures. Their names are internationally known and respected today. Undoubtedly, George Washington remains the most distinctive symbol of that period regarding his virtues and contribution to the success of the War of Independence. However, was he the most influential figure in the long run? The answer is no. He was the man of the past, virtuous spirit of colonial legacy and the first revolutionary years. America was built under his supervision, but not necessarily by his own hands.
Who else could be a candidate? There were two politicians, besides Washington whose contribution to American history necessitated their consideration for candidacy: Alexander Hamilton and Thomas Jefferson. Their famous rivalry reflected deep contradictions in the foundation of the American political system. Those contradictions appear over and over, stimulating convictions, but also allowing the country to develop.
Nonetheless, it is important to choose a certain figure whose influence in the first post-war years made a deep impact on the entirety of American history, and this person is Alexander Hamilton. There are three reasons to claim Hamilton as the most influential political figure of that period and further decades: his role in establishing the national economy and financial system, his personal legacy as a self-made man, and his role in forming national political culture and institutions.
The first, and in some ways, most essential argument lies in the business and economic perspective. Hamilton represented urban trading and manufacturing states with far-reaching economic plans. This formed his agenda as a politician in some way, nonetheless, we cannot deny his personal contribution as a powerful intellectual and scholar. He became Secretary of Treasury in the first Cabinet of George Washington and leader of the Federalist party, one of the most powerful political groups in early American history.
As Secretary of the Treasury, and broadly the sole creator of the economic and financial policy of the newly created state, Hamilton opposed Thomas Jefferson, who represented the agenda of rural states, farmers, and plantation owners. Among his most important decisions were the nationalization of the states’ debts (which was strongly opposed by the South because mostly northern states were indebted), the creation of the First Bank of the United States, and the normalization of trade relations with the British Empire. He represented the vision of America as a unified state, with a strong government able to protect its own economic and military interests and developing industries. He was able to support his claims not only in the Cabinet but also in the arena of politics (Cogliano, 205).
His political project, the Federalist Party, won the second presidential election (John Adams became President) and supported all the important legislation in Congress. Even though Jefferson won the election of 1800, taking office in 1801, and the Federalist Party ran into an ongoing crisis, Hamilton managed to form institutions and policies which outlived him. On the other hand, his policy drew America in the direction of the future conflict between the industrial North and the rural South. But it would be unfair to hold Hamilton responsible for a situation which existed long before his political career and remained after it. He was a man who tried to serve the interests of the country as a whole, moving beyond local policies.
The second argument for Hamilton’s influence is his own life. His biography has provided many exciting plots for popular culture and has shaped the worldview of Americans. He was a genuinely self-made man. He was born on a small island in the Caribbean, lost his parents early, and invested heavily in his education and public activism. Many generations of Americans grew up bearing his life in mind and living by its example. It is telling that there has been great attention to his personality in recent years. For example, the Broadway musical by Lin-Manuel Miranda describes his life and career. It starts with the questions: “How does a bastard, orphan, son of a whore, and a Scotsman, dropped in the middle of a forgotten spot in the Caribbean by Providence, impoverished, in squalor grow up to be a hero and a scholar?” The answer which Hamilton’s life provides is – by hard work, each and every day for the rest of your life. However, it is not only a matter of work but also of passion and confidence (Chernow, 42).
Third, Alexander Hamilton directed the American government toward the federal model, in which shared institutions always held power and authority. Nowadays, many politicians, both liberal and conservative, express worry and growing pessimism about the American government. It is often criticized due to the large share of political control belonging to the biggest corporations and the practice of lobbying. A lot of people see the early self-governing states as a model which could reform and restart America. Those institutions, too, are a result of Hamilton’s incredible political energy and far-reaching vision.
Nonetheless, those results could be valued differently. Some people benefit from social security and medical projects, while others lose jobs because of deindustrialization. America nowadays is full of visions of a bright future; however, it is doubtful whether we have politicians virtuous enough to adopt and launch those visions and, what is even harder, to coordinate them (Cogliano, 225). However, Hamilton was not a blind supporter of unlimited governmental power. It was clear that “Hamilton was as quick to applaud checks on powers as those powers themselves, as he continued his lifelong effort to balance freedom and order” (Chernow, 259).
After taking into consideration all the arguments expressed, I still claim that Alexander Hamilton is the most influential political figure in America from 1789-1791. He was able to contribute to three important areas of human life: economy, culture, and political institutions. His postwar career was marked by the opposition to Thomas Jefferson and James Madison and the proclamation of a strong federal government, economic protectionism, and development of the industry as well as establishing long-term international trade agreements with all possible partners (including Great Britain).
Hamilton is also widely known as an example of the true American spirit of self-development and public servitude. Even after his political project failed and opponents took control over federal institutions, he was able to preserve the long-term legacy and prosperity of America. On the other hand, he contributed to the growing conflict between the North and South, as well as supporters of a strong central government and state autonomy. Those conflicts are remarkable even for the current political discussion in the United States.
These are the reasons why I believe Alexander Hamilton was even more influential than his opponents, even though he never became President. Hamilton was one of those whose passion and confidence lives through the centuries, still giving life to the country he formed and protecting people he cared for.
Chernow, Ron. Alexander Hamilton. London, Head of Zeus, 2017.
Cogliano, Francis D. Revolutionary America, 1763-1815: A Political History. New York, Routledge, Taylor & Francis Group, 2017. |
When the magnetization of a ferromagnetic body is changed, it wants to start to rotate - this connection between the magnetization and the angular momentum was already observed in an experiment by Einstein and de Haas in 1915. The reason for this phenomenon is the fact that on a microscopic level, magnetization is intrinsically linked to the angular momentum of electrons. Unlike Einstein and de Haas at the time, physicists now know that both the orbital motion of the electron around the atomic nucleus as well as its spin - which is a purely quantum mechanical property which can to some extent be imagined as the rotation of the electron about its own axis - generate the magnetization. In fact, in a ferromagnetic solid the spin generates the lion’s share of the magnetization. When angular momentum is conserved, a change in magnetization must thus be accompanied with a change of other forms of angular momentum in the system - in the Einstein-de Haas Experiment, this was the resulting rotation of a suspended magnet after its magnetization had been changed. On a microscopic level, it is the corresponding motion of the atoms which constitutes the final reservoir of angular momentum.
Laser-driven Spin Dynamics in Ferrimagnets: How does the Angular Momentum flow?
Illumination with an ultrashort laser pulse is a means to demagnetize a material very fast - for the prototypical ferromagnets iron, cobalt and nickel, for example, the magnetization is extinguished within about one picosecond (10-12 s) after the laser pulse has hit the material. This has led to the question, through which channels the angular momentum associated with the magnetization is transferred to other reservoirs during the short time available. Researchers from MBI in Berlin together with scientists from Helmholtz Zentrum Berlin and Nihon University, Japan, have now been able to follow this flow of angular momentum in detail for an iron-gadolinium alloy. In this ferrimagnetic material, adjacent iron (Fe) and gadolinium (Gd) atoms have magnetization with opposite direction. The researchers have used ultrashort x-ray pulses to monitor the absorption of circularly polarized x-rays by the Fe and Gd atoms as a function of time after previous laser excitation. This approach is unique in that it allows tracking the magnetic moment during the ultrafast demagnetization at both types of atoms individually. Even more, it is possible to distinguish angular momentum stored in the orbital motion vs. in the spin of the electrons when the respective absorption spectra are analyzed.
With this detailed “x-ray vision”, the scientists found that the demagnetization process at the Gd atoms in the alloy is significantly faster than in pure Gd. This is, however, not due to an exchange of angular momentum between the different types of atoms, as one could have suspected based on their antiparallel alignment. “We understand the accelerated response of Gd as a consequence of the very high temperatures generated among the electrons within the alloy,” says Martin Hennecke, the first author of the study. Interestingly, a “reshuffling” of angular momentum between the spin and orbital motion of the electrons could also not be detected when following the laser-induced demagnetization with a temporal resolution of about 100 femtoseconds (10-13 s) - this is true locally at all the Fe and the Gd atoms. So how does the angular momentum flow? “Obviously, all angular momentum is fully transferred to the atomic lattice,” says Hennecke. “In line with recent theoretical predictions, the spin angular momentum is first transferred to the orbital motion at the same atom via the spin-orbit interaction, but we cannot see it accumulate there as it is directly moving on to the atomic lattice.” The latter process has recently been theoretically predicted to be as fast as 1 femtosecond, and the detailed experiments now confirm that this last transfer step is indeed not a bottleneck in the overall flow of angular momentum.
Given that short laser pulses can also be used to permanently switch magnetization and thus write bits for magnetic data recording, the insight in the dynamics of these fundamental mechanisms is of relevance to develop new approaches to write data to mass data storage media much faster than possible today. |
PARIS—Mammals suck. The ability to suckle milk is a defining characteristic of the group, and it is no small feat of evolution. Nursing—as well as drinking through a straw—requires complex anatomy to seal off the airway every time we suck and swallow.
But one branch of mammals doesn’t suckle: the egg-laying monotremes, which include today’s platypus and echidna, or spiny anteater. These animals lack nipples. Their babies instead lap or slurp milk from patches on their mother’s skin. Monotremes are thought to have diverged from other mammals roughly 190 million years ago, so most paleontologists figured that suckling evolved after that split.
Now, a close look at modern animals and key fossils from before the split suggests monotreme ancestors could suckle after all, but the animals later lost the ability as their mouths evolved to eat hard-shelled prey. The finding “puts a new light on monotremes” and suggests suckling was part of the original mammalian package, says paleontologist and functional anatomist Alfred “Fuzz” Crompton of Harvard University, who led the new studies.
The work is “incredibly interesting and really important” for understanding mammalian evolution, says neurophysiologist Rebecca German of Northeast Ohio Medical University in Rootstown. “They are beginning to understand the part of the anatomy that is critical to infant feeding.”
Previous research by Crompton and others has identified a suite of muscles that play a key role in suckling. One, called the tensor veli palatini, stretches from near the base of the ears to the edges of the soft palate, the tissue that forms the back part of the roof of the mouth. When you suck on a straw, this muscle pulls the soft palate taut so your tongue can form a tight seal with the roof of your mouth. When the front of the tongue drops, the mouth becomes an area of low pressure and you draw liquid in.
More recently, to better understand how suckling evolved, Crompton and his colleagues analyzed the heads of North American opossums, platypuses, and monitor lizards, as well as fossil skulls. At the 5th International Paleontological Congress here last week, Crompton’s co-author, research technician Catherine Musinsky, described the anatomy of two mammalian ancestors: Thrinaxodon, which lived roughly 250 million years ago, and Brasilitherium, which lived about 220 million years ago, both before the first common ancestor of living mammals. (That animal is thought to have lived in the early Jurassic, which began about 200 million years ago.)
Modern reptiles lack the tensor veli palatini and it seems Thrinaxodon didn’t have one either. But in Brasilitherium, the researchers found that the shape of the bones and the scars where muscles attached suggest that a primitive version of the muscle was present. That, along with other evidence, led them to the surprising conclusion that this ancient mammal relative could probably form a tight seal between its tongue and palate and might have suckled. The idea “is very well supported,” by the researchers’ combination of modern anatomy and fossil evidence, says paleontologist Zhe-Xi Luo of the University of Chicago in Illinois, who has also closely examined Brasilitherium.
To explore monotreme anatomy in more detail, the researchers also painstakingly sectioned and scanned the head of a modern platypus. Although these animals branched off from other mammals well after Brasilitherium, they have lost the tensor veli palatini. Instead, their mouth and jaw have evolved to grind up the hard shells of crustaceans they scoop off river bottoms with their flat snouts. They move their lower jaw from side to side to grind their prey with rough pads on the tongue and palate, which have replaced teeth. These adaptations mean the platypus can’t form the tight seal required to suckle.
Given that suckling is one of the defining characteristics of mammals, “It’s a bit surprising that one of the first groups to branch off lost it again,” Crompton says.
Luo agrees. Because suckling allows newborns to efficiently access high-quality, high-calorie food, platypus ancestors faced a potentially expensive trade-off when they gave it up for specialized feeding, he says. For example, licking milk from their mothers’ skin exposes the animals to a higher risk of infection. But such drawbacks may have led to other adaptations, Musinsky noted: Researchers have studied platypus milk for its potential antimicrobial properties.
German adds that Crompton and Musinsky’s approach offers a valuable way to understand how suckling behavior evolved. “Breasts don’t fossilize,” she says. “And there aren’t going to be many fossil tongues … so anything that we can extract in terms of comparative information is incredibly important.” |
This module is a resource for lecturers
The procedures and techniques used to identify, collect, acquire, preserve, analyse, and ultimately present digital evidence in court must be in accordance with existing criminal procedural law (discussed in Module 3 on Legal Frameworks and Human Rights). This form of law prescribes the rules of evidence and criminal procedure that must be followed to ensure that evidence is admissible in court. Information and communication technology (ICT) can provide evidence of a crime. Data obtained from ICT that can be used in a court of law is known as electronic evidence (a.k.a. digital evidence), and the process of identifying, acquiring, preserving, analysing, and presenting this evidence is known as digital forensics. This Module provides an in-depth examination of both digital evidence and digital forensics.
The sub-pages to this section provide a descriptive overview of the key issues that lecturers might want to cover with their students when teaching on this topic:
- Digital evidence
- Digital forensics
- Standards and best practices for digital forensics
Next: Digital evidence |
Science Classroom Strategies for English Learners – Learning with the iPad and Other Tablets
Veronica Betancourt, M.A., and Paula Johnson, M.A.
Technology is ever evolving in exponential leaps and bounds. Just a few years ago, the iPad debuted. Soon, we can expect the iPad 3 to make its way into our hearts. So what does this mean for educators and the field of education as it exists today? Schools are encouraged to ensure we educate our children to be globally competitive, yet the structure and ideology of schools have remained the same for decades. As such, a vast majority of classrooms simply become contexts of unproductive learning (Sarason, 2004).
IDRA’s new publication, Science Instructional Strategies for English Learners – A Guide for Elementary and Secondary Grades, presents seven umbrella research-supported strategies for the science classroom (Villarreal, et al., 2012). This article describes one of the strategies: maximize use of technology in delivery of effective science and EL instruction and use Internet resources to supplement and enrich instruction of EL students.
Technology has shifted the ways in which children engage and learn. Web 2.0 tools, such as blogs, wikis and social media sites, thrust the Internet from a platform of receptive communication (sit and get information) to one of interactive communication (dynamic, real-time interaction) and has created an urgency for us to engage learners in a manner that maximizes the resulting benefits. Capitalizing on students’ knowledge of navigating technology for social interaction can be transferred into an academic setting that creates ongoing opportunities for application of critical thinking skills toward real-world issues that promote real-world solutions.
The question that now resonates is: How can we use iPads and other tablets to effectively generate a dynamic learning environment for maximum engagement in rigorous instruction? Rigor has traditionally been equated with a mastery of the content and was only available to a select few. But there must be a transformation of this definition to include applicable skills in conjunction with content knowledge in order to effectively and efficiently respond to the dynamic world and changing circumstances we face (Bellanca & Brandt, 2010). This translates into understanding that rigor requires us to challenge students beyond their comfort zone emotionally, intellectually and academically.
In a three-part series of articles, we are going to share how use of the iPad and other tablets can be maximized in multiple contexts: learning with the iPad, teaching with the iPad, and leading with the iPad.
Learning with the tablets can be maximized when instruction is designed to focus on big, interrelated ideas accompanied by essential questions (Bellanca & Brandt, 2010). The iPad and other tablets have many possibilities for use in the classroom when applied to real-world circumstances that engage students in analyzing situations and applying critical and creative thinking to find reasonable solutions.
For English learners, this means having intentional opportunities to also engage in outcome-oriented discussions with justification. By allowing students to negotiate using such technologies as the iPad as a tool for learning, they can broaden their social and academic language skills and demonstrate their understanding of the content through expressive means (writing or speaking).
For example, in a middle school life science class, students are learning about food webs and the interactions between biotic and abiotic factors in the environment. It isn’t enough to just understand what food webs are and learn the terms biotic and abiotic. Rather, it is critical that students are able to apply that knowledge to real-world situations. So instead of simply practicing how to identify the energy transfer among organisms in a food web, students may be challenged to research a particular environment (i.e., rainforest in Peru – http://www.rainforestfoundationuk.org/Peru), identify unique flora and fauna to that region, and pinpoint threats that could upset the balance of that food web. Additionally, students can use social networking sites, such as Facebook (if over the age of 13), to investigate organizations with environmental concerns and compare their own ideas with those of practicing organizations (i.e., Rainforest Alliance).
Instructional rigor is achieved by extending the activity and engaging students in finding potential solutions that would prevent an environmental upset. These types of highly cognitive learning opportunities immediately increase rigor and require students to apply and negotiate their academic knowledge in a solutions-driven environment.
The tablet becomes a learner tool as students research the web and collect data that would contribute to the solution-driven activity. In completing the activity, the learner must have or acquire sufficient knowledge of: (1) what food webs are; (2) in what ways food webs are significant to an environment; (3) what abiotic and biotic factors are; (4) which biotic factors contribute to a food web; and (5) how abiotic factors contribute to or affect the success of a food web. Engaging in solution-driven activities with the iPad, etc., goes beyond superficial and lower-level tasks by requiring students to expand their knowledge in context and through active engagement with others.
Products that can be used to demonstrate learning and critical thinking include creating a public service announcement with an iPad or tablet and allowing students to edit and create a final video with iMovie, for example. English learners benefit greatly from this type of expressive task because they must negotiate their understanding of the topic with others in their group and engage in a cooperative team environment that requires extensive interaction with their peers to come to a common understanding of the issue at hand.
Additionally, students may be asked to use the iPad or other tablets to present their contrived solution in the form of a concept map and may include a visual representation that would demonstrate the catastrophic impact of how identified threats to the region could negatively impact the food web within the environment.
There are multitudes of learning apps and opportunities that can be used with tablets. This above scenario is just one of limitless ways in which the iPad and other tablets can effectively be used as a student-driven tool for learning. It is especially useful for English learners because it offers a medium for communication practices on both a social and academic level. Subsequent articles in the IDRA Newsletter series will focus on how tablets can be used as a teaching tool and as a leadership platform for catapulting teacher efficacy and student success.
Bellanca, J., & R. Brandt. (eds). 21st Century Skills: Rethink How Students Learn (Bloomington, Ind.: Solutions Tree Press, 2010).
Roth, W.M., & M.K. McGinn. “Graphing: Cognitive Ability or Practice?” Science Education (1997) 81(1), 91-106.
Sarason, S.B. And What Do You Mean by Learning? (Portsmouth, N.H.: Heineman, 2004).
Villarreal, A., & V. Betancourt, K. Grayson, R. Rodríguez. Science Instructional Strategies for English Learners – A Guide for Elementary and Secondary Grades (San Antonio, Texas: Intercultural Development Research Association, 2012).
Veronica Betancourt, M.A., is an education associate in IDRA Field Services. Paula Johnson, M.A., is an education associate in IDRA Field Services. Comments and questions may be directed to them via e-mail at
[©2012, IDRA. This article originally appeared in the September 2012 IDRA Newsletter by the Intercultural Development Research Association. Every effort has been made to maintain the content in its original form. However, accompanying charts and graphs may not be provided here. To receive a copy of the original article by mail or fax, please fill out our information request and feedback form. Permission to reproduce this article is granted provided the article is reprinted in its entirety and proper credit is given to IDRA and the author.] |
(The first in a series on technology’s influence on journalism)
In today’s world, journalism is an irreplaceable source of information for the masses. It is available not only through printed paper, television, and radio, but it is also accessible as far as the eye can see on the internet. Yet, this overwhelming amount of information only presents more problems for the public in this day and age. So how did information amalgamate with journalism, and ultimately, how did society become so reliant on it? In order to answer these questions we will delve deep into the history of information and how it was spread in the past, going as far back as Ancient Greece.
Before print was invented, information was distributed by word of mouth. The job of informing the people was often a sacred one. The Kērykes, for instance, had exactly this role during the time of the Ancient Greeks, “acting as inviolable messengers between states, even in time of war, proclaiming meetings of the council, popular assembly, or court of law, reciting there the formulas of prayer, and summoning persons to attend” (“Kēryx,” 1998). These people were equivalent to messengers of divine power, and to lie or misinform the people would be blasphemy. As a result, information spoken by these messengers was taken as absolute truth; after all, who could object to the words of a respected “messenger of god”?
Greek city of Lebadeia
The earliest form of written distribution of information traces back to the 10th century. People during the Sung Dynasty gained information through the distribution of Tipao, or public information sheets. These texts had information about local and nationwide news. Unfortunately, they were handwritten, and because they were written in Chinese, commoners in rural areas often had difficulty understanding the contents. Still, it is believed that this sacrifice and the transition to written, and therefore more permanent, sources of information led the way to a more mainstream source of information for the masses.
Technology and its evolution are ever-changing, and this is the same for technology relating to information. This segment in the history of information covered the transition from word of mouth to written text, but there is still much more to cover before reaching the current state of the world wide web of information available to us today.
Prominent ears or protruding ears are developmental problems. The ears are flat and stand out from your head during the early stages of your development (the fetal stage). By the time you are born, they are supposed to roll and fold, forming the helix and antihelix, and eventually they get closer to the head.
Prominent ears can happen due to interruption of the various stages of formation of the folds (helix and antihelix) and cavities (concha and scapha) of the ears. This is often a cause of embarrassment and bullying, and people tend to grow their hair long to cover the ears. They affect males and females equally; statistically, about 5% of people have this problem.
What exactly is the problem in prominent ears and does it require correction?
There are various components of prominent ears, such as absent antihelical folds, overdeveloped conchal cartilage, and/or an increased auriculocephalic angle (the angle between the ears and the skull bones).
It requires correction only if it is excessive or if it is bothering the person. If it is noticed during childhood, it may be a cause of concern to the parent more than the child. It is best to seek professional advice to decide whether or not it needs correction.
Are there any tests required before proceeding for treatment?
The most important test is physical examination of the ear. The surgeon will assess the following
- Auriculocephalic angle at three places- upper, middle and lower part
- Asymmetry of the ears- If there is more than 3 mm difference in protrusion of the ears it is considered asymmetrical
- Helix and Antihelical folds
- Conchal depth and scapha
- Size and protrusion of the ear lobes
- Finally, the length and breadth of the ear, to ensure it is not another abnormality such as Stahl’s ear or microtia.
- Associated ear symptoms such as hearing problem, ear infections etc
- Medical health check if indicated
What are the options for correction of prominent ears?
Non-surgical options are limited. Taping and splinting the ears against the skull before six months of age has been shown to be effective. There are various devices that can be customized for a child; they may have to be used for several months.
- Minimally invasive surgery: If a child presents at an early age, it is possible to insert sutures percutaneously to fold the ears. This requires very small incisions at three to four places to insert the sutures. However, it is an unpredictable method and the result may not last very long.
- Otoplasty: This is a surgical procedure to correct prominent ears. In children it is performed under general anaesthesia, whereas in adults it can be done under local anaesthesia with or without sedation. An incision is made at the back of the ear in an elliptical fashion to remove the excess skin. Then the cartilage is incised from the top part all the way to the concha. The cartilage is then rolled and fixed with 4-0 nylon sutures. If the cartilage is thick, mild scoring can be performed for better molding. If there is prominence of the conchal cartilage, it can be trimmed down at this stage. This is followed by reduction of the auriculocephalic angle by fixing the concha to the mastoid fascia. Finally, the ear lobes are corrected by excising skin and fatty tissue in a fish-tail incision pattern. The skin is approximated with absorbable sutures.
What is the postoperative care after otoplasty?
A bandage is applied after the surgery for 24 hours. This is followed by wearing a head band 20 hours a day for one week. The head band can then be used for 12 hours a day for a month. The first night can be painful, so a strong painkiller is advisable. Patients can return to work in one week.
Gear types can be classified according to the relative position of their axes of revolution. For example, there are gears for parallel shafts, gears for intersecting shafts, and gears for skewed shafts. Spur and helical gears are two different types of mechanical gears falling under the category of “gears for parallel shafts”. Spur gears are used in many devices, like the electric screwdriver, wind-up alarm clock, and washing machine. Helical gears, on the other hand, operate much more smoothly and quietly than spur gears. Helical gears are used in almost all car transmissions. Look as hard as you like, but you won’t find spur gears in your car!
Here are some additional differences between these two gears.
A spur gear can be identified by its teeth. Teeth are projected radially and are parallel to the axis of the gear. The teeth of a spur gear are exactly perpendicular to its flat faces. The teeth on helical gears, however, are cut at an angle to the face of the gear. When two teeth on a helical gear system engage, the contact starts at one end of the tooth and gradually spreads as the gears rotate, until the two teeth are in full engagement.
Spur gears are usually the first choice when exploring gear options, as they are easily manufactured and less cost is involved to manufacture them compared to helical gears.
Spur gears have straight teeth, and are mounted on parallel shafts. This fact makes spur gears simpler in design and easier to create than a helical gear, leading to a decreased cost of production.
Helical gears are capable of holding more load compared with spur gears, because the load is distributed across more teeth.
Helical gears are more durable than spur gears because the load gets distributed across more teeth. Hence, for a given load, the force will be spread out better than with a spur gear, resulting in less wear on individual teeth.
Spur gears can be really loud. Each time a gear tooth engages a tooth on the other gear, the teeth collide, creating noise. This also increases the stress on the gear teeth. Helical gears are quieter.
Helical gears are less efficient because they have more teeth touching when two gears are connected, resulting in increased friction and more energy loss due to heat. Spur gears are more efficient.
For more information on these gears, go through Gear Classification and Terminology: Spur and Helical Gears. To receive a course demo, click on the button below. |
Inorganic chemists often use IR spectroscopy to evaluate bond order of ligands, and as a means of determining the electronic properties of metal fragments. Students can often be confused over what shifts in IR frequencies imply, and how to properly evaluate the information that IR spectroscopy provides in compound characterization. In this class activity, students are initially introduced to IR stretches using simple spring-mass systems. They are then asked to translate these visible models to molecular systems (NO in particular), and predict and calculate how these stretches change with mass (isotope effects, 14N vs 15N). Students are then asked to identify the IR stretch of a related molecule, N2O, and predict whether the stretch provided is the new N≡N triple bond or a highly shifted N-O single bond stretch. Students are lastly asked to generalize how stretching frequencies and bond orders are related based on their results.
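As a rough illustration of the isotope-shift calculation students are asked to perform, the sketch below treats the N-O bond as a simple harmonic oscillator with an unchanged force constant, so the frequency scales as 1/√(reduced mass). The atomic masses are approximate, and the 1876 cm⁻¹ value for ¹⁴N¹⁶O is an approximate literature number used only for illustration; it is not taken from the activity itself.

```python
# Harmonic-oscillator estimate of the isotope shift for the N-O stretch.
# Assumes the force constant k is unchanged between isotopologues, so the
# frequency scales as 1/sqrt(reduced mass). Atomic masses are approximate.
from math import sqrt

def reduced_mass(m1, m2):
    """Reduced mass of a diatomic oscillator (in amu)."""
    return m1 * m2 / (m1 + m2)

nu_14NO = 1876.0  # cm^-1, approximate gas-phase stretch of 14N16O

mu_14 = reduced_mass(14.003, 15.995)  # 14N-16O
mu_15 = reduced_mass(15.000, 15.995)  # 15N-16O

# nu_15 / nu_14 = sqrt(mu_14 / mu_15) when k is constant
nu_15NO = nu_14NO * sqrt(mu_14 / mu_15)

print(f"Predicted 15N16O stretch: {nu_15NO:.0f} cm^-1 "
      f"(shift of {nu_14NO - nu_15NO:.0f} cm^-1)")
```

Running this predicts a downshift of roughly 30-35 cm⁻¹ on going from ¹⁴N¹⁶O to ¹⁵N¹⁶O, the same qualitative trend students should observe when they add mass to the spring-mass systems.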
Evaluate the effect of changes in mass on a harmonic oscillator by assembling and observing a simple spring-mass system (Q1 and 2)
Apply these mass-frequency observations to NO and predict IR isotopic shift (14N vs. 15N) (Q3 and 4)
Predict the identity of the diagnostic IR stretches in small inorganic molecules. (Q5, 6, and 7)
Springs, rings, stands, and masses (100 and 200 gram weights for example).
Assemble students into small groups (2-4) to discuss and answer the questions in the activity collaboratively.
Discuss students' responses with respect to the answer key.
This activity was developed for the IONiC VIPEr summer 2018 workshop, and has not yet been implemented.
Using Double Dip Homographs Activity, students match each cone with the two sentences using the same homographs.
Being able to use the correct homograph helps your students clearly share their messages. Your students also will encounter homographs while they are reading. Since homographs are spelled the same, your students will need to know how to pronounce the word correctly. This activity will help them build their understanding of when to use each homograph pair.
Match the word on each cone with two sentences to create a double-scoop delight! Then, choose 10 sentences to write on the worksheet.
If you are using this activity, your students are probably learning about homographs and homophones.
Use this Homophone Crossword Puzzle as an additional resource for your students.
Introduce this activity by having students share homograph pairs that they know. Next, students share how they know which pronunciation they need to use when they see a homograph. Teacher reads sentences aloud with homographs in them. The class gives a thumbs up or thumbs down depending on if the word was pronounced correctly. Then, students complete the activity with a partner. Finally, challenge students to create a list of other homograph pairs. This activity is perfect for an easy literacy center or early finisher activity.
Be sure to check out more Homophones Worksheets.
This guide aims to explain various ciphers, help you understand how they work, and how to decode them with or without a key.
This answer is currently being split into multiple posts to improve scrollability and readability after some advice from other users. This may take a while, and apologies for the stop-start fashion of it.
Mission accomplished! This answer now contains links to separate posts of different types of ciphers, so there is no character limit allowing me to elaborate in more detail and to stop you having to scroll. Thanks a lot to @n_palum for helping!
- What is a cipher?
- Brief History
- How to make a good one
- Difference between Codes and Ciphers
- Types of cipher
- Classes and definitions
- Transposition ciphers
- Monoalphabetic Substitution ciphers
- Polygraphic Substitution ciphers
- Polyalphabetic ciphers
- Other ciphers
- Mechanical Ciphers
- Frequency Analysis
- Index of Coincidence
- Kasiski Examination
What is a cipher?
Ciphers have played major parts in historical events dating back to around 1900 BCE, where apparently nonsensical hieroglyphics can be found. From there, ciphers developed: a recipe was found encrypted on a tablet from 1500 BCE, and Hebrew scholars were using monoalphabetic ciphers by 600 BCE. Nowadays, ciphers are common, with encryption used by companies, secret services and even everyday applications such as WhatsApp. They make the world a lot more secure, but what actually are these ciphers?
A cipher is, simply put, a way of hiding data using a disguised way of writing. It is usually an algorithm with the purpose of converting data to a code to stop outside parties from obtaining the data and allowing only the intended recipient access.
A cipher consists of at least two, often three, pieces of data:
- The plaintext - the message or data which shall be encoded
- The key (Not used for all ciphers) - A piece of data which is required to decode the ciphertext to the plaintext
- The ciphertext - the encoded plaintext which is usually illegible
The process of encryption is
Plaintext -> Method of encryption (type of cipher) + Key (if required) -> Ciphertext
Decryption is the reverse.
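To make the plaintext → key → ciphertext flow concrete, here is a minimal Caesar (shift) cipher in Python. It is only a toy illustration of the general process described above, not part of the linked answers and certainly not secure; the shift value plays the role of the key.

```python
from string import ascii_lowercase as alpha

def caesar(text, key, decrypt=False):
    """Shift each letter of `text` by `key` places; non-letters pass through."""
    shift = -key if decrypt else key
    out = []
    for ch in text.lower():
        if ch in alpha:
            out.append(alpha[(alpha.index(ch) + shift) % 26])
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("attack at dawn", 5)    # plaintext + key -> ciphertext
print(ciphertext)                           # fyyfhp fy ifbs
print(caesar(ciphertext, 5, decrypt=True))  # attack at dawn
```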
How to make a good one
On Puzzling, we don't want to just see a short string and be expected to solve it. For what to do and what not to do, see this meta post.
Difference between a Code and a Cipher
For everyone but cryptographers, the words code and cipher are synonymous. If you were to talk about codes and ciphers to someone you'd probably find they used the words interchangeably. But there is a difference.
Codes are everywhere, and you won't even notice the most of the time. A code replaces words or entire sentences or phrases with symbols or characters. The important thing here is that each set of symbols or characters have a meaning. These meanings are usually stored in a code book. For instance, telegraph communicators used code to convey messages quicker, here is an extract of one of their codebooks:
You can see that individual words can stand for whole sentences.
Codes are very common, and you use them without even thinking. A traffic light uses a colour code for the words 'stop', 'wait' and 'go'. Most people use code every day, probably including you, whilst talking in chat or texting things like 'brb', 'afaik' and 'idk'. The most common code, used for information interchange, is ASCII.
The point of codes isn't really to hide data, just to convert it into a form that is easier to transmit.
A cipher, on the other hand, the ciphertext has no meaning whatsoever. Each character is replaced according to an algorithm. For instance, Morse code isn't a code, it's actually a cipher.
Most ciphers were invented to hide data.
The difference broken down:
Codes generally operate on semantics, meaning, while ciphers operate on syntax, symbols. A code is stored as a mapping in a codebook, while ciphers transform individual symbols according to an algorithm.
Types of cipher
Classes and definitions
There are two different categories of ciphers: Classical (pen and paper) and the more modern Mechanical (requires a machine).
There are several different classes of classical ciphers, as listed below:
- Transposition ciphers - Positions of the characters in the plaintext change, but the characters themselves remain the same
- Monoalphabetic substitution ciphers - Each character (in most cases, though not all) is replaced with a different character or group of characters
- Polygraphic substitution ciphers - Groups of characters are replaced
- Polyalphabetic ciphers - Characters are encoded using a different alphabet. Usually position dependent.
- Others - Completely different, or above classes are combined
There are a few mechanical ciphers, which I will write a brief note on after the classical ciphers below.
Transposition ciphers involve moving the characters in the plaintext to different positions using an algorithm. The characters themselves remain unchanged, making this type of cipher insecure for short plaintexts.
See this separate answer for more details on different types of transposition ciphers.
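As a quick illustration of the idea (positions change, characters don't), here is a sketch of one common variant, simple columnar transposition, in Python. The keyword "zebra" and the message are arbitrary examples chosen for this sketch; the linked answer covers the full range of transposition schemes.

```python
def columnar_encrypt(plaintext, key):
    """Write the text row by row under the key, then read the columns
    in alphabetical order of the key letters."""
    text = plaintext.replace(" ", "")
    cols = len(key)
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    order = sorted(range(cols), key=lambda i: key[i])
    return "".join(
        "".join(row[i] for row in rows if i < len(row))
        for i in order
    )

print(columnar_encrypt("we are discovered flee at once", "zebra"))
# eodaeasreneielorceecwdvft  (same letters as the plaintext, reordered)
```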
Monoalphabetic substitution ciphers
Monoalphabetic substitution ciphers replace each letter in the plaintext with a different character/group of characters. If the plaintext is lengthy then these can be easily broken by frequency analysis.
See this separate answer for more details on different types of monoalphabetic substitution ciphers.
Polyalphabetic Substitution Ciphers
Polyalphabetic substitution ciphers involve replacing characters in the plaintext using several alternate alphabets, with the alphabet used usually depending on the character's position.
See this separate answer for more details on different types of polyalphabetic substitution ciphers.
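The classic example of this class is the Vigenère cipher, where each plaintext letter is shifted by the corresponding letter of a repeating key. The following is a minimal Python sketch for illustration only (it assumes a lowercase key and is not taken from the linked answer):

```python
from string import ascii_lowercase as alpha

def vigenere(text, key, decrypt=False):
    """Shift each letter by the corresponding key letter; the key repeats."""
    out, k = [], 0
    for ch in text.lower():
        if ch in alpha:
            shift = alpha.index(key[k % len(key)])
            if decrypt:
                shift = -shift
            out.append(alpha[(alpha.index(ch) + shift) % 26])
            k += 1
        else:
            out.append(ch)
    return "".join(out)

ct = vigenere("crypto is fun", "abcd")
print(ct)                                # csastp kv fvp
print(vigenere(ct, "abcd", decrypt=True))  # crypto is fun
```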
Polygraphic ciphers involve having groups of characters in the plaintext replaced.
See this separate answer for more details on different types of polygraphic ciphers.
Other ciphers are out there and many don't fit into any of the above categories. They can be combination ciphers, combining elements above to make them stronger, or just be completely different.
See this separate answer for more details on different types of other ciphers.
Also, see this community wiki of other ciphers that have been missed out, and feel free to add to it!
Mechanical ciphers rose to prominence around World War II. They rely on gearing mechanisms to shift letters through an alphabet to get the final message.
Most famous examples are the Enigma machine and the Lorenz machine. I won't be able to explain a machine very well, so I won't bother going into detail. See the links for more, or this list in Wikipedia.
There are many ways to attempt to break a cipher without a key. Here are the best ways (taken from my answer here):
Cryptanalysis is defined as
'the art or process of deciphering coded messages without being told the key.'
If you have the key and know the encryption method, you can simply reverse the process to get to the plaintext.
If you have the key but not the encryption method, then this question covers how you can identify the cipher
However, if you have neither the key nor the encryption then you can use cryptanalysis.
This can be used to achieve a
- Total break — working out the key and the plaintext.
- Global deduction — discovering the method of encryption and finding the plaintext, but not the key.
- Distinguishing algorithm — identifying the cipher from a random permutation.
There are a couple of different ways to solve ciphers:
Frequency analysis works best with substitution or rotation ciphers, even when they use keys. Frequency analysis studies the frequency of letters in a ciphertext.
Computers have calculated that in the English language, the order of the most frequent letters from high to low is etaoinshrdlcumwfgypbvkjxqz.
Here is the stats for analysis on the English language, including unigram, bigrams, trigrams etc.
As you can see from this graph, 'e' is by far the most frequent letter, while the letters from 't' down to 'r' are much closer together.
How to use
If the cipher is a substitution, and the ciphertext is quite large, then you can attempt to break the cipher.
Using an online tool such as this, you can find the most common letters and most frequent substrings.
The most frequent letter in the ciphertext probably stands for 'e', and so on.
Using this you can break a cipher, or get an almost correct plaintext which you can then deduce the correct plaintext.
Example found online. This is a known rot cipher, but we don't know what number:
Most common letters:
j = 13, y=13, n=11, t=10.
so we can assume either e = j or y. If e = j, then j is +5 from e so we can assume this is rot 5. Decoding using rot 21 (the reverse) gives:
So we have solved it using just one substitution.
This method really works best with a quite lengthy ciphertext and is almost useless with short ciphertexts.
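The same reasoning is easy to automate for rotation ciphers. The sketch below simply assumes the most common ciphertext letter stands for 'e' and undoes the implied shift; for a general substitution cipher you would compare the whole frequency profile instead. The example sentence is my own, not the one from the example above.

```python
from collections import Counter
from string import ascii_lowercase as alpha

def guess_rot(ciphertext):
    """Guess the shift of a rotation cipher by assuming the most
    common ciphertext letter stands for 'e'."""
    counts = Counter(ch for ch in ciphertext.lower() if ch in alpha)
    most_common = counts.most_common(1)[0][0]
    shift = (alpha.index(most_common) - alpha.index("e")) % 26
    plain = "".join(
        alpha[(alpha.index(ch) - shift) % 26] if ch in alpha else ch
        for ch in ciphertext.lower()
    )
    return shift, plain

shift, plain = guess_rot("klmluk aol lhza dhss vm aol jhzasl")
print(shift, plain)   # 7 defend the east wall of the castle
```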
Index of coincidence
The index of coincidence provides a measure of how likely it is to draw two matching letters by randomly selecting two letters from a given text; it depends on how many times each letter appears relative to the length of the text.
In its most basic (normalised) form, for a text of length N in which the i-th letter of the alphabet appears n_i times, the calculation is: I.C. = 26 × Σ n_i(n_i − 1) / (N(N − 1)).
How to use
The basis is that you split the ciphertext into groups of x and stack them as columns: if the key length equals x, then the I.C. of each column will be around 1.73 (the index of coincidence of the English language). If the key length isn't x, the I.C. will be around 1.
We have the following ciphertext:
QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA
IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP
MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV
We can guess this is Vigenère with a short key and that it's English. We can stack the letters in groups of, say, 3 (or any other number):
So if the key length is x, then the I.C should be around 1.73. Calculating all key lengths of 1-10:
We can see that 5 and 10 are the closest to 1.73, and as 10 is a factor of 5 then the key length will be 5.
Next, stack the ciphertext in groups of 5, and using frequency analysis on each column we can find the key. When we try this, the best-fit key letters for each column are "EVERY". A Vigenère decoder gives the message:
MUST CHANGE MEETING LOCATION FROM BRIDGE TO UNDERPASS
SINCE ENEMY AGENTS ARE BELIEVED TO HAVE BEEN ASSIGNED
TO WATCH BRIDGE STOP MEETING TIME UNCHANGED XX
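The calculation used above is easy to reproduce. The sketch below assumes the 26-letter English alphabet and the normalised form of the index of coincidence (≈1.73 for English-like text, ≈1.0 for random letters); it averages the I.C. over the columns for each candidate key length, as in the worked example.

```python
from collections import Counter
from string import ascii_lowercase as alpha

def index_of_coincidence(text):
    """Normalised I.C.: ~1.73 for English-like text, ~1.0 for random letters."""
    letters = [ch for ch in text.lower() if ch in alpha]
    n = len(letters)
    counts = Counter(letters)
    ic = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return 26 * ic  # multiply by the alphabet size to normalise

def average_ic_for_key_length(ciphertext, key_len):
    """Split the ciphertext into key_len columns and average their I.C.s."""
    letters = [ch for ch in ciphertext.lower() if ch in alpha]
    columns = ["".join(letters[i::key_len]) for i in range(key_len)]
    return sum(index_of_coincidence(col) for col in columns) / key_len

ct = ("QPWKALVRXCQZIKGRBPFAEOMFLJMSDZVDHXCXJYEBIMTRQWNMEA"
      "IZRVKCVKVLXNEICFZPZCZZHKMLVZVZIZRRQWDKECHOSNYXXLSP"
      "MYKVQXJTDCIOMEEXDQVSRXLRLKZHOV")
for k in range(1, 11):
    print(k, round(average_ic_for_key_length(ct, k), 2))
```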
The Kasiski Examination is another way of deducing the key length. Works best with longer ciphertexts, though a computer is then usually required.
The Kasiski Examination finds the repeated strings in the ciphertext and the distance between them. The distances are likely to be multiples of the keyword length. Finding more repeated strings means it is easier to find the key length, as it is the highest common factor/greatest common divisor of the distances.
(Courtesy of wikipedia, with some added elaboration.)
Take the plaintext
'crypto' appears twice in the plaintext, the distance between is 16 characters. (Count from the first c to the r before the second)
If the key is 'abcdef', the length is 6, which doesn't go into 16, so we don't get any repeats in the ciphertext:
'abcdef' matches 'crypto' the first time, but for the second crypto the key is 'efabcd' and as a result, the ciphertext doesn't match.
But if the key is 'abcd', the length is 4 which goes into 16. So the ciphertext repeats:
You can see that 'abcdab' lines up with 'crypto' both times. And hey presto we get a repeat in the ciphertext: 'cqwmtn'. |
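The search for repeated substrings and their distances is also easy to automate. In the sketch below, the ciphertext is my own small example ('cryptoisshortandcryptoislong' enciphered with the Vigenère key 'abcd'), since the full plaintext of the example above was not reproduced here; every repeat is 16 positions apart, so the key length must be a factor of 16.

```python
from math import gcd
from functools import reduce

def kasiski_distances(ciphertext, length=3):
    """Distances between repeated substrings of the given length."""
    text = "".join(ch for ch in ciphertext.lower() if ch.isalpha())
    last_seen, distances = {}, []
    for i in range(len(text) - length + 1):
        chunk = text[i:i + length]
        if chunk in last_seen:
            distances.append(i - last_seen[chunk])
        last_seen[chunk] = i
    return distances

# 'cryptoisshortandcryptoislong' encrypted with the Vigenere key 'abcd'
ct = "csastpkvsiqutbpgcsastpkvlppj"
dists = kasiski_distances(ct)
print(dists)               # every distance is 16
print(reduce(gcd, dists))  # 16 -> the key length is a factor of 16
```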
Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations
Multiplication Multi-Step Word Problems is a free educational video by Skubes ed. It helps students in grade 4 practice the following standard: 4.OA.A.3.
1. 4.OA.A.3 : Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be interpreted. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding..
DEVELOPMENT OF GAIT BY AGE:
Birth to 9 months: During the first few months of life, several things are happening that lead to upright movement. First, body composition is changing. On average, in the first 6 months of life, body fat increases from 12% to 25%. This increase in fat content makes the infant weaker for a period of time. In fact, some studies have suggested that larger infants with higher body fat percentage may achieve locomotor milestones later than their smaller friends. As they move towards their first birthday, fat content tends to drop while muscle mass increases and therefore, we see babies getting upright. Second, growth is happening more so in the arms and legs than in the head and trunk. This growth allows the baby to provide a greater resistance against gravity. Third, the baby is naturally exercising muscles that need to be strong for typical walking. On their backs, they are kicking, which develops antigravity hip flexor strength. Hip flexors are the big, thick muscles in front of our hips that allow us to pick our leg up and move it forward during walking. On their tummies, they are working out their hip extensors or booty muscles. These muscles work on and off and sometimes with the hip flexors to coordinate smooth walking. Studies show that antigravity control of movement by these two muscle groups at the hip joint typically develops by 8-9 months of age. Therefore, the baby may not even be able to stand independently, yet the hip muscles already know how to control gravitational forces. So, if the baby is moving and growing typically at this point, they are gaining muscle mass, losing fat content, and developing antigravity movement and therefore, postural control.
The spotted hyenas in the Ngorongoro Crater have become one of the world’s best-studied populations of free-ranging mammals. Outstanding are the detailed genetic pedigree that includes virtually all Crater hyenas and the long-term life history data from several thousand individuals. With these unique data we assess the role of spotted hyenas in savannah ecosystems and study key questions in evolutionary biology and ecology.
Social behaviour and life in groups
Spotted hyenas live in large groups characterised by a strict linear dominance hierarchy. They express highly complex social behaviours to interact with each other, build coalitions, and persuade members of the other sex to mate with them. We study which factors lead to social dominance, how social status influences behaviour and survival of offspring, and which advantages and disadvantages life in groups provides.
Mate choice and sexual conflicts
Due to the special anatomy of their outer sexual organs, particularly the ‘pseudopenis’, female spotted hyenas have complete control over mating. This makes hyenas a perfect model species to study mate choice, its impact on female and male reproductive tactics, and conflicts between the sexes.
Competitors and pathogens: Evolutionary driving forces or threats?
Food competitors, predators and pathogens drive adaptations of free-ranging animals. In healthy ecosystems, competitors and pathogens rarely force populations of carnivores out of balance. But when does it happen? How much competition can hyenas and other carnivores tolerate? And what happens when novel pathogens are transferred from livestock to wild animals? |
This program is perfect for students who have been introduced to essential sight words, but still need additional practice to help them reach mastery levels. Based on the Dolch List, the activities in each book cover most commonly used words in everyday reading and speaking.
Reinforce the skills covered in each of our Sight Words books with 248 full-color flash cards: 220 sight words from the Dolch List plus 28 colorful photo noun cards. This combination allows students to make phrases and sentences, laying the foundation for fluency in reading. Words are divided into five color-coded word lists, making it easy for readers to tackle essential words in small, manageable doses. Flash cards are printed on high-quality, glossy card stock. Simply cut and use with small groups or individual students.
Click here to download a free guide for using these flash cards, a list of all 248 words, and activity suggestions.
Take students on an interactive ride as they learn 220 essential sight words! Basic Sight Words Software combines text, pictures, and speech to help students who need extra instructional time with sight word recognition. Words are presented as “cards” with pre-recorded speech. Students click the “Flip Card” button and the computer reads a sentence with the sight word highlighted. Students can even record their speech! Multiple-Choice Mode allows students to read the sight word, and then select the corresponding speech from three choices. Requirements: Windows XP or later and Macintosh OS 10.5 or later. Touch screen and switch compatible. 512MB RAM.
Complete Program Includes:
- 2 Reproducible Activity Books
- 248 Full-Color Flash Cards
- 1 Single-User Software Program
- Model: REM 180G |
Plants make their own food (sugars) through photosynthesis, but they also need nutrients to grow. These nutrients/minerals come mostly from the soil. There are 16 of them. The chart lists the nutrients and highlights them on the periodic table of elements. There is a good way to remember them: use the mnemonic, a memory aid located at the bottom of the chart.
After getting over 6 inches of rain, we decided to plant. Native plants were purchased. The group of ASLA students and LA 122 students from the plant identification class at UC Berkeley's Landscape Architecture Dept. set them out according to our planting design. A small group of students and volunteers worked on terracing an eroding hillside: cardboard to block the weeds, jute netting to hold the cardboard down and keep the mulch from slipping down the hill, and redwood branches from a recently downed redwood staked in to hold the bank. We'll plant red twig dogwood, an appropriate riparian shrub, through holes cut through the mulch, netting, and cardboard. As the materials rot, the dogwoods will be dropping their roots into the soil and holding the creek bank against erosion.
Tiny satellites, some smaller than a shoe box, are currently orbiting around 200 miles above Earth, collecting data about our planet and the universe. It’s not just their small stature but also their accompanying smaller cost that sets them apart from the bigger commercial satellites that beam phone calls and GPS signals around the world, for instance. These SmallSats are poised to change the way we do science from space. Their cheaper price tag means we can launch more of them, allowing for constellations of simultaneous measurements from different viewing locations multiple times a day – a bounty of data which would be cost-prohibitive with traditional, larger platforms.
Called SmallSats, these devices can range from the size of large kitchen refrigerators down to the size of golf balls. Nanosatellites are on that smaller end of the spectrum, weighing between one and 10 kilograms and averaging the size of a loaf of bread.
Starting in 1999, professors from Stanford and California Polytechnic universities established a standard for nanosatellites. They devised a modular system, with nominal units (1U cubes) of 10x10x10 centimeters and 1kg weight. CubeSats grow in size by the agglomeration of these units – 1.5U, 2U, 3U, 6U and so on. Since CubeSats can be built with commercial off-the-shelf parts, their development made space exploration accessible to many people and organizations, especially students, colleges and universities. Increased access also allowed various countries – including Colombia, Poland, Estonia, Hungary, Romania and Pakistan – to launch CubeSats as their first satellites and pioneer their space exploration programs.
Initial CubeSats were designed as educational tools and technological proofs-of-concept, demonstrating their ability to fly and perform needed operations in the harsh space environment. Like all space explorers, they have to contend with vacuum conditions, cosmic radiation, wide temperature swings, high speed, atomic oxygen and more. With almost 500 launches to date, they’ve also raised concerns about the increasing amount of “space junk” orbiting Earth, especially as they come almost within reach for hobbyists. But as the capabilities of these nanosatellites increase and their possible contributions grow, they’ve earned their own place in space.
From proof of concept to science applications
When thinking about artificial satellites, we have to make a distinction between the spacecraft itself (often called the “satellite bus”) and the payload (usually a scientific instrument, cameras or active components with very specific functions). Typically, the size of a spacecraft determines how much it can carry and operate as a science payload. As technology improves, small spacecraft become more and more capable of supporting more and more sophisticated instruments.
These advanced nanosatellite payloads mean SmallSats have grown up and can now help increase our knowledge about Earth and the universe. This revolution is well underway; many governmental organizations, private companies and foundations are investing in the design of CubeSat buses and payloads that aim to answer specific science questions, covering a broad range of sciences including weather and climate on Earth, space weather and cosmic rays, planetary exploration and much more. They can also act as pathfinders for bigger and more expensive satellite missions that will address these questions.
I’m leading a team here at the University of Maryland, Baltimore County that’s collaborating on a science-focused CubeSat spacecraft. Our Hyper Angular Rainbow Polarimeter (HARP) payload is designed to observe interactions between clouds and aerosols – small particles such as pollution, dust, sea salt or pollen, suspended in Earth’s atmosphere. HARP is poised to be the first U.S. imaging polarimeter in space. It’s an example of the kind of advanced scientific instrument it wouldn’t have been possible to cram onto a tiny CubeSat in their early days.
Funded by NASA’s Earth Science Technology Office, HARP will ride on the CubeSat spacecraft developed by Utah State University’s Space Dynamics Lab. Breaking the tradition of using consumer off-the-shelf parts for CubeSat payloads, the HARP team has taken a different approach. We’ve optimized our instrument with custom-designed and custom-fabricated parts specialized to perform the delicate multi-angle, multi-spectral polarization measurements required by HARP’s science objectives.
HARP is currently scheduled for launch in June 2017 to the International Space Station. Shortly thereafter it will be released and become a fully autonomous, data-collecting satellite.
SmallSats – big science
HARP is designed to see how aerosols interact with the water droplets and ice particles that make up clouds. Aerosols and clouds are deeply connected in Earth’s atmosphere – it’s aerosol particles that seed cloud droplets and allow them to grow into clouds that eventually drop their precipitation.
This interdependence implies that modifying the amount and type of particles in the atmosphere, via air pollution, will affect the type, size and lifetime of clouds, as well as when precipitation begins. These processes will affect Earth’s global water cycle, energy balance and climate.
When sunlight interacts with aerosol particles or cloud droplets in the atmosphere, it scatters in different directions depending on the size, shape and composition of what it encountered. HARP will measure the scattered light that can be seen from space. We’ll be able to make inferences about amounts of aerosols and sizes of droplets in the atmosphere, and compare clean clouds to polluted clouds.
In principle, the HARP instrument would have the ability to collect data daily, covering the whole globe; despite its mini size it would be gathering huge amounts of data for Earth observation. This type of capability is unprecedented in a tiny satellite and points to the future of cheaper, faster-to-deploy pathfinder precursors to bigger and more complex missions.
HARP is one of several programs currently underway that harness the advantages of CubeSats for science data collection. NASA, universities and other institutions are exploring new earth sciences technology, Earth’s radiative cycle, Earth’s microwave emission, ice clouds and many other science and engineering challenges. Most recently MIT has been funded to launch a constellation of 12 CubeSats called TROPICS to study precipitation and storm intensity in Earth’s atmosphere.
For now, size still matters
But the nature of CubeSats still restricts the science they can do. Limitations in power, storage and, most importantly, ability to transmit the information back to Earth impede our ability to continuously run our HARP instrument within a CubeSat platform.
So as another part of our effort, we’ll be observing how HARP does as it makes its scientific observations. Here at UMBC we’ve created the Center for Earth and Space Studies to study how well small satellites do at answering science questions regarding Earth systems and space. This is where HARP’s raw data will be converted and interpreted. Beyond answering questions about cloud/aerosol interactions, the next goal is to determine how to best use SmallSats and other technologies for Earth and space science applications. Seeing what works and what doesn’t will help inform larger space missions and future operations.
The SmallSat revolution, boosted by popular access to space via CubeSats, is now rushing toward the next revolution. The next generation of nanosatellite payloads will advance the frontiers of science. They may never supersede the need for bigger and more powerful satellites, but NanoSats will continue to expand their own role in the ongoing race to explore Earth and the universe. |
In a dramatic illustration of the potential for microbes to prevent disease, researchers at Yale University and the University of Chicago showed that mice exposed to common stomach bacteria were protected against the development of Type I diabetes.
The findings, reported in the journal Nature, support the so-called "hygiene hypothesis" – the theory that a lack of exposure to parasites, bacteria and viruses in the developed world may lead to increased risk of diseases like allergies, asthma, and other disorders of the immune system. The results also suggest that exposure to some forms of bacteria might actually help prevent onset of Type I diabetes, an autoimmune disease in which the patient's immune system launches an attack on cells in the pancreas that produce insulin.
The root causes of autoimmune disease have been the subject of intensive investigation by scientists around the world. In the past decade, it has become evident that the environment plays a role in the development of some overly robust immune system responses. For instance, people in less-developed parts of the world have a low rate of allergy, but when they move to developed countries the rate increases dramatically. Scientists have also noted the same phenomenon in their labs. Non-obese diabetic (NOD) mice develop the disease at different rates after natural breeding, depending upon the environment where they are kept. Previous research has shown that NOD mice exposed to killed (i.e., non-active) strains of tuberculosis or other disease-causing bacteria are protected against the development of Type I diabetes. This suggests that the rapid "innate" immune response that normally protects us from infections can influence the onset of Type 1 diabetes.
In the Nature paper, teams led by Li Wen at Yale and Alexander V. Chervonsky at the University of Chicago showed that NOD mice deficient in innate immunity were protected from diabetes in normal conditions. However, if they were raised in a germ-free environment, lacking "friendly'' gut bacteria, the mice developed severe diabetes. NOD mice exposed to harmless bacteria normally found in the human intestine were significantly less likely to develop diabetes, they reported.
"Understanding how gut bacteria work on the immune system to influence whether diabetes and other autoimmune diseases occurs is very important," Li said. "This understanding may allow us to design ways to target the immune system through altering the balance of friendly gut bacteria and protect against diabetes."
Changyun Hu from Yale also contributed to their research. Other institutions involved in the study were Washington University; The Jackson Laboratory, Bar Harbor, Me.; Bristol University, United Kingdom; and the University of California-San Francisco. |
Good dental care and oral hygiene are extremely important to help prevent tooth decay and gum disease. This involves cleaning your teeth at least twice a day with fluoride toothpaste, visiting the dentist and hygienist regularly, and limiting the amount of sugar and acid in your diet.
Brush your teeth
You will benefit by following these steps when brushing your teeth:
• use a toothbrush with a small head and synthetic bristles
• use fluoride toothpaste to protect against decay
• brush all the tooth surfaces thoroughly, including the top of your lower teeth, the bottom of your upper teeth, and the back (tongue side) of all your teeth
• develop your own routine or order of brushing, to help ensure you brush each surface every time
• pay particular attention to your gum line, angling the bristles into the area where your gums meet your teeth
• spit out the toothpaste when you’re finished, but don’t rinse your mouth out; rinsing would stop the chemicals in the toothpaste from continuing to clean your teeth
• brush at least twice a day for about two to three minutes
• replace your toothbrush at least every three months, or sooner if the bristles are worn down
A good electric toothbrush will reduce the effort involved in ensuring that you clean your teeth effectively.
Clean between your teeth
Dental floss or interdental brushes will assist in removing plaque and small bits of food from between teeth and under gum lines. These are areas that a toothbrush can’t reach. It’s always important to use the correct technique, so ask for advice from your dentist or hygienist.
Unfortunately, brushing and flossing can’t remove all calculus, especially between the gums and the teeth. This can only be removed with special tools during a scale and polish by your hygienist.
Control sugar in your diet
Consuming sugary foods and drinks encourages tooth decay. However, it’s how often you eat these sugars, rather than the amount, that is important. It is best not to eat or drink them between meals; this will give your teeth a chance to be remineralised by saliva. Generally, it’s better for your health to reduce your sugar intake.
Control acid in your diet
To prevent dental erosion, it’s important not to have too many acidic foods and drinks. These include fruit juices, fizzy drinks and squashes (including ‘diet’ varieties), flavoured waters, vinegar, pickled foods, crisps and ketchup.
Stop smoking and reduce your alcohol intake
Smoking can stain your teeth and increase your risk of gum disease and tooth loss. Certain alcoholic drinks contain lots of sugar and increase the risk of tooth decay, so cutting back on these will protect you as well. |
What Will Happen When Flu Season Meets the COVID-19 Pandemic?
As the Northern Hemisphere flu season approaches amidst the COVID-19 pandemic, many of the same infection control measures used during the pandemic could help reduce flu cases. This article compares the two viral illnesses and the actions people can take to help prevent them.
What will happen when the 2020 flu season meets the COVID-19 pandemic? Will we be in a new “world of hurt” as the vulnerable succumb to one or both infectious diseases? Or, will we be more protected than ever from flu, thanks to infection control measures already in place for COVID-19? Much depends on us. According to a Web MD interview with Dr. Robert Redfield, Director of the U.S. Centers for Disease Control and Prevention (CDC), the same guidelines issued to prevent the spread of COVID-19, such as social distancing, wearing face masks, and washing hands frequently, also could help reduce flu transmission…if we stay vigilant.
Flu vs. COVID-19
According to CDC, flu and COVID-19 are similar respiratory illnesses with several significant differences. Both are caused by viruses, but less is known about the COVID-19 virus (a type of coronavirus) than flu (or influenza) viruses. The table below identifies similarities and differences between these two diseases. Nevertheless, people who develop respiratory symptoms in the coming months may require testing to know which illness they have.
| | Flu and COVID-19 Similarities | Significant Differences |
| --- | --- | --- |
| Symptoms | Fever or feeling feverish/chills, cough, shortness of breath or difficulty breathing, fatigue (tiredness), sore throat, runny or stuffy nose, muscle pain or body aches, headache, and vomiting and diarrhea | COVID-19 may include a change in or loss of taste or smell |
| Time from infection to symptoms | At least 1 day | It may take a person longer to develop COVID-19 symptoms (2-14 days) than flu symptoms (1-4 days) |
| Period of being contagious | At least 1 day | Most people can spread flu for about 1 day before showing symptoms. The spread of COVID-19 is still being studied, but it is possible for someone to spread the disease for about 2 days before showing symptoms and remain contagious for at least 10 days |
| How they spread | Person-to-person, mainly by airborne particles when they land in the mouths or noses of people nearby (or perhaps inhaled into the lungs); also, possibly by touching a surface contaminated with the virus and then touching one’s mouth, nose, or possibly their eyes | COVID-19 appears to be more contagious among some populations and age groups than flu; COVID-19 appears to have more super-spreading events, which, according to CDC, means the virus that causes COVID-19 can quickly and easily spread to many people and result in continuous spreading as time progresses |
| People at higher risk | Older adults (65 and older); people with underlying medical conditions; pregnant women | Flu is associated with a higher risk of complications in healthy children; school-aged children infected with COVID-19 are at higher risk of a rare but severe complication |
| Complications | These can include pneumonia, respiratory failure or distress, sepsis, cardiac injury, multiple organ failure, worsening chronic medical conditions, inflammation of the heart, brain, or muscle tissues, or secondary bacterial infections | Additional complications from COVID-19 can be blood clots in the veins and arteries of the lungs, heart, legs, or brain and Multisystem Inflammatory Syndrome in children |
| Treatment | Those at high risk of complications or those who have been hospitalized should receive supportive medical care to help relieve symptoms/complications | Flu treatment includes FDA-approved antiviral drugs; the National Institutes of Health has guidance on treatment of COVID-19; Remdesivir is being explored as a treatment for COVID-19 and is available under an Emergency Use Authorization |
Clues from the Southern Hemisphere
Each year, the severity of the Southern Hemisphere flu season can be an indicator of what’s to come in the Northern Hemisphere. The World Health Organization’s (WHO’s) most recent Influenza update notes, “The various hygiene and physical distancing measures implemented by Member States to reduce SARS-CoV-2 [coronavirus] virus transmission have likely played a role in reducing influenza virus transmission.” That said, the WHO warns flu surveillance data should be “interpreted with caution” as the COVID-19 pandemic has influenced “health seeking behaviors, staffing/routines in sentinel sites, as well as testing priorities and capacities in Member States.” So, while recognizing potential reporting difficulties may be contributing to a rosier picture than reality warrants, the relatively mild flu season in the Southern Hemisphere could be a good indication for the north.
Vaccines: The State of Play
Despite tremendous progress to develop safe and effective COVID-19 vaccines, we have not yet reached that goal. Flu vaccines, however, are readily available, and the CDC has purchased millions of extra doses to distribute this fall to reduce the number of cases. Dr. Redfield issued a plea to all of us: “Please don’t leave this important accomplishment of American medicine on the shelf. This is a year that I’m asking people to really think deep down about getting the flu vaccine.” The CDC Director would like to see 65% of the U.S. population vaccinated against the flu this year. Last year’s vaccination rate was about 45%. Adults 65 years of age and older should consider getting the high dose or “adjuvanted” vaccine for a better immune response. Older adults need to remember to get the flu shot and not the nasal spray vaccine.
But there are obstacles to accessing vaccinations of all types as we observe social distancing. The National Foundation for Infectious Diseases (NFID) notes, “Unfortunately, it’s impossible to get vaccinated from the comfort of your own couch. As a result, we have seen a troubling drop in routine immunization rates across all age groups in the U.S. We must reverse this trend now; otherwise, we could see outbreaks of dreaded vaccine-preventable diseases across the country, which would be a disaster, particularly during a pandemic.” Flu vaccines are generally widely available in the U.S., including at pharmacies in drug stores and supermarkets. Wherever you go to get your flu shot, remember to maintain a social distance of at least six feet from others as you wait in line, wear a face mask and do not touch it, and use hand sanitizer frequently (at least 60% alcohol) while you are out and about. NFID identifies strategies to make vaccinations quick, easy, and safer—including setting up vaccine “clinics” in parking lots and scheduling separate office hours for vaccinations.
Be Proactive for Your Health
This flu season, redouble your efforts to maintain a healthy lifestyle that includes a balanced diet, regular exercise, and adequate rest. Make sure you and your family members are vaccinated against flu, continue to wear a face mask and social distance in public, and wash your hands thoroughly and frequently. Disinfect frequently touched surfaces with surface disinfectants approved for use against COVID-19, or use regular household bleach according to CDC directions. (We used those directions to develop a bleach disinfection poster, and by following those directions you can destroy both the COVID-19 and flu viruses on surfaces.)
With vigilance, we can get through the “1-2-punch” of flu season and pandemic in good health.
Ralph Morris, MD, MPH, is a Physician and Preventive Medicine and Public Health official living in Bemidji, MN. |
Small waves in estuaries
We know that waves cleanse estuaries of silts and clays, keeping intertidal flats sandy and healthy. But how big do waves have to be to be effective in this way? New research shows that very small waves can be just as effective as big waves.
When it's windy, estuaries tend to turn brown and dirty-looking.
The discolouration is due to silts and clays being eroded from intertidal flats by waves and placed in suspension in the water column. Although waves make the water look dirty, they actually tend to cleanse intertidal flats of silts and clays, leaving behind the sandy sediments that we associate with a clean estuary.
That's because the sediments resuspended by the waves settle back to the bed quite slowly, and while they are settling they either get carried out to sea by tidal currents, or they are carried up to the headwaters of tidal creeks, where they settle amongst mangroves and saltmarshes. Either way, these sediments are effectively lost from the open-water part of the estuary.
Knowing this, we can account for the effects of waves in the numerical models that we use to predict estuary sedimentation, the dispersal of contaminants such as heavy metals (which are typically attached to fine-sediment particles), and the ecological effects of sediments that get washed in from the land when it rains.
By accounting for the waves, we can get much better predictions of things like the likelihood of heavy metals building up to toxic levels in urban estuaries, how sedimentation rates are likely to change as a result of sediment runoff from catchment development, and how the estuary ecology will fare in the future as, for example, agriculture becomes more intense.
Like many things, we tend to assume that bigger is better, and therefore that the biggest waves associated with the strongest winds are most effective at cleansing estuaries of fine sediments (and any contaminants that are piggybacking on the fine sediments).
However, you might have noticed that, even in very slight breezes, estuaries can become quite turbid (less clear) around the edges in very shallow water (less than 10 to 20 cm deep), and that this turbidity is linked to the very small waves that lap the shore under such conditions.
This raises the question: is it the big waves, which occur only infrequently under strong winds, or is it the small waves, which occur often under mild breezes, that really keep our estuaries clean?
The answer to this question is relevant to the way we run our predictive numerical models.
We conducted an experiment in the Tamaki estuary, Auckland, to measure waves and sediment resuspension.
We designed the experiment to measure the 'orbital' (back-and-forth) currents underneath the waves, and to determine how these changed over the tidal cycle as the water depth changed. We used a small instrument package that contained a current meter for measuring orbital currents, a pressure sensor for measuring wave height, and an optical backscatter sensor for measuring the amount of sediment suspended in the water column. The latter works by firing out short pulses of infra-red radiation (heat, essentially) and recording the amount of radiation that gets reflected back by (backscattered from) particles in suspension. The greater the backscattered radiation, the more sediment suspended in the water column.
Measurements were made on an intertidal flat that was submerged to a depth of around 1.2 m at high tide.
During one particular tidal cycle, a sea breeze from the north blew at about 5 m/s (10 knots), generating waves about 10 cm high on the intertidal flat. This provided us with the opportunity to observe and study the way very small waves interact with the bed sediments on this intertidal flat.
We discovered that even these very small waves generate significant orbital currents that are quite capable of resuspending bed sediments.
The orbital currents underneath the small waves reduce quite rapidly down through the water column: more than about 30 or 40 cm below the water surface the orbital currents diminish to virtually nothing, but within about 20 cm of the water surface the orbital currents reach 20 to 30 cm/s. This is around half a knot, or 0.9 km/hr: if the whole water column were moving at this speed you would find it quite an effort to swim against.
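This depth decay can be described by the standard linear (Airy) wave theory expression for orbital velocity. The Python sketch below is purely illustrative (it is not the processing used in the study), and the wave height, period and water depth are assumed values chosen to roughly match the conditions described above.

```python
import math

def wavenumber(T, h, g=9.81):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k (damped fixed-point iteration)."""
    omega = 2 * math.pi / T
    k = omega ** 2 / g                      # deep-water first guess
    for _ in range(200):
        k = 0.5 * (k + omega ** 2 / (g * math.tanh(k * h)))
    return k

def orbital_velocity(H, T, h, z_above_bed, g=9.81):
    """Horizontal orbital velocity amplitude (m/s) at a height z_above_bed (m) in water of depth h (m)."""
    k = wavenumber(T, h, g)
    return (math.pi * H / T) * math.cosh(k * z_above_bed) / math.sinh(k * h)

# Assumed values: 10 cm high waves with a ~1.2 s period in 1.2 m of water (roughly the high-tide case described).
H, T, h = 0.10, 1.2, 1.2
for z in (1.15, 0.80, 0.40, 0.05):          # heights above the bed, m
    below_surface_cm = round((h - z) * 100)
    print(f"{below_surface_cm:3d} cm below the surface: u = {orbital_velocity(H, T, h, z):.2f} m/s")
```

Running this reproduces the qualitative pattern described: orbital currents of roughly 20 cm/s close to the surface, dropping to only a few centimetres per second near the bed when the water is over a metre deep.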
As the tide rises, the top 20 cm or so of the water column that contains the strongest orbital currents gets lifted above the seabed, and no seabed sediment is disturbed.
However, as the tide falls, the strongest orbital currents get lowered onto the seabed: right before low tide, when the water is less than about 20 cm deep, the half-knot orbital currents touch down on the seabed and sediment is easily resuspended, creating the narrow band of muddy water that we see around the edges of the estuary when it is breezy.
When the tide returns to the site, the waves are brought along too, and early in the rising tide when the water is still less than about 20 cm deep, more sediment is resuspended.
We ran a model simulation based on 23 years of Auckland wind data to work out which waves do the most work on the seabed at the Tamaki site. The model simulated the generation of waves by wind, the orbital currents beneath the waves, the changing water depth due to the tide, and the resuspension of sediments by orbital currents that occurs when the currents are within range of the seabed.
Running the model over that 23-year period, we identified all of the times when waves resuspended sediments, and which parts of the intertidal flats were affected.
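As a rough illustration of how such an exceedance count could be set up, here is a minimal sketch. It is not the model used in the study: the wind-to-wave conversion, the critical near-bed velocity and the input series are all placeholder assumptions, and a real simulation would use a proper fetch-limited wave-growth formula together with the full 23-year wind and tide records.

```python
import math

def near_bed_orbital(H, T, h, g=9.81):
    """Near-bed orbital velocity amplitude (m/s) from linear wave theory."""
    omega = 2 * math.pi / T
    k = omega ** 2 / g
    for _ in range(200):                                   # damped iteration of the dispersion relation
        k = 0.5 * (k + omega ** 2 / (g * math.tanh(k * h)))
    return math.pi * H / (T * math.sinh(k * h))

def toy_wave(wind_speed):
    """Crude placeholder: wave height (m) and period (s) scaled from wind speed (m/s)."""
    return 0.01 * wind_speed, 0.3 * wind_speed + 0.5

# Placeholder hourly wind speed (m/s) and water depth over the flat (m); the real study
# used 23 years of Auckland wind records and the predicted tide.
wind  = [3, 5, 8, 12, 5, 2, 6, 10]
depth = [0.3, 0.8, 1.5, 1.9, 1.4, 0.7, 0.2, 0.9]

U_CRIT = 0.15                                              # assumed resuspension threshold, m/s

resuspension_hours = sum(
    1
    for U, h in zip(wind, depth)
    if h > 0.05 and near_bed_orbital(*toy_wave(U), h) > U_CRIT
)
print(f"{resuspension_hours} of {len(wind)} hours exceed the resuspension threshold")
```

Even with these toy numbers, the only hour that exceeds the threshold is a modest breeze over very shallow water, which is the behaviour the study describes for the top of the flat.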
We discovered that down at the base of the intertidal flat, where the water is greater than 2 m deep at high tide and the seabed is under water for nearly all of the tidal cycle, the largest waves associated with the strongest, least frequently-occurring winds do most of the work.
However, at the top of the flat, where the water is shallower and the seabed is submerged for only a small fraction of the tidal cycle, very small waves associated with frequently-occurring breezes do the most work.
This means that our models have to account for all waves – not just the largest waves – to make accurate predictions of sediments, contaminants and ecology in estuaries.
Green, M.O., 2011. Dynamics of very small waves and associated sediment resuspension on an estuarine intertidal flat. Estuarine, Coastal and Shelf Science, 93(4): 449–459. |
The international community started to take action to prevent further large-scale depletion of stratospheric ozone with the [Vienna Convention for the Protection of the Ozone Layer], 1985. In 1987 countries agreed on the [Montreal Protocol on Substances that Deplete the Ozone Layer], under which world consumption of specified chlorofluorocarbons (CFCs) and halons would be frozen and total CFC consumption would be reduced by 50% by the year 2000, relative to the base year 1986. Since then four amendments (London, Copenhagen, Vienna and Montreal) have been made to the original Protocol.
NASA scientists have used improved understanding of weather systems and of the higher layers of the Earth's atmosphere to produce new models of ozone depletion. Paradoxically, global warming is one of the key culprits causing the cooling. As greenhouse gases accumulate in the lower atmosphere, they absorb heat radiated from the Earth that would otherwise escape to, and warm, the upper atmosphere, leaving it cooler. Man-made chemicals, including CFCs, whose production the developed world is now phasing out, take 10 to 15 years to work their way upwards. At cooler temperatures, they are much more damaging to ozone.
Contents: 10 chapters
Biodiversity is a major topic right now, not just because of the threats it faces, but also because we now understand better than ever just how little we in fact know about it.
So what if species go extinct, genetic variation decreases and diverse ecosystems are turned into fields and heavily managed forests? Why do we need to save forty different species of dung beetle and dozens of different kinds of habitats?
Professional biodiversity scientists conduct research, write articles, collaborate, publicise their findings, and much more. But regular citizens are also important collectors of biodiversity data.
The more we study the biodiversity of microbes, the more diversity we find and the more important it turns out to be. Microbes are difficult to study because they are invisible to the naked eye, but in recent years technological advances have revolutionised our understanding of microbes.
The Finnish flora and fauna are likely among the best known in the world, but even here we still have a lot of species we don’t know about. With the species we do know, we often have no idea how they are doing. What we know least of all is how species will be doing in the future, as their environment changes.
The fragmentation of species’ habitats into several separate patches makes species’ lives more difficult in many ways. Habitat fragmentation is one of the most important human-caused threats to biodiversity, but many species have also adapted to life in a naturally fragmented habitat. |
With massive fronds creating a luxuriously green canopy in the understory of Australian forests, tree ferns are a familiar sight on many long drives or bushwalks. But how much do you really know about them?
First of all, tree ferns are ferns, but they are not really trees. To be a tree, a plant must be woody (undergo secondary plant growth, which thickens stems and roots) and grow to a height of at least three metres when mature. While tree ferns can have single, thick trunk-like stems and can grow to a height of more than 15 metres, they are never woody.
They’re also incredibly hardy — tree ferns are often the first plants to show signs of recovery in the early weeks after bushfires. The unfurling of an almost iridescent green tree fern fiddlehead amid the sombre black of the bushfire ash is almost symbolic of the potential for bushfire recovery.
Ancient family ties
Tree ferns are generally slow growing, at rates of just 25-50 millimetres height increase per year. This means the tall individuals you might spot in a mature forest may be several centuries old.
However, in the right environment they can grow faster, so guessing their real age can be tricky, especially if they’re growing outside their usual forest environment.
As a plant group, tree ferns are ancient, dating back hundreds of millions of years and pre-dating dinosaurs.
They existed on earth long before the flowering or cone-bearing plants evolved, and were a significant element of the earth’s flora during the Carboniferous period 300-360 million years ago, when conditions for plant growth were near ideal. This explains why ferns don’t reproduce by flowers, fruits or cones, but by more primitive spores.
In fact, fossilised tree ferns and their relatives, the fern allies, laid down during the Carboniferous have provided much of the earth’s fossil fuels dating from that period. Tree ferns were also a great food source: Indigenous people once ate the pulp from the centre of the tree fern stem, either raw or roasted, as a starch.
Until recent times, ferns were quiet achievers among plant groups with an expanding number of species and greater numbers. Today, human activities are limiting their success by the clearing of forests and agricultural practices. Climate change is also a more recent threat to many fern species.
Species you’ve probably seen
Two of the more common tree fern species of south eastern Australia are Cyathea australis and Dicksonia antarctica. Both species have a wide distribution, extending from Queensland down the Australian coast and into Tasmania.
They’re often found growing near each other along rivers and creeks. They look superficially alike and many people would be unaware that they are entirely different species at first glance. That is, until you look closely at the detail of their fronds and run your fingers down the stalks.
C. australis has a rough, almost prickly frond (hence its common name of rough tree fern) and can grow to be 25 metres tall. D. antarctica, the soft tree fern, has a smooth and sometimes furry frond and rarely grows above 15 metres.
Both contribute to the lush green appearance of the understory of wet forests dominated by eucalypts, such as mountain ash (Eucalyptus regnans).
Stems that host a tiny ecosystem
The way tree ferns grow is quite complex. That’s because growth, even of the roots, originates from part of the apex of the stem. If this crown is damaged, then the fern can die.
At the right time of the year, the new fronds unfurl in the crown from a coil called a fiddlehead. The stem of the tree fern is made up of all of the retained leaf bases of the fronds from previous years.
The stems are very fibrous and quite strong, which means they tend to retain moisture. And this is one of the reasons why the stems of tree ferns don’t easily burn in bushfires — even when they’re dry or dead.
In some dense wet forest communities, the stems of tree ferns are a miniature ecosystem, with epiphytic plants — such as mosses, translucent filmy ferns, perhaps lichens and the seedlings of other plant species — growing on them.
These epiphytes are not bad for the tree ferns, they’re just looking for a place to live, and the fibrous, nutrient-rich, moist tree fern stems prove brilliantly suitable.
Engulfed by trees
Similarly, the spreading canopies of tree ferns, such as D. antarctica, provide an excellent place for trees and other species to germinate.
That’s because many plants need good light for their seedlings to establish and this may not be available on the forest floor. Seeds, such as those of the native (or myrtle) beech, Nothofagus cunninghamii, may germinate in the crowns of tree ferns, and its roots can grow down the tree fern trunks and into the soil.
As time passes, the tree species may completely grow over the tree fern, engulfing the tree fern stem into its trunk. Decades, or even centuries later, it’s sometimes still possible to see the old tree fern stem embedded inside.
Still, tree ferns are wonderfully resilient and give a sense of permanence to our ever-changing fire-affected landscapes. |
By C. Kohn, Waterford, WI
The most obvious and important plant process affected by light is photosynthesis, the creation of sugar from water and carbon dioxide by using the energy of light. Many plant processes, particularly transpiration, change during the course of the day due to changing levels of light. Plant growth can also be affected by light; the same plants grown in different types of light will have different characteristics.
Visible white light is composed of all the colors in the visible spectrum. Light is simply a form of energy, and light energy travels in waves. Three aspects of light affect plants:
1. Quantity – brightness of light (the height of each wave in a wavelength)
2. Quality – color of light (the width of each wavelength, i.e. frequency)
3. Duration – amount of time light is present
Visible light is only one portion of the electromagnetic spectrum. It is composed of energy with wavelengths between 400 and 700 nanometers (1 nm = 0.000000001 m), measured from peak to peak. Longer wavelengths make up radio waves, etc.; shorter wavelengths carry much more energy, e.g. X-rays and UV rays.
Light intensity is determined by the size of the waves. The ‘taller’ the waves, the more intense the light; the shorter the waves, the less intense the light.
In general, the more light a plant receives, the higher the rate of photosynthesis. This in turn should translate into more plant growth and production. However, if a plant is not acclimated to bright light (e.g. if it was started indoors and moved outside too quickly), the pigments of the plant can be ‘bleached’ by intense sunlight. Plants started inside must be ‘hardened off’ or they may suffer in bright light.
Light intensity increases in summer because the rays of the sun are directly overhead. This causes them to be ‘concentrated’ on the area just below. In winter months, the sunlight is spread out over a larger area; this causes the intensity to decrease.
Light frequency is the number of wave peaks that pass a point each second. The smaller the wavelength, the greater the frequency, and the more energy carried by the light. The longer the wavelength, the smaller the frequency, and the less energy carried by the light.
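For reference, wavelength, frequency and photon energy are linked by c = λν and E = hν. The short calculation below is supplementary to the slide (the constants are standard physical values, not numbers from the presentation) and simply confirms the trend stated here: shorter wavelengths mean higher frequency and more energy.

```python
# Relationship between wavelength, frequency and photon energy (c = lambda * nu, E = h * nu).
C = 2.998e8            # speed of light, m/s
H_PLANCK = 6.626e-34   # Planck constant, J*s

for name, wavelength_nm in [("blue", 450), ("green", 550), ("red", 680)]:
    wl = wavelength_nm * 1e-9                # metres
    freq = C / wl                            # hertz
    energy = H_PLANCK * freq                 # joules per photon
    print(f"{name:>5}: {wavelength_nm} nm -> {freq:.2e} Hz, {energy:.2e} J/photon")
```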
Light frequency can be thought of as the color: the longer the wavelengths, the redder the color; the shorter the wavelengths, the bluer the color. Light intensity can be thought of as brightness: the taller the waves, the brighter the light; the shorter the waves, the dimmer the light. These are independent of each other. You can have dark (short) red (long) wavelengths, and you can have bright (tall) blue (short) wavelengths.
A pigment is any chemical substance that absorbs light. The color of the pigment is determined by the light that is not used by the pigment, or by the light that is reflected back into your eye. Plants have several kinds of pigments, each of which absorbs different kinds of light.
Chlorophyll a is found in all photosynthetic plants. Chlorophyll a absorbs light mostly in the violet-blue range and reddish orange range. Chlorophyll b and carotenoids are secondary pigments; not all plants have them. These pigments absorb light in the orange and green ranges. Because these types of light are less effective, chlorophyll b and carotenoids tend to have less of an impact on plants. The color of these pigments is usually evident only in fall as the chlorophyll a pigments shut down.
Most of the light used by a plant is in the blue range and the red range. Far-red and green are the least utilized by a plant. The blue and red ranges of light are called the Action Spectra for plants because they stimulate most plant activity.
Receptors in plants called phytochromes enable the plant to not only detect light, but also detect the quality of light. Phytochromes detect red light. Under changing intensity and wavelength of light, the phytochrome’s physical structure will change. This can cause a chain reaction inside the cell, creating a physical response.
For example, if a plant is blocked by the leaves of another tree, the plant will receive more far-red light than red light. This will cause the phytochrome to change its shape, creating a cascade of changes that will cause a longer stem and more branching.
Because smaller seeds have little in the way of energy reserves, their germination tends to be stimulated by light. Light will strike the phytochromes in small seeds, causing them to change shape. The changed phytochrome stimulates or inhibits genes in the DNA of the plant that create proteins related to germination.
Cryptochromes detect blue light. These structures work similarly to phytochromes. Cryptochromes are responsible for:
• Inhibiting stem elongation
• Moving the plant towards sunlight
• Opening the stomata
• Creating a circadian rhythm for a plant
Without the cryptochromes and phytochromes, plants would have a circadian rhythm that varied between 21 and 27 hours.
RED LIGHT ("height light"):
• Causes stem elongation
• Germination
• Branching
• Promotion of flowering (only with blue light)
• Detection of day length
BLUE LIGHT ("leaf light"):
• Inhibits stem elongation
• Phototropism (moving a plant towards light)
• Opening stomata
• Circadian rhythm
• Leaf growth
Under red light only, plants grow tall & spindly; under blue light only, they grow short & stocky.
Plant physiologist Michael J. Kasperbauer made a career of "seeing" light the way plants do: in wavelengths, some of which cannot be detected by the human eye. In research done with Clemson University, he and ARS soil scientist Patrick G. Hunt found that tomato plants grown over red mulch yielded about 20 percent more fruit than those grown over standard black mulch. He later found that strawberries grown over red mulch smelled better, tasted sweeter, and yielded more than those grown over black mulch. http://www.ghorganics.com/New%20Findings%20on%20How%20Mulch%20Color%20Can%20Affect%20Food%20Plants.htm
So far, we’ve covered two of the three factors: 1. Light Intensity (brightness) and 2. Light Quality (color of light). The third is light duration. How long the light lasts each day (or more accurately, how long the dark lasts each night) affects crucial plant processes, particularly flowering.
The flowering period of many plants is controlled by the photoperiod, or length of uninterrupted darkness. There are three kinds of photoperiod response:
1. Short-day plants – need long nights and short days to flower; e.g. poinsettias
2. Long-day plants – need short nights and long days to flower; e.g. most vegetable crops
3. Day-neutral plants – unaffected by length of day; e.g. dandelions
It is not the length of the day, but the length of the night, that determines this aspect of plants. For example, a greenhouse of poinsettias should not be lit up at night!
It took ten years after Brown, but beginning with the Civil Rights Act of 1964, the nation committed to desegregation and it worked. Courts and executive agencies consistently supported desegregation plans and from 1968 to 1988, as more schools integrated, academic achievement increased for African American students.
But, the legal and political tide turned against integration during the 1980s. Courts stopped ordering desegregation plans and began dismantling existing plans - both court-ordered and voluntary. Federal agencies stopped aggressive enforcement and by 1989 schools were beginning to resegregate, reversing many of the academic gains of the previous 20 years.
Percentage of Students in Extremely Segregated Schools
For African Americans in the South, which is now significantly more integrated than most of the rest of the country, the rate of resegregation since 1988 is the worst. In the Northeast, where schools have been getting more segregated since the 1960s, and in many large cities, minority students are the most segregated. For Hispanic students, integration never had a chance to take hold in any region.
Why are schools resegregating?
There are a number of factors that appear to have combined to cause the rapid resegregation of schools since 1991. First, beginning in the 1980s, courts turned against desegregation plans - denying new petitions to desegregate schools, ending previous court imposed plans and even striking down voluntary plans created by local school districts. Executive branch agencies have stopped the aggressive campaign to enforce the Brown decision and the Civil Rights Act that was so successful in the 1960s and '70s. At the same time, rapid growth in the Hispanic and African American population and growing income disparities have increased the concentration of minorities in high poverty districts.
What can change this trend?
- Federal and state laws creating incentives for more affluent schools and districts to enroll transfer students from poor and racially isolated schools, including providing transitional programs for new students and training on diversity issues.
- Federal and state financial incentives to help low-income districts recruit and train teachers.
- Increasing federal and state support for intensive academic after-school programs for children in low-income districts.
- Support research and focus public attention on the benefits of diversity for all students, white and minority, including the increased long-term college and economic success experienced by children who are educated in racially diverse schools.
- Ensure that public charter and public magnet school programs are implemented in a way that increases integration, rather than increases segregation as they do now in some districts.
More About Resegregation
- New Report Shows School Segregation Is Increasing - 1/26/09
- Resegregation Looms as 'Brown' Decision Abandoned - 02/02/04
- Reviving the Goal of an Integrated Society: A 21st Century Challenge - Civil Rights Project - January 2009
Source for segregation statistics: Brown at 50: King's Dream or Plessy's Nightmare?, Gary Orfield and Chungmei Lee, Civil Rights Project at Harvard University. |
Enzymes are known to catalyze more than 5,000 biochemical reaction types. Most enzymes are proteins, although some are catalytic RNA molecules; the latter are called ribozymes.
Some enzymes can make their conversion of substrate to product occur many millions of times faster. Enzymes differ from most other catalysts by being much more specific. Louis Pasteur wrote that “alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells.” The biochemical identity of enzymes was still unknown in the early 1900s; the scientists who later showed that enzymes are proteins and could be crystallized were awarded the 1946 Nobel Prize in Chemistry. Enzymes are classified by “EC” numbers, where EC stands for “Enzyme Commission”.
The first number broadly classifies the enzyme based on its mechanism; an enzyme is fully specified by four numerical designations. Reaction rate increases exponentially with temperature until denaturation causes it to decrease again. The sequence of the amino acids specifies the structure, which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzymes are usually much larger than their substrates.
The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site. Lysozyme, for example, can be displayed as an opaque globular surface with a pronounced cleft into which the substrate, depicted as a stick diagram, snugly fits. Enzymes must bind their substrates before they can catalyse any chemical reaction. In high-fidelity mammalian polymerases, a two-step checking process results in average error rates of less than 1 error in 100 million reactions. The enzyme changes shape by induced fit upon substrate binding to form the enzyme-substrate complex; the older, simpler picture, in which enzyme and substrate fit together rigidly, is often referred to as “the lock and key” model.
This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined. Enzymes can lower the activation energy of a reaction in several ways, for example by creating an environment with a charge distribution complementary to that of the transition state to lower its energy, or by temporarily reacting with the substrate, forming a covalent intermediate to provide a lower energy transition state.
The contribution of this mechanism to catalysis is relatively small. Enzymes may use several of these mechanisms simultaneously. Different states within this ensemble may be associated with different aspects of an enzyme’s function. Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes.
Thiamine pyrophosphate can be displayed as an opaque globular surface with an open binding cleft into which the substrate and cofactor, both depicted as stick diagrams, fit. Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes.
Welcome the DE Summer School special edition of SOS. Our S.O.S series provides help, tips, and tricks for integrating DE media into your curriculum. During August, we’ll be featuring our STAR Community’s favorite strategies and how they have made them their own.
Special thanks to DEN STAR and HS Chemistry and Physics teacher Michelle Joyce (@awesomescience and @marvellousmath) from Collier County Public Schools for sharing how she brought this SOS to her students.
We use S.O.S. journals throughout the year to explore prior knowledge, misconceptions, scientific understanding as well as progress in both formative and summative formats. The journals are written into the second half of a composition notebook with diagrams and or word maps on left hand pages and written information on the right hand pages.
Students will either watch a video segment from DE (maximum 5-7 minutes) or a YouTube clip or a live demonstration by the teacher (or student volunteers). Their journaling consists of three stages.
1. Observations – immediately after observing the required material, students write down what happened and draw a labelled diagram to aid explanation.
2. Questions – students then write down what questions they need answered, either by the teacher or by research (textbook, internet, each other, etc) before they can fully explain what happened or further information required.
3. Explanations – students are then allowed to ask their questions, discuss with each other, or search for answers on their own devices or in books and write a detailed explanation of the science behind what they have observed.
By using independent journaling, all levels of students, including ESE, ELL, gifted students, can describe and explain to the best of their ability and also see improvement throughout the year. The journal becomes part of their permanent record along with their student lab book (first half of the composition notebook). |
In this experiment, Temperature Probes are placed in various liquids. Evaporation occurs when the probe is removed from the liquid’s container. This evaporation is an endothermic process that results in a temperature decrease. The magnitude of a temperature decrease is, like viscosity and boiling temperature, related to the strength of intermolecular forces of attraction. In this experiment, you will study temperature changes caused by the evaporation of several liquids and relate the temperature changes to the strength of intermolecular forces of attraction. You will use the results to predict, and then measure, the temperature change for several other liquids.
You will encounter two types of organic compounds in this experiment—alkanes and alcohols. The two alkanes are n-pentane, C5H12, and n-hexane, C6H14. In addition to carbon and hydrogen atoms, alcohols also contain the -OH functional group. Methanol, CH3OH, and ethanol, C2H5OH, are two of the alcohols that we will use in this experiment. You will examine the molecular structure of alkanes and alcohols for the presence and relative strength of two intermolecular forces—hydrogen bonding and dispersion forces.
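As a rough illustration of how the temperature change can be pulled out of the probe readings, the sketch below computes Δt (maximum minus minimum temperature) for a few liquids. The readings are invented for illustration, not Vernier data, but they follow the expected pattern: liquids with stronger intermolecular forces, such as hydrogen-bonded water, show a smaller temperature drop.

```python
# Hypothetical Temperature Probe readings (deg C), one per second, after the probe
# is removed from the liquid; evaporative cooling pulls the reading down to a minimum.
readings = {
    "ethanol":  [24.9, 23.1, 21.0, 19.4, 18.2, 17.6, 17.5, 17.8],
    "n-hexane": [25.0, 21.5, 18.0, 15.6, 14.2, 13.9, 14.1, 14.6],
    "water":    [25.1, 24.6, 24.1, 23.8, 23.6, 23.5, 23.5, 23.6],
}

for liquid, temps in readings.items():
    delta_t = max(temps) - min(temps)   # Delta t = initial (maximum) minus minimum temperature
    print(f"{liquid:>9}: Delta t = {delta_t:.1f} deg C")
```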
In this experiment, you will
Study temperature changes caused by the evaporation of several liquids.
Relate the temperature changes to the strength of intermolecular forces of attraction.
Sensors and Equipment
This experiment features the following Vernier sensors and equipment. |
Climate Change Adaptation
Increasing Local Climate Resilience
A warming climate favors intense storms like Hurricane Irene, 2011. The swollen Mohawk River flooded Jumpin' Jacks Drive-in, located on the river's bank in Scotia, New York.
Individuals, communities, organizations and institutions have opportunities now to protect their most important assets -- human, natural and infrastructure -- from the impacts of climate change.
Public health and safety, key responsibilities of every local government, today include adapting the community to a changing climate by increasing the resilience of its natural and human systems to climate hazards. Even if we are successful in mitigating greenhouse gas (GHG) emissions, some climate change will occur from GHGs already emitted. Resilient communities evaluate how climate change may affect them and take steps to counteract these impacts.
Climate Impacts are Local
Extremely hot summers in 2005 and 2010 dried up the Plattekill (above) and other New York streams, breaking century-old records.
For New York, climate change means changes in weather patterns and sea levels. As the global average temperature rises, localities are seeing more extreme precipitation, heat and storms. Rising sea levels make storm surges more damaging and can inundate low-lying areas. These weather extremes and sea level changes are the chief hazards associated with climate change.
Extreme weather and flooding have profound and immediate impacts on people, economies, the built and natural environments, and human activities like agriculture and recreation that rely on predictable and moderate conditions.
- Flooding and extreme precipitation threaten health and safety by contaminating water, threatening food and water supplies and promoting insect-borne diseases.
- Temperature-sensitive agricultural products, including maple syrup, apples and dairy, also will decline as temperatures warm. Scientists expect continuing temperature rise, with hot, dry spells of several weeks' duration.
- Drought reduces the productivity of field crops, such as grains, corn silage and hay, and the availability of water for drinking, hydropower production and irrigation.
- Projected at between 4 inches and 33 inches in this century (or even more if the earth's large ice sheets begin to melt), the amount of sea level rise that actually occurs will depend in part on how successfully, and how soon, nations are able to reduce greenhouse gas emissions.
Climate Change Adaptation is also Local
Local Adaptation Planning
Adaptation planning protects public health and safety, and also reduces the economic and social costs of the changing climate. Scientists project global climate change impacts; scaled down to regional levels, these projections underlie the three steps of local adaptation planning:
- Assessing local climate change hazards -- typical hazards would be extreme heat or storms, and sea level rise; regional-scale information about hazards that communities can use in planning for adaptation is being developed.
- Identifying local vulnerabilities to climate change -- natural or human resources in the local community are subject to harm if climate hazards affect the local area; typical local vulnerabilities might include roads or treatment plants near rising water, or ill or aged populations without air conditioning.
- Evaluating local climate risk -- climate risk includes both the likelihood that harm will actually occur and the seriousness of the consequences if it does.
Trout and other New York fish and wildlife, long adapted to cool conditions, will not thrive in warm streams.
The NYSERDA ClimAID and IPCC Managing Risks links at right provide more information about planning for climate resilience. Once vulnerabilities and risks are clear, the adaptation planning process concludes with selection and implementation of adaptation actions, followed by evaluation of how much the locality's resilience has improved.
Local governments can develop separate climate adaptation plans, or incorporate adaptation planning in comprehensive plans or other ongoing planning projects. Many adaptation experts refer to the incorporation of climate-change into these routine plans as mainstreaming, and many consider it to be more important than development of a stand-alone adaptation plan. To assist with mainstreaming efforts, the Climate Smart Communities program has developed Climate Smart Resiliency Planning (see Important Links on right), a tool to help municipal leaders work collaboratively across departments to recognize opportunities to enhance community resilience in existing documents and to begin to create a set of integrated planning documents that identify vulnerabilities, assess risk and mitigate hazards. A spreadsheet version of the Climate Smart Resiliency Planning tool to facilitate information gathering is also available upon request to [email protected].
Climate Smart Community task forces are ideally positioned to lead local climate adaptation programs. Broad community participation in risk assessments and adaptation actions will result in the greatest improvement in resilience.
Planning for climate change and acting to increase resilience will probably cost money. But the cost of failing to adapt as the climate changes is likely to outweigh any savings from delaying a response. The ClimAID statewide climate change adaptation study estimates that without adaptation measures, by mid-century annual climate change costs for New York State's key economic sectors may approach $10 billion.
Local Adaptation Action
Infrastructure located where flooding is likely from sea level rise or heavy rains constitutes a climate vulnerability. Resilient communities recognize and plan to reduce risk to infrastructure and other resources.
Although adaptation planning and incorporation of climate change considerations into existing planning processes are important, municipalities need not - indeed should not - wait for planning processes to conclude before taking positive adaptive actions.
"No- regrets" actions and policies, which help protect against the effects of future climate change and also provide enhanced protection from current climate risks, are a good starting point for community adaptation.
Typical adaptation actions include planning, communication and preparedness for extreme weather events; incorporating expected changes into land-use decision-making processes; guiding development out of flood-prone areas; improving the resiliency of shorelines, natural systems, and critical infrastructure; applying cost-effective green technologies and using natural systems to reduce vulnerabilities; and conserving healthy forest, wetland and river ecosystems and agricultural resources, which are vital to successful climate change adaptation.
Columbia Law School has created a Climate Change Adaptation Resources page which is organized by federal and state efforts. Here you can find out what New York State and your local governments are doing to adapt to climate change. (See link on right)
More about Climate Change Adaptation:
- Hudson River Climate Resilience Case Studies - How the Kingston Waterfront Task Force handled threats to their downtown from flooding and sea level rise. |
The human mind can rapidly absorb and analyze new information as it flits from thought to thought. These quickly changing brain states may be encoded by synchronization of brain waves across different brain regions, according to a new study.
The researchers found that as monkeys learn to categorize different patterns of dots, two brain areas involved in learning — the prefrontal cortex and the striatum — synchronize their brain waves to form new communication circuits.
“We’re seeing direct evidence for the interactions between these two systems during learning, which hasn’t been seen before. Category-learning results in new functional circuits between these two areas, and these functional circuits are rhythm-based, which is key because that’s a relatively new concept in systems neuroscience,” says Earl Miller, the Picower Professor of Neuroscience at MIT and senior author of the study, which appears in the June 12 issue of Neuron.
There are millions of neurons in the brain, each producing its own electrical signals. These combined signals generate oscillations known as brain waves, which can be measured by electroencephalography (EEG). The research team focused on EEG patterns from the prefrontal cortex — the seat of the brain’s executive control system — and the striatum, which controls habit formation.
The phenomenon of brain-wave synchronization likely precedes the changes in synapses, or connections between neurons, believed to underlie learning and long-term memory formation, Miller says. That process, known as synaptic plasticity, is too time-consuming to account for the human mind’s flexibility, he believes.
“If you can change your thoughts from moment to moment, you can’t be doing it by constantly making new connections and breaking them apart in your brain. Plasticity doesn’t happen on that kind of time scale,” says Miller, who is a member of MIT’s Picower Institute for Learning and Memory. “There’s got to be some way of dynamically establishing circuits to correspond to the thoughts we’re having in this moment, and then if we change our minds a moment later, those circuits break apart somehow. We think synchronized brain waves may be the way the brain does it.”
The paper’s lead author is former Picower Institute postdoc Evan Antzoulatos, who is now at the University of California at Davis. |
Rodger from the United Kingdom asks: How was the nautical mile arrived at and why is the speed at sea called knots? Was there a means of determining the knot in bygone sailing days?
A nautical mile is a distance on the earth's surface of 6,080 feet, which is equal to one minute of latitude at the earth's equator. Since there are 360 degrees around the earth, and each degree equals 60 minutes, the distance around the earth, at the equator or any other great circle, is 21,600 nautical miles. (A great circle is like a circumference.)
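A quick back-of-the-envelope check of those figures (using the 6,080-foot nautical mile quoted above):

```python
# 360 degrees x 60 minutes per degree = nautical miles around a great circle
print(360 * 60)                      # 21600

# Circumference implied by a 6,080 ft nautical mile, converted to statute miles (5,280 ft each)
print(6080 * 21600 / 5280)           # about 24,873 statute miles
```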
The origin of the nautical mile started with the realization that the earth was spherical and not flat. It was Pythagoras who first put forward the theory in 580 b.c.
A major advance that made early navigation much more accurate was the invention of the chip log (c.1500-1600). Essentially a crude speedometer, it consisted of a light line knotted at regular intervals and weighted to drag in the water. It was tossed over the stern as the pilot counted the knots that were let out during a specific period of time.
The knots were spaced at a distance apart of 47 feet 3 inches and the number of these knots which ran out while a 28-second sand glass emptied itself gave the speed of the ship in nautical miles per hour. The proportion of 47 feet 3 inches to 6,080 feet is the same as 28 seconds to one hour.
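The arithmetic behind the chip log can be checked the same way; the knot counts below are example values, not historical records.

```python
knot_spacing_ft = 47 + 3 / 12        # 47 feet 3 inches between knots
nautical_mile_ft = 6080
glass_s, hour_s = 28, 3600

# The two ratios agree to within a fraction of a percent:
print(knot_spacing_ft / nautical_mile_ft)    # ~0.00777
print(glass_s / hour_s)                      # ~0.00778

# So each knot counted while the 28-second glass runs is one nautical mile per hour:
for knots_counted in (3, 5, 8):
    speed_nmi_per_hour = knots_counted * knot_spacing_ft * (hour_s / glass_s) / nautical_mile_ft
    print(f"{knots_counted} knots counted -> {speed_nmi_per_hour:.2f} nautical miles per hour")
```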
Interestingly, the chip log has long been replaced by equipment that is more advanced but we still refer to miles per hour on the water as knots.
For more information, check out The History of
Types of depression.
Depression can come in many forms, each affecting both adults and young people including kids. These include…
1. Major Depression (Major Depressive Disorder):
This type of depression interferes with a person’s daily lifestyle. It is characterized by guilt, sad moods, a loss of interest in doing anything, and feelings of hopelessness and helplessness. A person will stop doing things that they once very much enjoyed, like gardening, playing the guitar, and even reading story-books. Typically, this type shuts down regular daily functions of people such as eating, sleeping, working, running and studying. This is also known as clinical depression.
2. Minor Depression:
Usually this lasts for two weeks or longer, and has some of the same symptoms as Major Depression. There are different types of disorders under Minor Depression:
Psychotic depression:
This is a mix of major depression and psychosis. Psychosis is a mental condition in which a person loses contact with reality, and sees, hears and feels upsetting things that no one else does (hallucinations and delusions).
Postpartum depression:
Some people call it baby-blues even though it is a more serious condition. It happens to new moms more often, when hormonal, physical and mental changes occur along with the reality of their new baby, new life and new responsibility.
Seasonal affective disorder (SAD):
This is a seasonal type (during winter), when people feel overwhelmed with lack of sunshine, heat and outdoor activities. SAD gets better with the spring and summer and can be easily treated with medication and therapy.
3. Dysthymia:
Dysthymia is a less severe type of Major Depression. Typically, it does not completely shut down a person’s functional self, but involves long-lasting, on-and-off symptoms of depression. The feelings and moods associated with depression are usually chronic here, but left untreated, a person can get worse and go on to have Major Depression.
4. Bipolar disorder
Bipolar disorder is another type of mood disorder. It usually manifests in extreme episodes. At one time, a person feels very low in energy, quiet, sad, hopeless and helpless. At another time they display high-energy behavior and moods (irritability, uncontrollable behavior and explosive temper). This extremely hyperactive behaviour is known as mania. Younger people tend to have this kind of mood disorder. The high-energy episode is usually called a manic or hypomanic episode, and it lasts for at least a week. Some people are hospitalized and taken care of, as they tend to cause harm to themselves or to others. Such harmful behavior may include gambling, having unsafe sex, and other reckless acts done without applying common sense.
Bipolar disorder affects approximately 2.3 million American adults, or about 1.2 percent of the U.S. population age 18 and older in a given year. The average age at onset for a first manic episode is the early 20s.
Each year in the UK alone, 22.5 million tonnes of waste is thrown in rubbish bins or waste bins around the home and office. All this waste has to be managed and waste management is of vital importance to ensure we are not all knee-deep in rubbish.
Waste management is becoming increasingly more important as government’s across the globe are trying to reduce the impact of waste on the environment. But what happens to all that rubbish that ends up in our waste bins and wheelie bins?
There are only really three methods for disposing of waste that we throw into our rubbish bins: landfill, recycling and incineration.
And there are advantages and disadvantages in each method of waste management.
Landfills: landfills are either holes in the ground that have formed naturally, such as canyons or ravines; holes left by former industrial processes like mines or quarries; or simply mounds where the rubbish we place in our waste bins piles up.
Land fills are not necessarily detrimental to the environment. While they do create greenhouse gases such as methane, this is actually offering potential benefits as not only can the methane be captured to prevent it damaging the environment but also it can be used as a method of creating energy.
And while it is true that landfills can cause pollutants to enter the water table. Land fills can be covered over once used and the land can be converted into nature parks which can offset any damage the original land fill had on the environment.
Recycling: Recycling is not just done at home by separating our rubbish into a recycling bin. Much of what ends up in our conventional waste bins is now being recycled by waste management teams. And while recycling is obviously good for the environment, there are detrimental effects too. Some of our waste is exported abroad in vast quantities to be recycled, but this can be harmful because of the carbon cost of transportation and the damage to the local environment where the rubbish ends up, as these are often developing nations with less stringent environmental rules.
Incineration: Incineration is perhaps the most environmentally unfriendly method of getting rid of the rubbish in our waste bins. Unfortunately, many nations are forced to incinerate much of their rubbish because of constraints on land space. The only alternative is to ship the rubbish abroad, which is not only financially costly but carries environmental costs as well.
Effective Communication with Children - Techniques and Tips
By Andrew Loh
Teaching good communication skills to your children need not be too difficult. Here are some of the most effective methods and techniques to enhance communication skills in your children.
Two critical questions will decide the outcome of any exercise that deals with the art of better communication. These two questions are:
- What types of activities will encourage and inspire children to talk and listen to others?
- What can parents do to cajole and encourage children to talk to others?
Listen to your Children - Children have their own problems and concerns, and they have a habit of talking endlessly to their parents about them. It is critical to listen to these problems so that children are encouraged to open up. Children will have many things and issues to talk about, and they need someone who will listen to them and help solve their problems. Constant listening will also improve your relationship with your children. Parents can learn many things from their children when they listen to them actively.
Child psychologists believe that listening forms an important part of communication. Many parents simply fail to listen to their children. Instead, they try to control their children by telling them to do whatever the parents have in mind. This eventually results in a perceived dissonance in the relationship.
- Encourage your children to speak and express their opinions. Let them know the importance of listening to others first before talking to them.
- Teach them how active and meaningful communication occurs between two people.
- Teach them the basics of communicating with other people, especially adults.
- Teach them how to be polite while talking to others.
Activities: Teach them how to talk to others over the phone, or ask them to tell some stories in their own words.
Use Encouraging and Cajoling Words - Children always do well with positive attention from their parents. They also want unlimited love and affection from their parents. Parents should use encouraging and cajoling words, and they should avoid using negative words that would make children negative-minded. If children do something good, parents should compliment them immediately with kind words and phrases. Encouraging words and phrases can work wonders, while unnecessary criticism and ridicule cause real harm. Make it a point to praise your children for their efforts, not their intelligence or smartness.
The important part of this exercise is to let your children know that praising others and speaking good words is always better for them. Your children should understand that speaking good words to others helps create harmonious personal relationships.
Activities: Make sure that you use encouraging words while you are talking to your children. The following are some of the most important:
- I am so proud of you
- I like the manner in which you are doing your homework
- That was really good
- You are doing well
- That is a great idea
- What a great idea
- Keep it up
- You are improving
- You were so patient
- You did great
- You are a genius
- I appreciate that you did your homework by yourself
- I like your …
- I simply love you
…and many other words that you find are good for your children.
Importance of Non-Verbal Communication Skills - Non-verbal skills are as important as verbal skills, because people constantly use non-verbal expressions to convey their ideas and feelings. Body posture and body language help people hold silent conversations with others, and these silent conversations are shaped in part by culture. Practice a number of role-playing games that use the basic art of body language.
Activities: Facial expressions and body movements are the building blocks of communication between two people or among a group. Eye movements, twitches of the facial muscles, nods of the head, hand movements and the general positioning of the body convey different meanings to different people. You may wish to learn these body movements yourself before teaching them to your children; buy a book that gives you practical information on body language and body dynamics.
Encouraging Speech - The classroom is not a place for learning how to speak, because it is a controlled environment where the teacher does most of the talking; children are passive listeners in a typical classroom. Home, on the other hand, is where children can learn how to speak. Encourage free speech by letting your children speak to their heart's content, and correct them when they make simple mistakes. Play teacher-student games where your children are the teachers and you their obedient student.
You can also act out everyday scenes, such as visiting a shop or a similar setting.
Learning by Imitation - This is a wonderful way of learning the art of effective communication. Children always imitate their teachers and elders, and you can use this to teach your children the basics of effective communication.
- Initiate a debating competition in your home for the benefit of your children.
- Start the debate and dialogue, allowing children to talk on any topic of their choice.
- Help them to start the conversation and keep it going.
- Correct them if they commit any mistakes or errors.
Many a time, children may feel shy and withdrawn while talking to their teachers and friends; sometimes they may even be struggling with low self-esteem. Parents need to encourage their children to shed these inhibitions and negative feelings, and they should play an active role in helping their children learn and master the art of effective communication.
Between Parent and Child: The Bestselling Classic That Revolutionized Parent-Child Communication
By Dr. Haim G. Ginott and Dr. Alice Ginott
Over the past thirty-five years, Between Parent and Child has helped millions of parents around the world strengthen their relationships with their children. Written by renowned psychologist Dr. Haim Ginott, this revolutionary book offered a straightforward prescription for empathetic yet disciplined child rearing and introduced new communication techniques.
In this revised edition, Dr. Alice Ginott, clinical psychologist and wife of the late Haim Ginott, and family relationship specialist Dr. H. Wallace Goddard usher this bestselling classic into the new century while retaining the book's positive message and Haim Ginott's warm, accessible voice.