Bioengineers at EPFL have found a way to radically increase the efficiency of single-cell RNA-sequencing, a powerful tool that can “read” the genetic profile of an individual cell.
Single-cell RNA sequencing, or “scRNA-seq” for short, is a technique that allows scientists to study the expression of genes in an individual cell within a mixed population – which is virtually how all cells exist in the body’s tissues. Part of a larger family of “single-cell sequencing” techniques, scRNA-seq involves capturing the RNA of a single cell and, after multiple molecular conversion reactions, sequencing it. Since RNA is the intermediate step from gene (DNA) to protein, it provides an overview of which genes in that particular cell are active and which are not.
Because scRNA-seq captures the activity of all genes in the cell’s genome – thousands of genes at once – it has become the gold standard for defining cell states and phenotypes. This kind of data can reveal rare cell types within a cell population, even types never seen before.
Cost and efficiency
But scRNA-seq isn’t just a tool for basic cell biology; it has been widely adopted in medical and pharmacological research as it is capable of identifying which cells are actively dividing in a tissue, or which are reacting to a particular drug or treatment.
“These single-cell approaches have transformed our ability to resolve cellular properties across systems,” says Professor Bart Deplancke at EPFL’s School of Life Sciences. “The problem is that they are currently tailored toward large cell inputs.”
This isn’t a trivial problem, as scRNA-seq methods require over a thousand cells for a useful measurement. Dr Johannes Bues, a researcher in Deplancke’s group, adds: “This renders them inefficient and costly when processing small, individual samples such as small tissues or patient biopsies, which tends to be resolved by loading bulk samples, yielding confounded mosaic cell population read-outs.”
The DisCo solution
Bues, together with Marjan Biočanin and Joern Pezoldt, also in Deplancke’s group, has now developed a new method that allows scRNA-seq to efficiently process samples with far fewer cells. Published in Nature Methods, the method is dubbed “DisCo” for “deterministic, mRNA-capture bead and cell co-encapsulation dropleting system”.
Unlike usual single-cell methods that rely on passive cell capture, DisCo uses machine vision to actively detect cells and capture them, together with beads, in oil droplets. This approach allows for continuous operation, and also makes scaling and serial processing of cell samples highly cost efficient.
As shown in the study, DisCo features precise particle and cell positioning, and controls droplet sorting through combined machine-vision and multilayer microfluidics. All this allows for continuous processing of low-input single cell suspensions at high capture efficiency (over 70%) at speeds that can reach 350 cells per hour.
Overview and critical feature assessment of the DisCo system
a, Schematic diagram of the DisCo microfluidic device, which contains three inlet channels for cells, beads and oil (shown twice for illustration purposes); two outlets for waste and sample liquids, and several Quake-style microvalves (green boxes): 1, cell valve; 2, bead; 3, dropleting; 4, oil; 5, waste; 6, sample. Particles are detected by a camera and are placed at the Stop point. b, DisCo co-encapsulation process on the DisCo device (red, closed; green, open; light brown, dropleting pressure (partially closed)). c, The co-encapsulation process of two beads as observed on-chip. Dyed liquids were used to examine the liquid interface of the carrier liquids. Channel sections with white squares are 100 μm wide. d, The droplet capture process as observed on-chip. Valves are highlighted according to their actuation state (red, closed; green, open). e, Image of DisCo droplet contents. Cells (blue circles) and beads (red circles) were co-encapsulated and the captured droplets were imaged. Mean bead-size is approximately 30 μm. f, Droplet occupancy of DisCo-processed cells and beads (total encapsulations, n = 1,203). Bars represent the mean, and error bars represent ±s.d. g, Cell capture efficiency and speed for varying cell concentrations (2–20 cells per μl, total encapsulations, n = 1,203). h, DisCo scRNA-seq species separation experiment. HEK 293T and murine IBA cells were processed with the DisCo workflow for scRNA-seq, the barcodes merged and the species separation visualized as a Barnyard plot. i, Comparison of detected transcripts (UMIs) per cell of conventional Drop-seq experiments. UMIs per cell from HEK 293T data for conventional Drop-seq experiments are compared with the HEK 293T DisCo data. Drop-seq datasets were down-sampled to a similar sequencing depth. Box plot elements showing UMI counts per cell represent the following values: center line, median; box limits, upper and lower quartiles; whiskers, 1.5-fold the interquartile range; points, UMIs per cell. 
j, Total cell processing efficiency of DisCo at low cell inputs. Input cells (HEK 293T) ranging from 74 to 170 were quantified by impedance measurement. Subsequently, all cells were processed with DisCo, sequenced and quality filtered (>500 UMIs). The red line represents 100% efficiency, and samples were colored according to the recovery efficiency after sequencing.
To further showcase DisCo’s unique capabilities, the researchers tested it on the small chemosensory organs of the Drosophila fruit fly, as well as on individual intestinal crypts and organoids. The latter are tiny tissues grown in culture dishes closely resembling actual organs – a field that EPFL has been spearheading for years.
The researchers used DisCo to analyze individual intestinal organoids at different developmental stages. The approach painted a fascinating picture of heterogeneity in the organoids, detecting various distinct organoid subtypes, some of which had never been identified before.
“Our work demonstrates the unique ability of DisCo to provide high-resolution snapshots of cellular heterogeneity in small, individual tissues,” says Deplancke.
Availability – The source code for the machine-vision software is available on GitHub (https://github.com/DeplanckeLab/DisCo_source).
When you get into the nitty-gritty of climate change, the word albedo often finds its way into the discussion — usually accompanied by words such as ‘reflectance’ or ‘ice’ — and there’s a good reason for that.
In essence, the albedo effect is very straightforward. Albedo is a measure of how well a surface reflects light, a concept that people usually relate to colors. Lighter colors reflect light more effectively than darker ones, which is why dark clothes get hot in the sun while white clothes don’t.
The details, however, can get much more complex.
Albedo is commonly studied as a property in astronomy. Researchers look at different celestial bodies (planets, asteroids, comets) and infer certain things about their properties based on their albedo. But albedo is also important closer to Earth.
Albedo ranges from 0 (absorbs all radiation) to 1 (reflects all radiation). The higher the albedo, the more it reflects. This is sometimes represented in terms of percentages, from 0% to 100%.
The normal albedo of snow, for example, is nearly 1.0, whereas that of charcoal is about 0.04. A typical ocean albedo is approximately 0.06, while bare sea ice varies from approximately 0.5 to 0.7. In other words, when snow melts (and the underlying surface is exposed), more solar energy will be absorbed, capturing more heat.
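These albedo figures translate directly into how much solar energy a surface soaks up: the absorbed fraction is simply one minus the albedo. Here is a minimal sketch in Python, using approximate values like those quoted above:

```python
# Fraction of incident solar radiation a surface absorbs is (1 - albedo).
def absorbed_fraction(albedo: float) -> float:
    """Return the fraction of incoming sunlight a surface absorbs."""
    if not 0.0 <= albedo <= 1.0:
        raise ValueError("albedo must be between 0 and 1")
    return 1.0 - albedo

# Approximate albedo values, in the same range as the figures quoted above.
surfaces = {"snow": 0.95, "bare sea ice": 0.6, "open ocean": 0.06, "charcoal": 0.04}

for name, albedo in surfaces.items():
    print(f"{name}: absorbs {absorbed_fraction(albedo):.0%} of incoming sunlight")
```

The jump from sea ice (absorbing roughly 40%) to open ocean (absorbing roughly 94%) is exactly why melting snow and ice matters so much.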
The ice-albedo feedback is the relationship between the amount of solar radiation reflected and the climate response. The areas that contribute the most to reflecting sunlight are closest to the poles. Two main places have the largest reflective surfaces: Greenland, with almost 2.17 million km², and Antarctica, with 1.2 million km².
Now imagine this scenario:
Human activity produces greenhouse gases.
This increases atmospheric temperatures, which leads to ice melting.
Less ice means less reflective area.
Less reflection means more absorbed energy.
More absorbed energy means more heat.
More heat means less ice. It’s a circular effect that builds on itself.
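The chain of steps above can be sketched as a toy loop. This is purely illustrative; all the coefficients are invented and it is in no way a climate model, but it shows how each pass through the cycle reinforces the next:

```python
# Toy illustration of the ice-albedo positive feedback described above.
# All coefficients are made-up values chosen only to show the shape of the loop.

ice_area = 1.0        # normalized reflective ice area
temperature = 0.0     # temperature anomaly, arbitrary units

for year in range(5):
    reflection = 0.5 * ice_area        # more ice -> more reflected energy
    absorbed = 1.0 - reflection        # less reflection -> more absorbed energy
    temperature += 0.1 * absorbed      # more absorbed energy -> more heat
    ice_area = max(0.0, ice_area - 0.2 * temperature)  # more heat -> less ice
    print(f"year {year}: temperature={temperature:.3f}, ice_area={ice_area:.3f}")
```

Each iteration loses a little more ice than the one before it, which is the self-reinforcing "circular effect" the list describes.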
The opposite effect can happen as well: if the climate gets colder, ice can be sustained for a longer period; a bigger icy area means more radiation being reflected, and the temperature continues dropping. Both cases are examples of positive feedback: one response leads to another and another, and in the end the chain comes back to the original cause, amplifying it.
Now that we know the general effect, let’s look at more subtle aspects.
Think about ice that is floating. With the positive feedback effect kicking in, ice doesn’t even need to melt to affect the albedo; it just needs to break up. By dividing into smaller blocks, it exposes the darker ocean to the sun. Seawater has a low albedo, so it warms up faster and helps melt and break up even more ice.
Things can start off slowly but accelerate quickly as the feedback loop builds on itself, like a snowball effect. There’s also an inertia to the whole process, and it often builds up more inertia over time (meaning it’s harder to reverse the more time passes).
It’s not easy to understand the processes that govern albedo, however, due to the sheer complexity of the phenomenon and all the factors involved. Just look at a simplified (and very famous) diagram that pops up in meteorology lectures and gives students headaches.
It may seem complex, but we can break it down easily.
It’s basically a representation of the energy coming into the Earth and being emitted by the Earth. The left side of the diagram is the energy radiation coming from the sun. Out of this energy (342 W/m² that comes from the sun in this example), a part is reflected by clouds and atmospheric gases, a part is reflected by the surface, and the rest is absorbed.
Meanwhile, on the right, we see its counterpart — the radiation energy emitted by Earth. The more greenhouse gases in the atmosphere, the more of this energy is absorbed and the less is emitted.
This is the so-called energy budget of the planet. If it is in balance, the temperature remains constant. But if an imbalance is created (say, if we emit too many heat-trapping greenhouse gases), the temperatures start to change.
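As a quick sketch of the budget arithmetic, using the 342 W/m² figure from the diagram (the ~31% planetary albedo is an assumed, approximate value, not taken from the diagram itself):

```python
# Simplified planetary energy-budget check using the incoming flux quoted above.
incoming = 342.0          # W/m^2, solar radiation arriving at the top of the atmosphere
planetary_albedo = 0.31   # assumed: ~31% reflected by clouds, gases, and the surface

reflected = incoming * planetary_albedo
absorbed = incoming - reflected
outgoing = absorbed       # in balance, Earth emits exactly what it absorbs

imbalance = absorbed - outgoing
print(f"reflected: {reflected:.1f} W/m^2, absorbed: {absorbed:.1f} W/m^2, "
      f"imbalance: {imbalance:.1f} W/m^2")
# If greenhouse gases trap part of the outgoing radiation, `outgoing` drops below
# `absorbed`, the imbalance turns positive, and temperatures start to rise.
```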
This is what we’re seeing now: human activities are creating imbalance by emitting greenhouse gases. In addition to the direct heating effect, this also creates a feedback loop with things like albedo.
Imbalances can happen naturally and have happened before in the Earth’s history, but this time, we’re pretty sure it’s us — all the evidence points to it.
Let’s talk about clouds
Clouds are important for the albedo as well, but it takes the right cloud to reflect a lot of energy.
If a cloud is fluffy and close to the surface, it has a strong reflectance and blocks much of the incoming radiation, which cools the area below. A thin, high-altitude cloud won’t reflect radiation as well, but it traps radiation emitted from the Earth toward the sky, which in the end makes things warmer. The albedo of low, thick clouds such as stratocumulus is about 90%. The albedo of high, thin clouds such as cirrus may be as low as 10%.
On average, clouds contribute to reflecting solar energy, but the mechanisms aren’t always straightforward. More research is needed to thoroughly understand the effects that clouds have on planetary albedo.
There are no borders
Albedo-related feedbacks are well-established mechanisms in the climate system. The water-vapor feedback is undoubtedly a key process in global warming. While we may not understand all the details and subtleties of this mechanism, we know it exists and we know it’s affecting the planet’s temperatures.
With more carbon dioxide (CO2) in the atmosphere, temperatures rise, and warmer air can hold more water in vapor form. This rise in water-vapor concentration further increases the air temperature, because water vapor is itself a greenhouse gas. The warming also melts ice and snow, revealing darker surfaces that absorb more heat.
It’s crucial to understand that this isn’t a national or local phenomenon. The atmosphere is a continuous medium, a fluid made of different gases. The atmosphere knows no human borders. Our climate system is not made of separate feedbacks; they interact with each other, and that is what makes this a global climate crisis.
CO2 levels are now at 414 ppm*, a level that is unprecedented in the past 600,000 years, and possibly in the past 20 million years. Without rapid and decisive action to reduce our emissions, we are headed towards levels that will cause irreversible, catastrophic changes to the planet’s climate. This will affect everyone on the globe without exception. The laws of physics will not feel sorry for us.
*This article is not updated regularly to reflect changes in the Earth’s CO2 concentration. You can check out this NASA page for regular updates.
Although silicon is mainly associated with microchip devices and advances in computing, the alloy that silicon forms with germanium can be used as a thermoelectric material, that is, one that, in the presence of a temperature gradient, is able to generate an electrical voltage, and vice versa. This thermoelectric effect has long been known. Nevertheless, it has not been widely used because of its modest efficiency. In recent years, interest in thermoelectricity has revived due to the use of thermoelectric devices for micro-energy harvesting and for large-scale conversion of residual heat into electricity. This increase in research on thermoelectrics is mostly due to the impact nanostructuration has on improving the efficiency of these materials, which has increased by almost a factor of three over the last 20 years. The purpose of this chapter is to highlight the ways in which the efficiency of silicon-germanium has been improved by nanostructuration.
Considering the decreasing supply of fossil fuels and the increasing energy demand worldwide, there is a pressing need for improved direct conversion of thermal energy (wasted heat) into electrical energy. The wasted heat comes from energy transportation, vehicles, electricity-generating sources, industry, etc., which hampers the actual efficiency of the initial resources. For instance, only around 30% of the energy obtained from the fuel of a car is actually used in its movement. The other 70% is lost in the form of heat, friction, and cooling the car. Furthermore, it is completely reasonable to look for alternative energy technologies to reduce our dependence on fossil fuels and greenhouse gas effects. This necessity has fostered multiple lines of research, including the conversion of thermal energy through thermoelectricity. As an example, the most recent International Energy Outlook 2016 (IEO2016), prepared by the US Energy Information Administration, shows energy production predictions for the year 2040, based on previously recorded data (Figure 1a). It shows that total world consumption of energy will increase by 48% from 2012 to 2040. Renewable energies are the fastest-growing energy sources over the predicted period, with a foreseen increase in their consumption of around 2.6% per year between 2012 and 2040. In Figure 1a, CPP refers to the Clean Power Plan, a US regulation that aims to reduce carbon dioxide emissions from electric power generation by 32% within 25 years, relative to 2005 levels in the USA.
Focusing on the future of the different sources of energy (Figure 1b), world net electricity generation is envisioned to increase by 69% by 2040, going from the 21.6 trillion kilowatt-hours (10¹² kWh) registered in 2012 to 25.8 trillion kWh predicted for 2020 and to 36.5 trillion kWh in 2040. It is worth noting that, even with initiatives such as the CPP or the development predicted for renewable energies, fossil fuels will still account for 78% of the energy used in 2040.
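Those growth figures can be sanity-checked with quick arithmetic:

```python
# Quick check of the electricity-generation growth figures quoted above.
gen_2012 = 21.6   # trillion kWh, registered in 2012
gen_2040 = 36.5   # trillion kWh, predicted for 2040

growth = (gen_2040 - gen_2012) / gen_2012
print(f"growth 2012 -> 2040: {growth:.0%}")   # ~69%, matching the text
```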
For these reasons, in late 2015, representatives from 185 countries and the European Union (EU) met in Paris to reach a commitment to addressing climate change, called Paris-COP21. This worldwide engagement is expected to drive innovation in renewable energies, battery storage, energy efficiency, and energy recovery. One of the main conclusions of the conference is that climate change is often discussed as a single problem, but solving it will require a wide variety of solutions. The EU budget for low-carbon-related research under Horizon 2020 has been effectively doubled for the period 2014–2020, and the EU has promised to invest at least 35% of Horizon 2020 resources into climate-related activities. In the United States, hundreds of major companies, including energy-related companies such as ExxonMobil, Shell, DuPont, Rio Tinto, Berkshire Hathaway Energy, Calpine, and Pacific Gas and Electric Company, have supported the Paris-COP21 agreement. In the coming decades, there will be a need for more energy-efficient technologies that are easily compatible with non-renewable energies (which will not disappear in the near future, as can be seen in Figure 1b). Certainly, thermoelectric materials, and especially thin films, are interesting players in this scenario. Their ability to convert waste heat into electricity regardless of the source of heat generation, their stability over time, and their ability to generate electricity locally without the need for transportation are some of their many advantages.
Silicon-Germanium (SiGe) Nanostructures for Thermoelectric Devices: Recent Advances and New Approaches to High Thermoelectric Efficiency | InTechOpen, Published on: 2017-05-31. Authors: Jaime Andrés Pérez-Taborda, Olga Caballero-Calero and Marisol Martín-González
For Educational Purposes Only
Do you have problems sleeping or wake up in the middle of the night and can’t go back to sleep?
Well, it’s important to pause and recognize that sleep is a necessity for every body system to run properly. An insufficient amount of sleep will affect our mental and physical well-being the following day. It also hurts our immune system and metabolism, and adds risk for health problems. Deep sleep is the period when the body experiences renewal and cellular repair, and it may play a major role in the manufacturing of ATP energy at the cellular level.
The body has a circadian rhythm system, which is an internal clock. There is a suprachiasmatic nucleus (SCN) in the brain that runs the show. It's made up of approximately 20,000 neurons. The whole cycle begins when light enters the eyes. It stimulates the nerves of the retina and then moves through the optic nerve to the brain’s hypothalamus where the SCN is located.
It has been estimated that around half of all American adults state that they feel sleepy during the day, approximately 3 to 7 days a week. Also, between 10% to 30% have problems sleeping regularly.
In general, adults between the ages of 18 and 64 years old require 7 to 9 hours of sleep a night. Those over 65 years old need 7 to 8 hours. However, a report in the United States shows that 35.2% of all adults sleep less than 7 hours a night.
Individuals who work in factories or as plant operators running different types of machinery and equipment have more difficulty getting enough sleep. It has been shown that 44% of them receive less than 7 hours of sleep per night.
There are some tips below to help individuals sleep better.
- One’s diet should consist of nutrient-dense foods including fruits and vegetables.
- Research shows that consuming foods containing the amino acid tryptophan helps individuals sleep better and longer at night. One study showed that cereal containing tryptophan improved sleep and reduced anxiety and depression. It is believed that tryptophan helps increase melatonin and serotonin levels. Foods with tryptophan include milk, tuna, turkey, and chicken.
- Try to eliminate or reduce stimulating ingredients such as caffeine, especially in the afternoon.
- Add exercise to your daily routine to release tension.
- Relax before bedtime.
- Evaluate your mattress and pillow to determine if they are comfortable for sleep.
- Try to reduce screen time before bedtime.
Melatonin is a hormone that acts on melatonin receptors in the suprachiasmatic nucleus (SCN) to help one go to sleep. Melatonin is produced by the pineal gland, which has been called the “Seat of the Soul”. The pineal gland receives information about the light-dark cycle from the outside and signals when to secrete melatonin. It is important to maintain balanced levels of melatonin in the body, because melatonin helps to balance circadian rhythms related to the sleep/wake cycle. It also helps regulate different hormones, including female reproductive hormones, and it’s involved in menstrual cycles.
Chamomile is a soothing botanical that helps one feel sleepy because it contains a bioactive called apigenin, which binds to GABA receptors in the brain and helps promote sleepiness. It assists in providing a sedative feeling. Chamomile has been shown to improve the quality of sleep.
Passionflower is a botanical that grows in South America and also in the southern part of the United States. The name came from the flower’s resemblance to symbols of the Passion: its three stigmas recall the three nails of the cross, and its five stamens the five wounds of Christ. It provides a relaxing effect, and some research investigators suggest that it affects gamma-aminobutyric acid (GABA) levels in the brain, influencing mood.
Lemon balm, which is Melissa officinalis L. leaf extract is a member of the mint family. It has been used traditionally for stress and anxiety as far back as the Middle Ages. It has been studied for its benefits on relaxation and support during burdensome times. Research shows that it may be combined with other herbs to provide a calming effect and help one to sleep.
Magnesium is a mineral that has been suggested to relax the muscles. Some data suggests some adults may have low levels. There is research that has shown that magnesium improved subjective measurements of insomnia. These measurements included the efficiency of sleep, sleep time, and sleep onset latency, which is the amount of time that it takes one to fall asleep after going to bed. Also, the study showed it improved early morning awakening.
Vitamin B6 (pyridoxine) is a water-soluble vitamin essential for different processes in the human body. It has been proposed to help one obtain restful sleep, since it is a helper in producing melatonin in the body. One study evaluating vitamin B6 found that it supports subjects’ memory and recall of dreams.
In conclusion, if you have problems sleeping, it is important to address them, since sleep is critically important to health and for every body system to run properly. Without enough sleep, one will have problems functioning at one’s mental and physical best.
It also affects the immune system and metabolism, and increases the risk of health problems. Statistics show that many adults do not sleep the recommended 7 to 9 hours a night. There are different suggestions to help one sleep. Dietary aids that may help include melatonin and botanicals such as chamomile, passionflower, and lemon balm. The mineral magnesium and vitamin B6 may also be helpful.
If individuals take a supplement that promotes sleep, they should not attempt to drive or use heavy machinery after taking them.
If you have a health condition and or take medication, it’s always best to check with your healthcare provider before taking supplements.
Abbasi B, Kimiagar M, Sadeghniiat K, et al. The effect of magnesium supplementation on primary insomnia in elderly: A double-blind placebo-controlled clinical trial. J Res Med Sci. 2012;17(12):1161-1169.
Aspy DJ, Madden NA, Delfabbro P. Effects of Vitamin B6 (Pyridoxine) and a B Complex Preparation on Dreaming and Sleep. Percept Mot Skills. 2018;125(3):451-462.
Aulinas A. Physiology of the Pineal Gland and Melatonin. [Updated 2019 Dec 10].
Bravo R, Matito S, Cubero J, et al. Tryptophan-enriched cereal intake improves nocturnal sleep, melatonin, serotonin, and total antioxidant capacity levels and mood in elderly humans. Age (Dordr). 2013;35(4):1277-1285.
Cases J, Ibarra A, Feuillère N, et al. Pilot trial of Melissa officinalis L. leaf extract in the treatment of volunteers suffering from mild-to-moderate anxiety disorders and sleep disturbances. Med J Nutrition Metab. 2011;4(3):211-218.
County Health Rankings. (n.d.). South Dakota: Insufficient Sleep. Retrieved July 13, 2022; from https://www.countyhealthrankings.org/app/south-dakota/2020/measure/factors/143/data
National Center for Chronic Disease Prevention and Health Promotion, Division of Population Health. (2017, May 2). CDC - Data and Statistics - Sleep and Sleep Disorders. Retrieved July 13, 2022; from https://www.cdc.gov/sleep/data_statistics.html
National Sleep Foundation. (2020, March 7). The National Sleep Foundation’s 2020 Sleep in America® Poll Shows Alarming Levels of Sleepiness and Low Levels of Action. Retrieved July 13, 2022.
Sleep Foundation. https://www.sleepfoundation.org/how-sleep-works/sleep-facts-statistics. Retrieved July 13, 2022.
Facts you should know about breasts
- The breasts are medically known as the mammary glands.
- The mammary glands are made up of lobules, milk-producing glandular structures, and a system of ducts that transport milk to the nipple.
- Lymphatic vessels in the breast drain excess fluid.
- Breast growth begins at puberty in humans, in contrast to other types of primates in which breasts enlarge only during lactation.
- Breast tissue develops in the fetus along the so-called "milk lines," extending from the armpit to the groin.
What are the breasts (mammary glands)?
The breasts, located on the front of the chest, are medically known as the mammary glands. The term "breast" is sometimes used to refer to the area at the front of the chest.
Families With Breast Cancer
Ms. G. is a 40-year-old woman with two small children. Like most women, she is concerned about her chances of developing breast cancer. She asks her doctor about her risks. Although breast cancer is a worry for most women, Ms. G. is especially worried because of a family history of breast cancer: her mother and sister had breast cancers that were diagnosed at young ages.
What are the anatomical features of the breast?
The mammary gland is made up of lobules — glandular structures that produce milk in females when stimulated to do so. The lobules drain into a system of ducts, connecting channels that transport the milk to the nipple. Between the glandular tissue and ducts, the breast contains fat tissue and connective tissue.
Both males and females have breasts. The structure of the male breast is nearly identical to that of the female breast, except that the male breast tissue lacks the specialized lobules, as there is no physiologic need for milk production by the male breast. Abnormal enlargement of the male breasts is medically known as gynecomastia.
The breast does not contain muscles. Breast tissue is located on top of the muscles of the chest wall. Blood vessels and lymphatic vessels (a system of vessels that drains fluid) are located throughout the breast. The lymphatic vessels in the breast drain to the lymph nodes in the underarm area (axilla) and behind the breast bone (sternum).
In females, milk exits the breast at the nipple, which is surrounded by a darkened area of skin called the areola. The areola contains small, modified sweat glands known as Montgomery’s tubercles. These glands secrete fluid that serves to lubricate the nipple during breastfeeding.
What are the most common medical conditions affecting the breasts?
Breast health is a source of concern for most women. Although breast cancer is a fairly common malignancy, affecting one out of every eight women in the U.S. at some point in life, benign (non-cancerous) conditions of the breast are much more common. In fact, most masses and lumps in the breasts are not cancer. Breast cancer occurs in males as well, but it accounts for a small percentage of all breast cancers.

Among the benign breast conditions, cysts and fibrocystic changes are common. One type of benign tumor in particular, known as a fibroadenoma, is common in young women. Infections of the breast tissue can also occur, particularly during breastfeeding. Mastitis is the medical term for inflammation of the breast.
What happens to the breasts in pregnancy?
During pregnancy, the breasts grow further due to stimulation by estrogens (female hormones). The growth during pregnancy is more uniform than that observed at puberty. The amount of tissue capable of producing milk is approximately the same in all women, so women with smaller breasts produce the same amount of milk as women with larger breasts. During pregnancy, the areola becomes darker and enlarges in size.
How does breast tissue develop?
Breast tissue begins to form in the fourth week of fetal life. In the fetus, breast tissue develops along two "milk lines" that start at the armpit and extend to the groin. Uncommonly, an extra (accessory) breast can develop along this line. On the skin surface, an extra nipple (supernumerary nipple) may develop along this line.
How are human breasts different from other species?
In other primates (such as apes), the breasts develop only when they are producing milk. After the young have been weaned, the breasts flatten again. In humans, the breasts enlarge at puberty and stay enlarged throughout a woman's life.
How did the biggest galaxies form? Based on the ages of stars inhabiting them, the largest elliptical galaxies — those kind of boring egg-shaped clouds of stars with no pretty spiral arms — formed fairly early in the history of the Universe. While smaller elliptical galaxies likely are the modern version of submillimeter bright galaxies (SBGs), star-forming structures visible from the early cosmos, astronomers have failed to identify the progenitors of the largest galaxies. However, a new paper might have the answer: the authors caught a pair of early galaxies right before they collided, after which they likely merged into one.
Where one galaxy is insufficient, two may do instead. A new set of observations caught two bright elliptical galaxies right before the act of merging into one that would have a combined mass large enough to make the equivalent of 400 billion Suns. Hai Fu and colleagues determined that these galaxies collided more than 10 billion years ago and that the merger was driving extremely rapid star formation, at least ten times the rate seen in ordinary galaxies. Based on these observations, the researchers concluded that such collisions could be responsible for the birth of the largest galaxies, allowing for most of them to finish forming by 9.5 billion years ago.
When epidemiologists are working to track a disease, the three elements that they focus on are the agent, the host, and the environment. Following are explanations of each element. The agent is what causes the disease. The agent is often a virus or bacteria, but the disease can also be caused by other agents. In the scenario described in the Introduction, the agent is the E. coli bacteria. The host refers to those who are contracting the disease. The host doesn't necessarily become sick; hosts can serve as carriers of the disease who do not show outward signs or symptoms. Finally, the environment refers to the external factors that support or cause the disease to spread. In the E. coli scenario, the lettuce became infected either at the farm or during transport; the lettuce was sold to the distributor and then to Sam's Sandwich trucks; and people ate the lettuce on these sandwiches. Each of these steps is important to tracking the disease, and epidemiologists work diligently to trace and report each of the steps to prevent further transmission of the disease.

The following is an example of an epidemiologic triad.

Agent: Salmonella bacteria. Keep in mind that agents can be biologic (e.g., bacteria or viruses), chemical (e.g., poisons or alcohol), physical (e.g., trauma or radiation), or nutritional (e.g., a lack or excess of essential nutrients).

Host factors: Individuals who are particularly vulnerable (e.g., the very young, the very old, and immunocompromised individuals). Note that, in general, host characteristics can include age, sex, race, religion, customs, occupation, genetic factors, other health factors, and immunologic status.

Environmental factors: Contaminated kitchen surfaces or utensils, undercooking of contaminated food items, or contaminated chicken. Environmental factors can include temperature, crowding, noise, pollution, food, and radiation.

Changes in one factor in the epidemiological triad can influence the occurrence of disease by increasing or decreasing a person's risk for disease.
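For readers who think in code, the triad described above can be sketched as a simple record. This is a hypothetical structure for illustration only (not part of any epidemiology library), populated with the E. coli scenario from the text:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the epidemiologic triad: agent, host, environment.
@dataclass
class EpidemiologicTriad:
    agent: str                                        # what causes the disease
    host_factors: list = field(default_factory=list)  # who contracts or carries it
    environment: list = field(default_factory=list)   # external factors aiding spread

ecoli_outbreak = EpidemiologicTriad(
    agent="E. coli bacteria",
    host_factors=["people who ate the contaminated sandwiches"],
    environment=[
        "lettuce contaminated at the farm or during transport",
        "distribution to Sam's Sandwich trucks",
    ],
)
print(ecoli_outbreak.agent)
```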
Consider how variations in each factor shown in the example can influence the manifestation of disease.Refer to pages 435-440 in your Friisand Sellers(2021) textbook for further explanation. In this Discussion you will apply the epidemiologic triad to a disease of your choiceto gain a better understanding of both the model and the disease. To prepareu202ffor this Discussion: Access and review theEpidemiological Triad Learning Interaction. Select an infectious disease from the list proved inList of Infectious Diseasesdocumentfor Week 1 Discussion. Review the Week 1Ebola Virus Disease in the Light of Epidemiological Triad (Kaur Sachdeva Jha &Sulania 2017)resource. Consider how the epidemiologic triad/triangle can be applied to your selected disease. Consider how variations in each factor shown in the examplefrom the textbookcan influence the manifestation ofthedisease. Refer to pages 435-440 in your textbook for further explanation. By Day 4 Posta comprehensive explanationofthe following: Identify ebola virus disease Followingare explanations ofeach element. Provide an example of an agent that is associated with theinfectiousdiseasethatyou selected. Discussu202fat least threeu202fexamples of environmental factors that contribute to the likelihood of transmission of theagent to an individualfor the disease you have selected. Discussu202fat least threeu202fexamples of host factors that contribute to the likelihood of transmission of theagent to an individualfor the disease you have selected. |
Unit 5—Overview of the Federal Rules of Evidence The law of evidence is a set of rules and principles that govern the admissibility of evidence into various types of judicial and administrative proceedings. Evidence is any matter, verbal, physical or electronic that can be used to support the existence of a factual proposition. One of its main purposes is to protect the jury from being misled. There are two basic categories of evidence, direct and circumstantial. Within these general groups there exist three types of evidence: testimonial, physical, and demonstrative. Any kind of evidence to be considered in a legal context must comply with the admissibility requirements of relevancy and materiality. Direct evidence tends to show the existence of a fact in question without the intervention of proving any other fact: Is the evidence to be believed without inferences or conclusions from it? Direct evidence depends on the credibility of the witness. For example, W testifies that she saw D strangle V with a stocking. W’s testimony is direct evidence on the issue of whether D did in fact strangle V with a stocking, since if that testimony is believed, the issue is resolved. Circumstantial evidence depends on both the credibility of the witness and inferences from the witness. Circumstantial evidence is evidence which, even if believed, does not resolve the matter at issue unless additional reasoning is used to reach the proposition to which the evidence is directed. For example, consider the prosecution of D for strangling V. W, a policeman, testifies that shortly after hearing V’s screams, he saw D running from the scene of the crime, and, after stopping D, found a stocking in D’s pocket. While this testimony is direct evidence on the issues of whether D was at the scene of the crime, was fleeing, and had a stocking in his pocket, it is merely circumstantial evidence as to whether D did the strangling. 
This is so because only by the application of additional reasoning does the evidence lead to the proposition to which it is addressed. The relevance of proffered evidence differs dramatically depending on whether the evidence is direct or circumstantial. When evidence is direct, so long as it is offered to help establish a material issue, it cannot be irrelevant. Circumstantial evidence, even if offered to prove a material fact, will be found to be irrelevant if the evidence has no probative value, i.e., it does not affect the probability of the proposition to which it was directed. Evidence may be testimonial (witness), physical (tangible objects and parts of the body), or demonstrative. Testimonial evidence is premised upon the witness' personal knowledge and relies on the person's five senses. Physical evidence is perceived as indisputable, scientifically sound and, most important, neutral. The value of physical evidence cannot be overstated. It is the silent, definitive witness. Physical evidence offers certainty, and certainty equals proof. In 1953, the criminologist P.L. Kirk wrote that physical evidence cannot be absent because human witnesses are. It cannot perjure itself; only its interpretation can err. Only human failure to find it, study it, and understand it can diminish its value. The means by which physical evidence becomes proof is through forensic science. It often involves submission of some tangible object that was directly involved in the situation or incident (document, passport, weapon, narcotics, drugs, clothing, tax return, blood, hair, etc.). Demonstrative evidence serves as an audio-visual aid and is designed to assist the trier of fact in understanding the witness' testimony. It can include maps, models, x-rays, diagrams, computer graphics, statistics, etc.
Authentication requires the party offering contested evidence to provide a basis for the fact finder to believe that the item is what the proponent claims it to be.
Atoms and Molecules | General Chemistry 1
Elements in Chemistry
Elements: substances that cannot be broken down into simpler substances. They can be divided into two broad classes: metals (good conductors of electricity and heat, malleable, ductile) and nonmetals.
Compounds: substances composed of two or more elements. A compound or molecule consisting of just two atoms is called a diatomic molecule.
Chemical symbols are abbreviations used to designate the elements.
H = Hydrogen, He = Helium, Li = Lithium
7 elements are found as diatomic molecules in nature: H2, N2, O2, F2, Cl2, Br2, I2
States of Matter
The states of matter are solid, liquid and gas:
A solid (s) ⇒ fixed volume + fixed shape.
The particles form an ordered lattice / network of atoms or molecules
A liquid (l) ⇒ definite volume but not a specific shape. It takes the shape of its container.
The particles are held together by forces but are still free to randomly move
A gas (g) ⇒ fills the entire volume of the container (no definite shape).
The particles rapidly move about the full volume of the container
Condensation (gas → liquid), Freezing (liquid → solid)
Melting (solid → liquid), Boiling (liquid → gas),
Sublimation (solid → gas), Deposition (gas → solid)
The Atomic Theory
The atomic theory was formulated by John Dalton. Here are the main points:
- Matter is composed of atoms, small and indivisible particles.
- All atoms of an element are identical and have the same mass.
- The atoms of different elements vary in size, in mass and in chemical behavior.
- Chemical compounds are composed at least of two atoms of different elements. The resulting particle is called a molecule.
- In a chemical reaction, the atoms are rearranged, separated, or recombined to form new compounds but no atoms are created or destroyed.
Atom = smallest constituent unit of matter that constitutes a chemical element.
An atom consists of protons, neutrons, and electrons:
Proton: located in nucleus, charge = +1, mass ≈ 1 amu = 1.661 × 10⁻²⁷ kg
Neutron: located in nucleus, charge = 0, mass ≈ 1 amu
Electron: outside nucleus, charge = -1, mass ≈ 5.5 × 10⁻⁴ amu << 1 amu
⇒ most of the mass of an atom is concentrated in its nucleus.
X = chemical symbol of the element
Z = atomic number = number of protons
Each element is characterized by a unique atomic number.
A = mass number = total number of protons and neutrons
Atoms of the same element always have the same number of protons Z.
All carbon atoms have 6 protons.
However, atoms can have a different number of neutrons.
Isotopes = atoms of the same element containing different numbers of neutrons.
Most elements occur in nature as mixtures of isotopes.
The number of neutrons N = A - Z
¹²C and ¹³C are two isotopes of carbon.
They both have 6 protons but different numbers of neutrons (6 and 7, respectively).
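The relation N = A − Z can be checked with a short sketch; `neutron_count` is an illustrative helper name, not standard terminology.

```python
# Number of neutrons from the mass number (A) and atomic number (Z): N = A - Z.
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

# Carbon isotopes: both have Z = 6, but A = 12 and A = 13.
print(neutron_count(12, 6))  # 6 neutrons in carbon-12
print(neutron_count(13, 6))  # 7 neutrons in carbon-13
```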
Ions = charged particles
⇒ atoms or molecules with a charge due to the loss or gain of one or more electrons
⇒ number of protons (charge +1) ≠ number of electrons (charge -1)
Ions positively charged = cations.
Ions negatively charged = anions.
Na+ = singly charged sodium ion = sodium cation
Cl- = singly charged chloride ion = chloride anion
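The net charge of an ion follows directly from the counts above: each proton contributes +1 and each electron −1. A minimal sketch (the function name is illustrative):

```python
# Net charge of an ion = (number of protons) - (number of electrons).
def ion_charge(protons, electrons):
    return protons - electrons

# Na+: neutral sodium has 11 protons and 11 electrons; the cation has lost one electron.
print(ion_charge(11, 10))  # +1 -> cation
# Cl-: neutral chlorine has 17 protons and 17 electrons; the anion has gained one electron.
print(ion_charge(17, 18))  # -1 -> anion
```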
The number of electrons an atom loses or gains is related to its position in the periodic table
⇒ an atom will gain or lose electrons to form ions with the same number of electrons as the nearest noble gas.
Polyatomic Ions Naming
Ions can be polyatomic.
Names of polyatomic ions (general rules):
- the most common ionic form: ends in -ate
- with 1 oxygen fewer than the most common ionic form: ends in -ite
- with 2 oxygens fewer than the most common ionic form: prefix hypo- + ending -ite
- with 1 oxygen more than the most common ionic form: prefix per- + ending -ate
- with hydrogen attached: prefix hydrogen
ClO3- = chlorate = most common form
ClO2- = chlorite
ClO- = hypochlorite
ClO4- = perchlorate
HPO42- = hydrogen phosphate
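The chlorine oxyanion series above can be encoded as a simple lookup table keyed on formula; this is just the four examples from the text restated, not a general naming algorithm.

```python
# Chlorine oxyanions named relative to the most common form (chlorate, ClO3-).
OXYANIONS = {
    "ClO4-": "perchlorate",   # 1 more O than the most common form: per- + -ate
    "ClO3-": "chlorate",      # most common form: -ate
    "ClO2-": "chlorite",      # 1 fewer O: -ite
    "ClO-":  "hypochlorite",  # 2 fewer O: hypo- + -ite
}

for formula, name in OXYANIONS.items():
    print(formula, "=", name)
```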
Naming binary compounds:
1 metal element + 1 nonmetal element or polyatomic ion:
Name of the compound = name of the metal + name of the nonmetal with the suffix -ide, or the name of the polyatomic ion. If the metal can have more than one charge, the charge is indicated in parentheses with Roman numerals.
CaS = calcium sulfide
CaBr2 = calcium bromide
Fe2(SO4)3 = iron(III) sulfate
2 nonmetal elements:
Prefixes are used to indicate the number of atoms of a given element in a molecule (mono- = 1, di- = 2, tri- = 3, tetra- = 4 …). The prefix mono- is generally omitted for the first element. The final a or o of the prefix is dropped when the element name begins with a vowel (e.g., monoxide rather than monooxide).
CO2 = carbon dioxide
PCl5 = phosphorus pentachloride
Binary acids (one anion and one hydrogen):
prefix hydro- + root of the anion name + suffix -ic + "acid".
HCl = hydrochloric acid
Acids having oxygen in the compound:
polyatomic ions ending in -ate give acids ending in -ic
polyatomic ions ending in -ite give acids ending in -ous
NO3- = nitrate ⇒ HNO3 = nitric acid
NO2- = nitrite ⇒ HNO2 = nitrous acid
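The two oxyacid rules can be sketched as a string transformation. This naive rule handles the nitrate/nitrite examples; note that some roots change spelling in practice (e.g., sulfate gives sulfuric acid, not "sulfic"), which this simple sketch does not capture.

```python
# Oxyacid naming sketch: -ate -> -ic acid, -ite -> -ous acid.
def oxyacid_name(ion_name):
    if ion_name.endswith("ate"):
        return ion_name[:-3] + "ic acid"
    if ion_name.endswith("ite"):
        return ion_name[:-3] + "ous acid"
    raise ValueError("not an oxyanion name: " + ion_name)

print(oxyacid_name("nitrate"))  # nitric acid (HNO3)
print(oxyacid_name("nitrite"))  # nitrous acid (HNO2)
```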
The book “What Is the World Made Of?”, written by Kathleen Weidner Zoehfeld and illustrated by Paul Meisel, explores solids, liquids, and gases. The illustrations and everyday examples give young students a deeper understanding of the distinct phases of matter.
The story opens by describing impossible scenarios, such as “Did you ever drink a glass of blocks? Have you ever played with a lemonade doll, or put on milk for socks?” to introduce the importance of matter. A foundation is thus set, showing that all things are made of matter. As the story progresses, more and more examples are given for the three phases of matter: solids, liquids, and gases. Everyday connections are given to show the properties of matter. Solids are explained using clay to show that they hold their shape. Milk is used to explain how liquids take the shape of their container and have a definite volume. Finally, air is used to portray the qualities of the gas state of matter. Connections to everyday life are also provided to help kids understand the phase changes of water, such as the idea of making ice cubes and the melting of ice cubes in warmer temperatures. The book ends with an easy and informative summation of the lesson, offering hypothetical, funny questions that show the importance of the distinct phases and properties of matter in the day-to-day lives of children:
“Can you imagine a world where your toys melt when it gets too hot? Where the walls of your house turn into hazy gas, and animals just walk in and out as they please? A place where, on cold days, you have to swim through the air, and where everything you’d like to drink is hard as a block? What a crazy world it would be!”
To find out more about matter, on the inside cover, the book also offers questions and further experiments to illustrate specific concepts about the three states of matter.
This book provides a very thorough introduction to the topic of matter and its different phases. The book could be used for grades 1-3, providing easily understood, everyday examples of matter in different forms and the different phase changes of water (SOL K.5 (a)). The different properties of solids, liquids, and gases, including mass and volume, are also taught throughout the story (SOL 2.3 (a)(b)). The book provides hands-on activities and experiments for kids to more fully understand the topic, as well.
- This website provides an online quiz exploring the different states of matter. The quiz consists of 10 questions with correct answers and detailed explanations offered for each question.
- This lesson consists of an experiment “The Power Of Ice,” focusing on water and its properties in different phases. The lesson provides space for student hypothesis, materials and procedures, as well as experimental conclusions.
- This lesson plan will have students observe different phases of matter, changes in patterns, perform experiments, and explore differences between physical and chemical changes.
Book: What is the World Made Of? (All About Solids, Liquids, and Gases)
Author: Kathleen Weidner Zoehfeld
Illustrator: Paul Meisel
Publisher: Harper Collins Publishers
Publication Date: 1998
Grade Range: 1-3
For years, the brain of a child with autism has been a mystery. Doctors and parents wondered about the cause of autism, and it seemed that they would never get answers. Autism is characterized along a spectrum, with varying expressions of difficulty with social interaction, including difficulty with verbal and nonverbal communication. ASD (Autism Spectrum Disorder, the official title of ‘autism’ since the May 2013 publication of the DSM-5) is also associated with difficulties with motor coordination, attention, and intellectual disabilities, as well as physical health problems like sleep and gastrointestinal problems. Autism usually presents by age three, and the process of diagnosing autism continues to change, according to the Autism Speaks foundation.
Dr. Thomas R. Insel, director of NIMH at the NIH says that “while autism is generally considered a developmental brain disorder, research has not identified a consistent or causative lesion.” The newest reports show that the architecture of the autistic brain is “speckled with patches of abnormal neurons.” In the study published in the New England Journal of Medicine, there is evidence that the brain irregularities of children with autism are due to abnormal prenatal development.
Social interaction and communication are essential characteristics of the human experience. As humans, we desire to create and develop relationships with each other. Autism Spectrum Disorder (ASD) is a neurological developmental condition that impairs this ability to relate. The spectrum refers to the fact that there are multiple conditions characterized by similar features all grouped together under this one disorder. These conditions include “classic” autism, Asperger syndrome, and Pervasive Developmental Disorder Not Otherwise Specified. There are also varying degrees of severity associated with ASD. So, depending on the disorder and degree to which a person suffers from this disorder, there is truly a wide spectrum of possible conditions created by ASD that many people around the world must deal with.
The Mayos, an indigenous people of northwestern Mexico, live in small towns spread over southern Sonora and northern Sinaloa, lands of remarkable biological diversity. Traditional Mayo knowledge is quickly being lost as this culture becomes absorbed into modern Mexico. Moreover, as big agriculture spreads into the region, the natural biodiversity of these lands is also rapidly disappearing. This engaging and accessible ethnobotany, based on hundreds of interviews with the Mayos and illustrated with the authors' strikingly beautiful photographs, helps preserve our knowledge of both an indigenous culture and an endangered environment.
This book contains a comprehensive description of northwest Mexico's tropical deciduous forests and thornscrub on the traditional Mayo lands reaching from the Sea of Cortés to the foothills of the Sierra Madre. The first half of the book is a highly readable account of the climate, geology, and vegetation of the region. The authors also provide a valuable history of the people, their language, culture, festival traditions, and plant use. The second half of the book is an annotated list of plants presenting the authors' detailed findings on plant use in Mayo culture.
1. The People and the Land
2. A Brief Ethnography of the Mayos
3. Historical and Contemporary Mayos
4. Plant and Animal Life
5. Eight Plants That Make Mayos Mayos
6. Plant Uses
7. An Annotated List of Plants
Appendix A. Mayo Region Place Names and Their Meanings
Appendix B. Yoreme Consultants
Appendix C. Gazetteer of the Mayo Region
Appendix D. Mayo Plants Listed by Spanish Name
Appendix E. Mayo Plants Listed by Mayo Name
Appendix F. Glossary of Mayo and Spanish Terms
David Yetman is Associate Research Social Scientist at The Southwest Center at the University of Arizona and author of Where the Desert Meets the Sea: A Trader in the Land of the Seri Indians (1988), Sonora: An Intimate Geography (1996), and Scattered Round Stones: A Mayo Village in Sonora, Mexico (1998). He is host of the PBS series "The Desert Speaks." Thomas R. Van Devender is Senior Research Scientist at the Arizona Sonora Desert Museum. He has published many articles on the ecology and evolution of the Sonoran desert and has done pioneering research to determine ancient climates and vegetation change through studies of packrat middens.
"David Yetman and Tom Van Devender and their Mayo consultants vividly bring to life a great depth of indigenous information in a thoroughly enjoyable and accessible manner. Here is a detailed account of a fast receding way of life from the arid edge of the American tropics presented by the leading researchers in the field. The Mayos are an enduring people and this book does them honor."—Richard Stephen Felger, Executive Director, Drylands Institute, Tucson, AZ
"Yetman and Van Devender, with abundant help from their knowledgeable ethnobotanical teachers, the Mayo Indians of southern Sonora, reveal the botanical secrets of a vanishing habitat. This book is invaluable for ethnobotanists, a treasure chest for tropical aficionados, and a delight to all those with a love of wild plants in wild habitats."—Paul S. Martin, Emeritus Professor of Geosciences, University of Arizona
Z with stroke
Uses in alphabets
It was used in the Jaᶇalif alphabet (as a part of Uniform Turkic Alphabet) for the Tatar language in the first half of the 20th century to represent a voiced postalveolar fricative [ʒ], now written j. It was also used in the 1992 Latin Chechen spelling as voiced postalveolar fricative [ʒ]. It was also used in a 1931 variant of the Karelian alphabet for the Tver dialect. The 1931-1941 Mongolian Latin alphabet used it to represent [d͡ʒ].
Uses as currency symbol
Ƶ is used as a currency symbol in the Dragonball universe for "zeni".
Uses as variant of the letters Z and Ż
In Polish, the character Ƶ is used as an allographic variant of the letter Ż. Germans, Italians, the French, and Spaniards, among others, often use this sign in handwriting for the letter Z. In Greek, it is a handwritten form of the letter Ξ; the horizontal stroke distinguishes it from its Ζ counterpart.
Uses on computers
The Unicode standard specifies two codepoints:
- U+01B5 Ƶ LATIN CAPITAL LETTER Z WITH STROKE
- U+01B6 ƶ LATIN SMALL LETTER Z WITH STROKE
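Both codepoints can be looked up with Python's standard `unicodedata` module; a quick sketch:

```python
import unicodedata

# The two codepoints listed above, resolved to their official Unicode names.
upper = chr(0x01B5)
lower = chr(0x01B6)
print(upper, unicodedata.name(upper))  # Ƶ LATIN CAPITAL LETTER Z WITH STROKE
print(lower, unicodedata.name(lower))  # ƶ LATIN SMALL LETTER Z WITH STROKE
```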
Use in Complex Analysis
The letter z is used in the mathematical field of complex analysis to represent a complex variable. In written work, students of the subject add a stroke to ensure they do not mistake it for the digit 2.
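As an aside on the mathematical usage, Python has a built-in complex type, so a complex variable z and its modulus can be illustrated directly:

```python
# A complex variable z = x + yi; abs(z) gives the modulus |z|.
z = complex(3, 4)           # z = 3 + 4i
print(z.real, z.imag)       # 3.0 4.0
print(abs(z))               # 5.0, since |3 + 4i| = sqrt(9 + 16) = 5
print(z * z.conjugate())    # (25+0j): z times its conjugate equals |z|^2
```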
The levels of carbon dioxide found in the atmosphere in modern times have been found to be nearly ten times higher than any other time since the extinction of the dinosaurs, 65 million years ago. The event that came the closest to today’s CO2 levels, the Paleocene-Eocene Thermal Maximum (PETM), occurred 55.5 million years ago, where a spike in greenhouse gasses caused global temperatures to increase by 5–8 °C over what we’re experiencing today. While the existence and cause of the PETM is well established, the source of the massive amount of CO2 that caused the temperature spike has been a complete mystery to scientists.
Astronomers with the Pale Red Dot Project have announced the discovery of an Earth-like planet orbiting Proxima Centauri, the nearest star to our Sun. The newfound planet, currently called "Proxima b", is about 1.3 times the mass of the Earth and orbits within the star’s Goldilocks zone, meaning it lies at the right distance for the right temperature to host liquid water, one of the requirements for life as we know it.
Scientists have discovered a reliable way to extract renewable energy from ordinary seawater. The research team, from the Ecole Polytechnique Fédérale de Lausanne’s Laboratory of Nanoscale Biology in Switzerland, employed a natural process called osmosis, where a fluid permeates through a membrane from one side to another.
An Asian studies expert and amateur satellite tracker have voiced their concerns over the possibility that China’s Tiangong-1 space station may be de-orbiting without control from the ground. The station, launched in 2011, was a temporary testbed for the technologies for Tiangong-2, a permanent station scheduled to be launched in September of this year. Beijing’s original plan was to de-orbit the spacecraft in 2013, but Tiangong-1, Chinese for "Heavenly Palace 1", remained in orbit after that date, conducting long-term endurance tests of the now unmanned station’s components.
jasper | PLAINS of STONE
The name Jasper, "spotted or speckled stone," is derived via Old French jaspre (variant of Anglo-Norman jaspe) and dates back to ancient times in the oldest known writings. Relics from the earliest civilizations show a reverence for this beautiful and powerful gem in the world of the Phoenicians, Persians, and Hebrews. It was used in organizational and religious activities, with prominent mention in sacred writings over four thousand years old. Found carved as seals for official documents, as amulets, and adorning the handles of weaponry, jasper of any color is considered a stone of protection and shielding. The jasper of the ancients may have been of a more translucent nature, of varying colors. Professor Flinders Petrie has suggested that the odem, the first stone on the High Priest's breastplate, translated “sard,” was a red jasper, whilst tarshish, the tenth stone, may have been a yellow jasper (Hastings's Dict. Bible, 1902).
Egghead's Guide to Vocabulary - eBook
Format: DRM Protected ePub
Publication Date: 2013
Peterson's egghead's Guide to Vocabulary will help students improve the range of their vocabulary, boost their scores on verbal ability tests, and improve their diction on any writing assignment. With the help of Peterson's new character, egghead, students can strengthen their vocabulary with narrative cartoons and graphics. Along the way there are plenty of verbal lessons and exercises, making this the perfect guide for students struggling to develop their vocabulary.
- egghead's tips and advice for improving vocabulary skills
- Hundreds of vocabulary words students can use to help improve their verbal scores on standardized tests
- Dozens of vocabulary-building exercises with plenty of examples of word usage
- Easy-to-read lessons with fun graphics that provide essential information to help those students who learn visually
From the article:
RIVERSIDE, Calif. – A research team, led by UC Riverside’s Ludwig Bartels, is the first to design a molecule that can move in a straight line on a flat surface. It achieves this by closely mimicking human walking. The “nano-walker” offers a new approach for storing large amounts of information on a tiny chip and demonstrates that concepts from the world we live in can be duplicated at the nanometer scale – the scale of atoms and molecules.
“Similar to a human walking, where one foot is kept on the ground while the other moves forward and propels the body, our molecule always has one linker on a flat surface, which prevents the molecule from stumbling to the side or veering off course,” said Bartels, assistant professor of chemistry and a member of UCR’s Center for Nanoscale Science and Engineering. “In tests, DTA took more than 10,000 steps without losing its balance once. Our work proves that molecules can be designed deliberately to perform certain dynamic tasks on surfaces.”
Even though this article stresses that this nano-walker might be applied in storage technology, I think it’s pretty safe to conclude that advances like these will also help out nicely with realizing The Future Of Molecular Computing.
1. What is history? What are its uses?
History is the systematic study of written records of the past, including the prehistory of man. History is a story of recorded events, in chronological order, including the development and behavior of people; a written account of cultural and natural phenomena; and that which belongs to the past and is no longer a present concern. History in the hands of the historian becomes a form of literature, which means history has both an objective and a subjective element. In the hands of the historian, the study and recording of history is an attempt to give meaning to our past, present, and future. In this way, a good historian draws us into a personal relationship with the past, allowing us to grow and learn from it. As a discipline, history is the study of the past. In other words, historians study and interpret the past. In order to do this, they must find evidence about the past, ask questions of that evidence, and come up with explanations that make sense of what the evidence says about the peoples, events, places, and time periods under consideration. Because it is impossible for a single historian to study the history of all peoples, events, places, and time periods, historians develop specialties within the discipline.
History is important because the past holds valuable lessons about how to succeed and how to avoid costly mistakes. History gives us insight into who we are, who we can be, and a sense of our identity. Our view of the past affects how we respond to our present circumstances. If our view of history is wrong, we are likely to make wrong choices today. These wrong choices will lead to further conflicts and a waste of resources that can eventually lead to the fall of an entire civilization.
It is said that experience is the best teacher. Still, our learning would be very narrow if we profited only from our own experiences. Through the study of history, we make other people’s experiences our own. History teaches judgment. It does this both by supplying a knowledgeable background and by training in the technique of criticism and reasoned conclusions.
2. How do we write history?
Historical writing, like history itself, is a vast and varied subject. It has gone on in every society and every age since the dawn of civilization. It begins with the production of records as part of the events themselves. These first-hand documents (the writings and utterances of leaders; the notes of eyewitnesses; the letters, diaries, and recollections of participants) are the primary sources of history. Next come the efforts to compile and systematize the record in chronicles and yearbooks, followed by the books and articles written on the basis of intensive research to find out how and why events happened as they did. Finally, there are works written specifically for reference or instructional purposes (encyclopedia articles, handbooks, and textbooks) whose purpose is to draw from the record in order to present a clear and convenient picture to the learner. Most of the time a historical narrative will be organized chronologically, with the sections determined by the important happenings.
Historical method is the process of critically examining and analyzing the records and survivals of the past. The first step is the selection of a subject for investigation. Next is the collection of probable sources of information on that subject. Third is the examination of those sources for genuineness, and last is the extraction of credible particulars from the sources. Historiography is the writing of history: the imaginative reconstruction of the past from the data derived by that process.
3. How would you compare/ contrast history with other social sciences?
History- based primarily on facts rather than on imagination and feeling, unlike the humanities, literature, arts, and philosophy; it tries to explain by particular description rather than by general analysis and laws; its aim is to depict the significant historical individual or situation in all its living detail; it is defined by its focus on time, but it also embraces all aspects of human activity as they occurred in the past; it is able to serve as the discipline that integrates the specialized work of other fields of social science; it is a study of the facts of man's social existence, which is essentially the common denominator of the social sciences; and it bears close relations to all the particular social sciences, being really no more different from them in concept, method, and material than they are among each other.
Geography and political science: are the fields most intimately related to history to the extent of inevitable interpenetration and overlapping.
Geography- studies the terrestrial setting in which history has occurred, sets the spatial dimension of historical events and endeavors to explain the relation between land forms and resources on the one hand and man's historical accomplishments on the other.
Political science - endeavors to explain analytically and systematically the same vast range of political data and events that constitute a major portion of our historical experience.
Economics, sociology, anthropology and psychology: deal with more specialized approaches or realms of experience.
Economics- deals with relations of production and exchange expressed by money.
Sociology- deals with the web of informal as well as formal relationships among people.
Anthropology- deals with the patterns of behavior and belief that distinguish particular societies.
Psychology- deals with thought, emotion, and behavior from the standpoint of the individual.
The only basic difference between history and the other social sciences is that social sciences take individuals and events, study the qualities they have in common and arrive at general laws about human affairs, whereas history is the study of a unique sequence of individuals, events, situations, ideas and institutions, occurring in the one-dimensional and irreversible stream of time.
4. What can history tell you?
Everything that exists in the present has come out of the past, and no matter how new and unique it seems to be, it carries some of the past with it. Everything has a history. At least part of the answer to any question about the contemporary world can come from studying the circumstances that led up to it. The more we understand about the past influences, the more we will know about the present subject to which they are related.
Indeed, most people are curious. In fact, children are always asking their parents the "why" of things. Because everything has a history, most questions can be answered, at least in part, by historical investigation. Though our questions could go on forever, the answers are written somewhere in the record of the past. The record of the past is not only contained in musty volumes on library shelves; it is all around us in museums, historical preservations, and the antique furnishings and utensils contained in almost every household. Our minds are living museums, because the ideas we hold come down to us by way of a long historical journey. Though we are usually unaware of it, the past is always with us. Because history is literally at our fingertips, we can travel back into it without difficulty.
1. What approach would you use in studying history?
The transformative learning approach is one way to better understand and interpret history. Through transformative learning, students learn in a more experiential way, grounded in their own experiences. Using this approach helps students express their own ideas and offer their own interpretations of what they study. They learn new things in a more understandable way because they relate what they study to their own knowledge. There is also what we call "guided participation," in which the teacher guides students as they study a subject such as history. Unlike the ordinary way of teaching, in which teachers talk and students listen, students may raise questions and answer them accordingly. The pattern of transformative learning helps students gain more knowledge and apply what they have learned.
2. What Skills Does a Student of History Develop?
The Ability to Assess Evidence. The study of history builds experience in dealing with and assessing various kinds of evidence—the sorts of evidence historians use in shaping the most accurate pictures of the past that they can. Learning how to interpret the statements of past political leaders—one kind of evidence—helps form the capacity to distinguish between the objective and the self-serving among statements made by present-day political leaders.
Learning how to combine different kinds of evidence—public statements, private records, numerical data, visual materials—develops the ability to make coherent arguments based on a variety of data. This skill can also be applied to information encountered in everyday life.
The Ability to Assess Conflicting Interpretations. Learning history means gaining some skill in sorting through diverse, often conflicting interpretations. Understanding how societies work—the central goal of historical study—is inherently imprecise, and the same certainly holds true for understanding what is going on in the present day. Learning how to identify and evaluate conflicting interpretations is an essential citizenship skill for which history, as an often-contested laboratory of human experience, provides training. This is one area in which the full benefits of historical study sometimes clash with the narrower uses of the past to construct identity. Experience in examining past situations provides a constructively critical sense that can be applied to partisan claims about the glories of national or group identity. The study of history in no sense undermines loyalty or commitment, but it does teach the need for assessing arguments, and it provides opportunities to engage in debate and achieve perspective.
Experience in Assessing Past Examples of Change. Experience in assessing past examples of change is vital to understanding change in society today—it's an essential skill in what we are regularly told is our "ever-changing world." Analysis of change means developing some capacity for determining the magnitude and significance of change, for some changes are more fundamental than others. Comparing particular changes to relevant examples from the past helps students of history develop this capacity. The ability to identify the continuities that always accompany even the most dramatic changes also comes from studying history, as does the skill to determine probable causes of change. Learning history helps one figure out, for example, if one main factor—such as a technological innovation or some deliberate new policy—accounts for a change or whether, as is more commonly the case, a number of factors combine to generate the actual change that occurs.
1. How can we use history in our lives?
People live in the present. They plan for and worry about the future. History, however, is the study of the past. Perhaps the most significant role of history in our lives is that it can serve as a teacher. History, as we know it, records both successes and failures. Through it, we can learn how to live better and more effectively, based on how past events unfolded. We can apply these lessons from history and use them as a guide in facing future trials in life.
It also helps us to understand people and societies. This, fundamentally, is why we cannot stay away from history: it offers the only extensive evidential base for the contemplation and analysis of how societies function, and people need to have some sense of how societies function simply to run their own lives. History also allows us to understand change and how the society we live in came to be. These two fundamental reasons for studying history underlie more specific and quite diverse uses of history in our own lives. History well told is beautiful. Exploring what historians sometimes call the "pastness of the past"—the ways people in distant ages constructed their lives—involves a sense of beauty and excitement, and ultimately another perspective on human life and society. It also provides a terrain for moral contemplation, which yields moral understanding and identity. Studying the stories of individuals and situations in the past allows a student of history to test his or her own moral sense, to hone it against some of the real complexities individuals have faced in difficult settings.
1. How is history related to other social science?
History is almost always grouped with the social sciences in the usual three- or four-way classifications of academic subjects. History is a study of the facts of man's social existence, which is really the common denominator of the social sciences. It bears close relations to every social science. History is similar to the other social sciences in perception, approach, and material. The conclusions of all these disciplines yield valuable and interesting insights for historians, and those must not be overlooked. Equally, every discipline draws on the data of history and incorporates the perspective of history on the changing circumstances of human behavior. One notable recent influence of the social sciences on history is the expanding interest in quantitative approaches to historical research.
2. What can you infer from the similarities and difference of history from other social sciences?
History is not really different from the other social sciences, because both history and the social sciences deal with the facts of human beings' social existence. The methods of writing history and of the social sciences are similar. In addition, the concepts under study in history parallel the concepts of the social sciences. Each social science contributes to the study and writing of history. Indeed, history plays an important role wherever interdisciplinary social science work has been developed.
But since history is the study of particulars (a unique sequence of individuals, events, situations, ideas, and institutions), it is impossible for the historian to proceed in the same way as the scientist, who tries to generalize from his observations and experiments to arrive at laws of natural phenomena. There are no laws of history in the strict sense, although there are of course many regularities and patterns in human behavior that, once established by the social scientists, must be taken into account by the historian in his investigations.
1. Give your own definition of history.
History is a social science that studies significant people, places, and events in a chronological, systematic, and analytical way. It contains lessons through which we can learn from the mistakes of the past. It raises numerous issues that can be debated without end, guides people whenever they are lost, and points them toward the right path. It also contains the interesting experiences of famous people and is guided by principles.
2. How can you make the study of history more interesting, relevant and fun?
Some students find history a boring subject, with all those dates, places and people to remember. One way of making it more interesting and fun to learn is by connecting past and current events, making it more relevant. Another way is by piecing together various points of view to come up with your own truth, just like the Hardy Boys. Putting on skits, plays or dramas also makes studying more fun and enjoyable.
We should also connect history with our own interests to make it less boring. It is said that what is past is prologue. It is also said that history repeats itself. If that is true, then look at the study of history as a study in self-interest: a way to learn from the mistakes of the past. Many people who want to be successful study the habits of already successful people; you, too, can study the past to improve your own understanding of how we got here, what we faced, where we failed, and how we can improve.
1. Is the study of history still relevant nowadays? Why or why not?
The study of history is very important in our generation, not only because it tells us about important events in the past but also because it teaches us lessons that can help us live life to the fullest. The study of history is essential for every individual and for our society. History helps us understand people and society, because history explains how people and societies behave. It also helps us know how things change and how we came to be as we are. The study of history also provides us with our own identity: because of history, we know where we came from and who we really are. Historical data include evidence about how families, groups, institutions and whole countries were formed, and about how they have evolved while retaining cohesion. Studying history is also essential to being a good citizen. Advocates of citizenship history sometimes hope merely to promote national identity and loyalty through a history spiced with vivid stories and lessons in individual success and morality, but the importance of history for citizenship goes beyond this narrow goal and can even challenge it at some points.
The past causes the present, and so the future; that is why we need to study our history. In studying history we can grasp how things change; only through history can we begin to understand the factors that cause change; and only through history can we know what elements of an institution or a society persist despite change. |
The risk of a Zika virus transmission in Europe for the late spring and summer is low to moderate across the region, the WHO has announced in a new interim risk assessment. This takes into account both the likelihood of a local virus transmission through mosquitoes, and countries' individual capacities to prevent and contain it.
The Zika virus outbreak began in the Americas in 2015. The mosquito Aedes aegypti is the primary Zika vector, but the Aedes albopictus has also been described as a potential vector for spread.
To assess the likelihood of a Zika virus outbreak in the absence of protective measures, WHO scientists have taken into account the presence of these two subtypes of Aedes mosquitoes, the climatic conditions, urbanisation and population density, as well as flight connections to and from Latin America.
Eighteen countries face "moderate likelihood"
In the 53 countries that are part of the WHO European Region, only the island of Madeira (Portugal) and the north-eastern Black Sea region (the coastal regions of Georgia and the Russian Federation) face a high likelihood of local Zika transmission, due to the presence of Aedes aegypti mosquitoes.
Eighteen further countries are defined as having a "moderate likelihood" of transmission owing to the presence of Aedes albopictus mosquitoes, with France leading the table. "France has a moderate risk of Zika virus transmission because of its high population densities, in particular in the regions where Aedes albopictus is found," explained Dr Colleen Acosta, epidemiologist at the WHO's division of Communicable Diseases & Health Security.
Thirty-six countries are classified as having a "low likelihood", because the Aedes mosquito is not found in their territory and they do not have the climatic conditions necessary for an outbreak.
The WHO has also assessed each country's capacity to respond quickly to a potential transmission. This means that even though a country may experience a Zika-virus transmission, its individual national ability to contain it can greatly reduce the risk of people getting sick. The WHO estimates that 79% of countries have put in place good measures to avoid any outbreak.
"Some territories are at high or moderate likelihood of experiencing a transmission of the virus during spring and summer, but the measures they have put in place to detect the virus, train doctors and protect the public decreases their overall risk of an outbreak. This is why we say the overall risk is low or moderate," WHO officials point out.
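The reasoning the WHO officials describe — an overall risk derived from the likelihood of transmission, moderated by a country's response capacity — can be sketched as a toy function. The mapping below is a hypothetical simplification for illustration only, not the WHO's actual assessment methodology:

```python
# Toy model of the risk logic described in the text: overall risk equals the
# transmission likelihood, reduced one step when response capacity is good.
# (Hypothetical simplification -- not the WHO's actual assessment method.)
def overall_risk(likelihood: str, good_capacity: bool) -> str:
    """likelihood is 'low', 'moderate' or 'high'; good_capacity indicates
    whether the country has strong detection and response measures in place."""
    levels = ["low", "moderate", "high"]
    index = levels.index(likelihood)
    if good_capacity and index > 0:
        index -= 1  # strong preparedness lowers the overall risk one step
    return levels[index]

# A territory with a moderate transmission likelihood but good preparedness
# ends up with a low overall risk, matching the reasoning quoted above.
print(overall_risk("moderate", True))
print(overall_risk("high", False))
```

Under this toy mapping, a "low likelihood" country stays low regardless of capacity, which matches the article's point that 36 countries lack the vector and the climate for an outbreak in the first place.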
In June, a meeting involving European countries will take place in Portugal to assess individual countries' needs and strengthen their capacity to respond to the threats posed by the virus. Among their recommendations, the WHO stresses the need to increase vector-control activities to prevent the introduction and spread of mosquitoes and to reduce breeding sites. Protection of pregnant women, including from sexual transmission, should be a priority.
Guidelines on travelling to Latin America are unlikely to change throughout the summer, with the WHO advising pregnant women to avoid the region.
Countries classified as having a "moderate likelihood" of local Zika transmission include: Montenegro, Bosnia and Herzegovina, Albania, Georgia, Slovenia, Romania, Bulgaria, Switzerland, Greece, Turkey, San Marino, Monaco, Spain, Israel, Croatia, Malta, Italy and France. |
The American Revolution arose from growing tensions between Great Britain and the residents of its thirteen North American colonies. The French and Indian War, the North American theatre of a larger imperial conflict between France and Great Britain, left Britain deeply in debt, and the huge cost of that war became a point of contention between Britain and her American colonists. Britain's attempts to tax the colonies to recover those costs, together with its restrictive land policy, were central political and economic grievances: above all, the colonists objected to being taxed without representation. Protests escalated — in one famous episode, a group of colonists dressed as American Indians boarded British ships in protest of the tea duties — and in 1775 open conflict broke out between Britain and the thirteen colonies. The road to revolution did not happen overnight; it took years of accumulating grievances to push the colonists to the point where they were willing to fight for independence. The colonists themselves were divided among Patriots, Loyalists, and neutralists, and the great distance between America and Britain made reconciliation harder still. Renewed friction over British violations of American sovereignty later led to a second conflict between the United States and Great Britain, the War of 1812. |
Learn all about average product in just a few minutes! Professor Jadrian Wooten of Penn State University explains average product and how to calculate it.
Average product is the average output per input, often reflecting the amount of product that can be made by a single worker.
Average product is output per unit of total input of a specific factor of production. It is the result of dividing total output by total input. For example, the average product of labor in a firm is the overall output divided by the total number of workers. This measure is analogous, for example, to labor productivity in an economy (i.e., the overall number of goods and services produced per person, which is an indicator of living standards in a country). To calculate the average product, other inputs have to be held constant.

The average product curve for a typical good is shaped like an inverted U (hump-shaped). For example, the average product curve will rise sharply if the labor force increases from one worker to two, because the extra worker allows for a division of labor that enables specialization to occur. The slope of the curve will be positive as long as this process continues. At some point, there will be less scope for specialization, and adding extra workers will produce a smaller increase in output. Eventually, the difficulty of managing large groups of people will lead to a situation where additional workers reduce average output (which falls even faster than total output, because total output is being divided by a larger number of workers); when this happens, the curve will slope downward. The average product curve will not reach zero, because work is still being done and total output remains positive. |
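A quick numerical sketch makes the definition and the hump-shaped curve concrete. The total-output figures below are invented purely for illustration:

```python
# Invented total-output figures for a firm employing 1..8 workers,
# chosen to show the hump-shaped average product curve described above.
total_output = {1: 10, 2: 26, 3: 45, 4: 60, 5: 70, 6: 76, 7: 77, 8: 76}

def average_product(output: float, workers: int) -> float:
    """Average product = total output / total units of the input (here, labor)."""
    return output / workers

for workers, output in total_output.items():
    ap = average_product(output, workers)
    print(f"{workers} worker(s): total output {output}, average product {ap:.2f}")
```

Average product rises from 10 to a peak of 15 (the gains from specialization), then falls as extra workers add less and less; note that it remains positive even at 8 workers, where total output has actually declined.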
The ancient Egyptian burial rituals were elaborate and complicated. Pharaohs and lesser officials constructed pyramids for their eternal resting places. Over time, pyramids grew larger and more complex with rooms for daily items necessary for the occupant to use in the afterlife. The development of pyramids was an evolutionary process and changed from one type to another.
The earliest type of burial complex the Egyptians used was the mastaba: a rectangular mound of dirt with burial chambers often dug into the ground. Around 3500 B.C., the earliest mastabas were made of mud brick with inward-sloping sides. These often had niches for making offerings. Construction became more elaborate: the shafts became lined with stone and the walls decorated. By building more chambers, the Egyptians allowed the deceased to take more items to the afterlife. The mastabas grew more elaborate and were enclosed by temples made of stone. Ordinary people still used mastabas for burials after pyramid construction began. Few of these burial chambers survive because the mud bricks deteriorated. While this type of tomb was not considered a pyramid, it was the precursor to the step pyramid with its sloped sides and steps. The pyramids kept the chamber-and-shaft concept for the body and the accessories buried with it.
The first type of pyramid was a step design built for the Pharaoh Djoser at Saqqara. Djoser died in approximately 2649 B.C., during the third dynasty. There are indications that the pyramid started out as a stepped mastaba that grew into a pyramid. The pyramid is square with sloped sides. Each successively smaller square is centred on top of a lower one, which gives the impression of steps. There are six steps, and the pyramid is approximately 200 feet high. The top is a small flat area. The burial chamber is about 28 meters underground and made of granite. Inside is a complex system of decorated passageways. It is the first funerary building made of stone.
Snefru built the first classically shaped pyramid in the fourth dynasty, about 2600 B.C., at Dahshur. Because the sides were too steep, the builders changed the angle about halfway to the top; because of the bend in the middle, it is known as the Bent Pyramid. Snefru then started another pyramid, the Red Pyramid, which became the first classic pyramid without steps. The pyramids that followed used the same general design. The sides are straight, at inward angles between approximately 45 and 55 degrees. The materials are limestone and granite. The insides have shafts and passageways that lead to the tomb and to auxiliary rooms holding the artefacts the occupant needed to continue his journey to the afterlife. The last great pyramid was Menkaure's, constructed about 2490 B.C. After that, the pharaohs were buried in the Valley of the Kings. |
Sociological theories of deviance seek to explain how the distinction between deviance and non-deviance is drawn. Labeling theory, associated with Howard Becker's Outsiders, questions who applies what label to whom: rather than taking the label "deviant" as given when it is applied to particular acts or people, it examines the labeling process itself, and studies have shown how labels open and close doors of opportunity, channeling people's behavior into deviance or conformity. Merton, by contrast, described five types of deviance in terms of people's acceptance or rejection of culturally approved goals and means. Deviance is relative in nature: sociological definitions of deviance, and ideas about what counts as deviant, vary across societies, which is why "What is deviance?" is a good question with which to begin an analysis of the field. Crime and deviance are related but do not represent the same actions — deviance involves noncriminal as well as criminal behavior, including the deviance of the powerful. Societies respond to deviance through internal and external social controls and through sanctions, which raises the question of deterrence. |
Sea level rise refers to an increase in the volume of water in the world's oceans, resulting in an increase in global mean sea level. Sea level rise is usually attributed to global climate change: thermal expansion of the water in the oceans and melting of ice sheets and glaciers on land. Melting of floating ice shelves or icebergs at sea does not raise sea levels.
Sea level rise at specific locations may be more or less than the global average. Local factors include tectonic effects, subsidence of the land, tides, currents, storms, etc. Sea level rise is expected to continue for centuries. Because of the climate system's inertia (the long response time of some of its components), it has been estimated that we are already committed to a sea-level rise of approximately 2.3 metres (7.5 ft) for each degree Celsius of temperature rise within the next 2,000 years. The IPCC Summary for Policymakers (AR5, 2014) indicated that global mean sea level rise will continue during the 21st century, very likely at a faster rate than observed from 1971 to 2010. Projected rates and amounts vary. A January 2017 NOAA report suggests that a global mean sea level (GMSL) rise of 0.3–2.5 m is possible during the 21st century.
(From Wikipedia: March 2017) |
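The committed-rise figure quoted above (roughly 2.3 metres per degree Celsius, realized over some 2,000 years) implies a simple linear scaling. The sketch below is a back-of-the-envelope illustration only; the real climate response is not strictly linear:

```python
# Linear extrapolation of the committed sea-level rise cited in the text:
# about 2.3 m of eventual rise per degree Celsius of warming.
COMMITTED_RISE_M_PER_DEG_C = 2.3

def committed_sea_level_rise_m(warming_deg_c: float) -> float:
    """Rough long-term committed sea-level rise (metres) for a given warming.

    Order-of-magnitude sketch only; the actual climate response is more complex.
    """
    return COMMITTED_RISE_M_PER_DEG_C * warming_deg_c

for warming in (1.0, 1.5, 2.0):
    rise = committed_sea_level_rise_m(warming)
    print(f"{warming} degC of warming -> roughly {rise:.1f} m of committed rise")
```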
Today, the US observes Martin Luther King, Jr. Day, a Federal holiday.
Dr. King was world-renowned for his work for the civil rights movement in America, leading to the Civil Rights Act of 1968.
Martin Luther King, Jr. was born Michael King, Jr. in 1929 in Atlanta, Georgia. His name was changed five years later, after his family visited Eisleben, Germany, the birthplace of Martin Luther, who began the Protestant Reformation. His father, Michael King Sr., changed both his own name and his son's name to Martin Luther in the reformer's honor.
You can read more about his family history.
At Presidential inaugurations, it is traditional for the President of the United States to take the oath of office with one hand on a Bible. At President Obama's inauguration today he used two Bibles, one belonging to Abraham Lincoln, and the other to Martin Luther King, Jr..
One of the most famous moments of Dr. King's short life was the speech known as "I Have a Dream," in which he lays out his vision of an America that celebrates freedom and equality.
Here's a short excerpt from the speech.
Do you remember the speech? How do you commemorate Martin Luther King, Jr. Day?
Let us know in the comments below.
Weathering of wood is a combination of chemical, mechanical, biological, and sunlight-induced processes that change the appearance and structure of wood. After two months of exposure, all woods will turn yellowish or brownish, and then gray. Dark woods will become lighter, while light woods eventually darken. Surface checks, raised grain, cupping, and warping develop as wood continues to weather.
Recent research conducted by the Forest Products Laboratory indicates that failure to properly treat new lumber can reduce the average life of wood by 20 percent. Understanding the differences between finishes makes it easier to select the right product. In the past, finishes were made from alkyd or natural oil resins such as linseed, tung, soya, and paraffin. The resins were often blended with waxes to provide additional water repellency, and then diluted with a mineral spirits solvent.
Technological advances and environmental regulations on emission levels of volatile organic compounds (VOCs) have spurred the development of new products. Water-based products, particularly those formulated with certain water-reducible synthetic oils and resins, have excellent penetration and perform as well as, or better than, oil-based (alkyd) finishes. The performance of commercially available wood finishes is often listed on a product label or in literature supplied by the manufacturer. The American Society for Testing and Materials (ASTM) has standardized test methods to measure the water repellency and color retention of wood finishes. In ASTM test D5401-93, a finish is applied to a 2" by 4" section of wood, allowed to cure for seven days under controlled conditions, and then tested for water absorbency.
Standard ASTM G53-88 evaluates the water repellency of coatings exposed to ultraviolet light and condensation in a weather exposure chamber for 1000 hours. Manufacturers also use outdoor tests to measure weathering in various climates, and they might provide test results if you request them. Finishes are generally classified into two basic categories: those that form a film or coating on wood, and those that penetrate.
These products form a coating, or film, that is a barrier between wood and the elements. Film-formers include many alkyds, latex/acrylics, and varnish resins in solvent or water-based finishes. Products without pigments are considered to be a clear or transparent finish, and have little or no protection from ultraviolet (UV) radiation. Pigments are added to paints, solid color stains, and semi-transparent finishes to change the appearance of the wood and to provide protection from UV rays.
Some of the newer water-based coatings are semitransparent acrylic blends that have excellent flexibility. Unfortunately, due to their higher molecular weight, acrylics still form a film on the surface of wood, and are subject to the cracking that is a characteristic of all film-forming finishes. A film finish cracks as wood expands and contracts during normal moisture cycling and water gets underneath the finish and deteriorates the wood. Removing film-forming wood finishes can be difficult, but is often necessary before re-application. If the failing coat is not removed, then the new coat may blister and peel.
Penetrating wood finishes are oil or water-based products that saturate wood pores to prevent water penetration. They typically contain a drying oil or resin in a transparent or semitransparent stain. Advantages of penetrating finishes over films are that they provide long-term water repellency, they don't trap moisture in the wood, and they do not peel or blister. Natural oils (linseed and tung, for example) are initially very effective in stopping the absorption of water into wood, but tend to darken over time because they are a food for fungi. Buildings treated with natural oils and resins generally need extensive cleaning before reapplying the finish.
Some of the newer water-based systems have synthetic oils and resins and they provide excellent water repellency and color retention. One of the main advantages of synthetic resins is that unlike natural oils, they do not serve as a food for most biological growth, making future coats easier to apply.
Correct application is critical to performance. Follow the manufacturers' instructions, particularly with the newer water-based formulations. All finishes should be applied to a clean surface, but penetrating finishes must be applied to surfaces that are porous and free from previous coatings. Although chlorine bleach will effectively remove many stains like mold and mildew, it can damage wood and is toxic to people and plant life. Newer, chlorine-free cleaners are environmentally safe and can actually increase product penetration by up to 25%.
Wood that is pre-treated with a cleaner or pressure washer will probably have better finish penetration. Water-based finishes tend to dry faster than oil-based products. To avoid lap marks, particularly on hot sunny days, apply these finishes only in the shade: the cooler surface will absorb better and allow for easier application of a second coat.
Routine maintenance is necessary, but the life-span of a finish depends on a variety of factors. Construction details, exposure to the elements, product choice, surface preparation, and application techniques are all essential to success. Some finishes may even require chemical stripping or sandblasting to restore wood to the proper condition before re-treatment. Finishes that weather unevenly and are re-coated without removing the old finish will have an unsightly, patchy appearance.
Although the wood finish is only a small percent of the cost of a log home, it is one of the more critical elements in construction. To most consumers, aesthetic appeal is just as important as performance when selecting a wood finish. Understanding the properties and expected performance of various products makes the decision process much easier for you. |
Chardonnet's career in science began with engineering studies at the École Polytechnique; he also assisted Louis Pasteur (1822-1895) in his efforts to save the French silk industry from a devastating silkworm epidemic. Realizing there was a market for an artificial silk, Chardonnet built upon the work of the Swiss chemist George Audemars and Sir Joseph Swan of England to develop cellulose-based fibers. Audemars had received a patent in 1855 for the manufacture of synthetic fibers; by 1880 Swan had developed threads from nitrocellulose.
Chardonnet first treated cotton with nitric and sulfuric acids and then dissolved the mixture in alcohol and ether. He then passed the solution through glass tubes, forming fibers, and allowed them to dry. These fibers, called rayon (the term used in referring to any fiber developed from cellulose), were highly flammable until they were denitrated. Reportedly, some garments made of early rayon burst into flames when lit cigarettes were nearby. Unfortunately, the techniques that existed at that time to denitrate the material weakened it and made it unsuitable for the textile industry. Chardonnet used ammonium sulfide to denitrate these fibers, thus reducing the flammability and retaining fiber strength comparable to that of silk. He received the first patent for his work in 1884 and began manufacturing rayon in 1891. The material was displayed at the 1891 Paris Exposition, where it won the grand prize. He was also awarded the Perkin Medal in 1914 for his development of rayon.
After his work with fibers, Chardonnet went on to study a number of other subjects including ultraviolet light, telephony, and the movements of birds' eyes. He died in 1924 in Paris, France. |
Semiconductor structures can guide electron motion in versatile ways, so researchers have begun using them to explore the electron’s wavelike, quantum-mechanical nature. But the task is complicated by the constant jostling electrons face in a solid. In Physical Review Letters, researchers describe a way to inject electrons individually with a high energy that makes them easy to distinguish from the others in the semiconductor. The team precisely timed the motion of the injected electrons across a micrometer-sized sample and found that most electrons made the trip without losing energy. If they can also avoid collisions that disrupt their quantum state, these electrons could provide new ways to test fundamental quantum behavior.
Experiments in the field of quantum optics have demonstrated a wide range of quantum behaviors for light. Lately, researchers have begun replicating some of these experiments using the wave nature of electrons in solids, for example, probing the details of the interference between a pair of electrons. Masaya Kataoka of the National Physical Laboratory (NPL) in Teddington, UK, and his colleagues suspected that the experiments would be easier with electrons having energies far above that of thermally excited electrons. But “before we did these experiments, some people told me it’s impossible to detect this, because an electron quickly loses energy” as it travels through a material, says Kataoka. The energy loss was seen in earlier experiments done by others.
The NPL team adapted a so-called electron pump structure, developed for use as a current standard, which emits single electrons at a precisely defined rate. Built on a semiconductor surface, the pump includes a set of metal electrodes that surround a tiny circle of semiconductor material. Applying a large enough negative voltage to the electrodes confines electrons to the disk by providing an energy hill, or barrier, to their motion in every direction. This “quantum dot” can hold at most a few electrons, and periodically lowering the barrier on one side of the disk at a microwave frequency liberates one electron in each cycle, which is quickly replenished. By using a dot a few hundred nanometers across, the team generated electrons with thousands of times more energy than typical thermal electrons at the temperatures of their experiments (around kelvin).
Using another electrode micrometers away, the team produced a separate energy barrier that they also caused to oscillate in height. Each pumped electron that reached this distant barrier with enough energy to surmount it contributed a tiny pulse of charge to a detection circuit. The circuit registered a steady current showing the average number of electrons per second. The team measured this current using a range of amplitudes and relative timings for the two oscillating barriers.
Even though the system couldn’t detect single electrons directly, it gave information about single-electron effects. For example, the measured current showed which conditions allowed, during each cycle, a single electron to be released by the pump and then detected beyond the distant barrier. The team found that adding a -tesla magnetic field, which forced electrons to travel along the edge of the sample, allowed most electrons to retain virtually all of their energy during their trip to the distant barrier. But in a -tesla field, more than half of the electrons lost energy. “We don’t fully understand yet” why the strong field reduces the energy loss so much, says Kataoka.
The researchers went further, exploring the precise timing of the electron emission. By changing the relative timing of the oscillating barriers, they deduced that the moment of release from the pump was very well controlled, varying by no more than picoseconds from one cycle to the next. This was unprecedented time resolution for high-energy electrons. While the results don’t yet demonstrate long-lived quantum coherence for high-energy electrons, showing no loss of energy is an essential step toward that possibility.
Gwendal Fève of the École Normale Supérieure in Paris, a member of the team that previously demonstrated interference between pairs of lower-energy electrons, was pleasantly surprised by how long the electrons retain their energy in the new devices. “Their system is quite interesting and very promising,” he says, and the higher energy could make new types of measurement possible.
- E. Bocquillon, V. Freulon, J.-M. Berroir, P. Degiovanni, B. Plaçais, A. Cavanna, Y. Jin, and G. Fève, “Coherence and Indistinguishability of Single Electrons Emitted by Independent Sources,” Science 339, 1054 (2013).
- D. Taubert et al., “Relaxation of Hot Electrons in a Degenerate Two-Dimensional Electron System: Transition to One-Dimensional Scattering,” Phys. Rev. B 83, 235404 (2011). |
Threats to Sea Otters
It is estimated that the worldwide population of sea otters once numbered between several hundred thousand to over one million before being nearly hunted to extinction by fur traders in the 1700s and 1800s. Sea otters finally gained protections with the signing of the International Fur Seal Treaty of 1911, and became listed under the Marine Mammal Protection and Endangered Species Acts in the 1970s. Worldwide, numbers have slowly recovered but still stand far below original population numbers. While sea otters are vulnerable to natural predators, their populations are significantly impacted by several human factors as well.
Conflict with Humans
Direct conflict with humans, such as shootings and entrapment in fishing traps and nets, poses a major threat to sea otter populations. Since sea otters eat many of the same shellfish humans like to eat, such as sea urchins, lobster and crab, they often find themselves in the same areas fishermen like to harvest. Some shell fishers view sea otters as competition and a threat to their economic gain. Many fishermen use fishing gear that can entangle sea otters and cause them to drown.
Whether intentionally or unintentionally, sea otters who find themselves too close to a fisherman’s harvest are often harmed or killed. Fortunately, the number of sea otter deaths from human conflict is slowly decreasing as a result of their protection under the Endangered Species Act and increased regulation of fishing nets.
Oil spills from offshore drilling or shipping are an immense threat to sea otter populations. When sea otters come into contact with oil, it causes their fur to mat, which prevents it from insulating their bodies. Without this natural protection from the frigid water, sea otters can quickly die from hypothermia. The toxicity of oil can also be harmful to sea otters, causing liver and kidney failure as well as severe damage to their lungs and eyes.
A historic example of the impacts oil spills have on sea otters is the 1989 Exxon Valdez oil spill in Alaska’s Prince William Sound, which killed several thousand sea otters. Sea otters are still threatened by events like this because countries around the northern hemisphere continue to ship and drill for oil throughout the Pacific and along coastal areas that sea otters call home. Because their numbers are low and they are located in a rather small geographic area compared to other sea otter populations, the California sea otter is especially vulnerable, and could be devastated by oil contamination.
Pollution on land runs off into the ocean, contaminating the sea otters’ habitat. This can jeopardize their food sources, as well as harm them directly. Sea otters are often contaminated with toxic pollutants and disease-causing parasites as a result of runoff in coastal waters. In California, parasites and infectious disease cause more than 40% of sea otter deaths. Hundreds of sea otters have succumbed to the parasites Toxoplasma gondii and Sarcocystis neurona, which typically breed in cats and opossums. Scientists have also reported the accumulation of man-made chemicals, such as PCBs and PBDEs, at some of the highest levels ever seen in marine mammals.
Length: California sea otters grow to about 4 feet; Northern sea otters are slightly larger.
Weight: California sea otters weigh from about 45 lbs (females) to 65 lbs (males). Northern sea otters can reach up to 100 pounds.
Lifespan: 10-15 years (males); 15-20 years (females) |
Design of Particle Accelerators
There are many types of accelerator designs, although all have certain features in common. Only charged particles (most commonly protons and electrons, and their antiparticles; less often deuterons, alpha particles, and heavy ions) can be artificially accelerated; therefore, the first stage of any accelerator is an ion source to produce the charged particles from a neutral gas. All accelerators use electric fields (steady, alternating, or induced) to speed up particles; most use magnetic fields to contain and focus the beam. In linear accelerators the particle path is a straight line; in other machines, of which the cyclotron is the prototype, a magnetic field is used to bend the particles in a circular or spiral path. Meson factories (the largest of which is at the Los Alamos, N.Mex., Scientific Laboratory), so called because of their copious pion production by high-current proton beams, operate at conventional energies but produce much more intense beams than previous accelerators; this makes it possible to repeat early experiments much more accurately.
Squash plants make a big statement in the vegetable garden with their exuberant growth and robust vines. They also will self-sow if any fruits are left in the garden over the winter. Occasionally, squash volunteers may grow in unexpected locations if seeds were distributed there by animals. Squash varieties include summer squash, such as zucchini, yellow squash and patty pan squash, as well as winter squash, such as Hubbard, pumpkin and butternut.
Summer squash grow on fairly upright plants and produce fruits mid to late summer. These fruits, including zucchini and yellow squash, have soft skins and flesh. They are sauteed, roasted or used in baked goods. Winter squash take a long time to ripen and have a hard outer rind with hard flesh. A cavity inside the fruit holds hundreds of seeds. Winter squash are used in soups, roasted or pureed for pies or spreads. They require a long cooking time to soften the flesh.
Squash begin as a small stem with one rounded leaf. As the plant grows, more leaves emerge. The leaves are large (4 to 8 inches wide), dark green, with a rounded shape and cut edges. All squash spread through rambling vines, although summer squash have a more upright compact growth than winter squash, according to Iowa State University. The plant produces yellow or orange blossoms midsummer to early fall, followed by fruits. Summer squash develop quickly and can grow from 6 inches to 12 inches in a few days. They are best harvested while small. Winter squash grow and ripen slowly.
Squash are warm-season crops, meaning they sprout when soil temperatures are above 65 degrees F and the last frost has passed. They usually emerge in early summer and grow quickly. Squash are not cold hardy. Vines and leaves blacken and wither with the first frost.
Squash, particularly winter squash and pumpkins, need a lot of space and can quickly take over the whole garden. They are best planted in their own space or given plenty of room to grow (at least 5 feet per plant). Summer squash produce prolifically, and one plant may provide as many as 20 large fruits. For most families, one or two plants is enough.
Pests and Disease
Squash may be troubled by boring insects or disease. The most common disease is powdery mildew, which covers the leaves with a white powder. This disease rarely affects the fruits, although it looks alarming. |
Welcome back to English composition. I'm Gavin McCall. Thanks for joining me. What are we going to learn today? We're going to learn all about the four types of pronouns, from subject and object pronouns to possessive and indefinite pronouns. As we remember, pronouns are one of the eight parts of speech, along with nouns, verbs, adjectives, adverbs, prepositions, conjunctions, and interjections.
Pronouns are words that stand in for a noun or a noun phrase. And an antecedent, that's the name for whatever noun or noun phrase that a pronoun stands in for or represents. The most common pronouns are I, he, she, it, you, him, her, me, they, my, our, your, their, this, that, those, these, who, and whose. Don't worry, you won't have to remember all of them and most should be very familiar to you. And of course, there are many, many more out there.
It's important to know them because pronouns have to agree with their antecedents both in number and gender. So knowing which one to use for what purpose is an important skill. As we'll see in this lesson, pronouns are versatile tools for writers as they can serve several functions in a sentence. Today, we'll cover subject, object, possessive, and indefinite pronouns in particular. Each of these types is differentiated by its purpose in the sentence.
The first type of pronouns we'll look at are subject pronouns. These refer to someone or something doing the action in the sentence. For example, in the sentence, he kicked the ball, the pronoun, he, is the subject pronoun, standing in for whatever man or boy kicked the ball and acting as the subject of the sentence in his place.
And if it was a girl or a woman kicking the ball, the pronoun I'd use would have to shift to match her gender, as in she kicked the ball even further. And the same goes for more than one antecedent. They kicked the ball until it went flat. In this example, it is also a pronoun because it's standing in for the ball.
As you might have already guessed after our discussion of subject pronouns, object pronouns are those that refer to someone or something that's being acted upon by the subject in the sentence. If, for example, I said, the boy kicked me, I'd be using the pronoun me as an object pronoun, since in that sentence me is the object of the verb kicked. Or if I was trying to say that the boy kicked his mother and his father, then I'd use a different pronoun. The boy kicked them, making sure to use the correct plural pronoun. And if he left his father alone, I'd write, the boy kicked her, referring only to his mother.
Possessive pronouns are a little different from either subject or object pronouns. These are used to indicate ownership and include words like mine, yours, his, hers, theirs, ours, and whose. They can be used to indicate ownership of a thing, as in that book is hers.
Or if we want to know who the book belongs to, whose book is that? And just like subject and object pronouns, possessives need to match the gender and the number of the antecedent they replace. For example, I think the book is actually theirs.
Probably the weirdest kind of pronouns are indefinite pronouns. As you might have guessed from the name, this is the category of pronouns that don't refer to something specific. There are three basic types of indefinite pronouns: universal pronouns, which include all, everybody, each, every, and both. Then there are partitives, which include any, anyone, anybody, either, neither, no, nobody, some, and someone. And finally quantifiers, including some, any, enough, many, and much.
One common error that writers make when using indefinite pronouns is to use a plural verb, even though most of these require a singular verb. For example, in the sentence, everybody wants to rule the world, it could seem like everybody would take a plural verb, since the pronoun is referring to more than one person. But in reality, it needs the singular, as it's essentially the same thing as saying that each person wants to rule the world.
And if I said that the boy kicked someone, that would also be an acceptable use of an indefinite pronoun, even though this one isn't standing in for the subject, but the object of the verb. And if I said that the boy has done enough for one day, I'd also be using an indefinite pronoun.
What did we learn today? We learned about the four primary types of pronouns, subject, object, possessive, and indefinite pronouns, and we got to see some of each in use. I'm Gavin McCall. Thanks for joining me.
The noun or noun phrase that a pronoun stands in for or represents.
A word that stands in for a noun or noun phrase in a sentence. |
If you’re interested in planets, the good news is there’s plenty of variety to choose from in our own Solar System. From the ringed beauty of Saturn, to the massive hulk of Jupiter, to the lead-melting temperatures on Venus, each planet in our solar system is unique — with its own environment and own story to tell about the history of our Solar System.
What also is amazing is the sheer size difference of planets. While humans think of Earth as a large planet, in reality it is dwarfed by the massive gas giants lurking at the outer edges of our Solar System. This article explores the planets in order of size, with a bit of context as to how they got that way.
A short history of the Solar System
No human was around 4.5 billion years ago when the Solar System was formed, so what we know about its birth comes from several sources: examining rocks on Earth and other places, looking at other solar systems in formation and doing computer models, among other methods. As more information comes in, some of our theories of the Solar System must change to suit the new evidence.
Today, scientists believe the Solar System began with a spinning cloud of gas and dust. Material at the cloud's center eventually collapsed under its own gravitational attraction to form the Sun. Some theories say that the young Sun’s energy began pushing the lighter particles of gas away, while larger, more solid particles such as dust remained closer in.
Over millions and millions of years, the gas and dust particles became attracted to each other by their mutual gravities and began to combine or crash. As larger balls of matter formed, they swept the smaller particles away and eventually cleared their orbits. That led to the birth of Earth and the other planets in our Solar System. Since much of the gas ended up in the outer parts of the system, this may explain why there are gas giants — although this presumption may not be true for other solar systems discovered in the universe.
Until the 1990s, scientists only knew of planets in our own Solar System and at that point accepted there were nine planets. As telescope technology improved, however, two things happened. Scientists discovered exoplanets, or planets that are outside of our solar system. This began with finding massive planets many times larger than Jupiter, and then eventually finding planets that are rocky — even a few that are close to Earth’s size itself.
The other change was finding worlds similar to Pluto, then considered the Solar System’s furthest planet, far out in our own Solar System. At first astronomers began treating these new worlds like planets, but as more information came in, the International Astronomical Union held a meeting to better figure out the definition.
The result was redefining Pluto and worlds like it as a dwarf planet. This is the current IAU planet definition:
“A celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighborhood around its orbit.”
Size of the eight planets
Jupiter is the behemoth of the Solar System and is believed to be responsible for influencing the path of smaller objects that drift by its massive bulk. Sometimes it will send comets or asteroids into the inner solar system, and sometimes it will divert those away.
Saturn, most famous for its rings, also hosts dozens of moons — including Titan, which has its own atmosphere. Joining it in the outer solar system are Uranus and Neptune, which both have atmospheres of hydrogen, helium and methane. Uranus also rotates opposite to other planets in the solar system.
The inner planets include Venus (once considered Earth’s twin, at least until its hot surface was discovered); Mars (a planet where liquid water could have flowed in the past); Mercury (which despite being close to the sun, has ice at its poles) and Earth, the only planet known so far to have life.
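As a rough companion to the size ordering above, here is a short Python sketch that ranks the eight planets by approximate mean radius. The values are rounded, commonly cited figures and are meant to be illustrative rather than authoritative:

```python
# Approximate mean radii in kilometers (rounded, commonly cited values).
radii_km = {
    "Jupiter": 69_911,
    "Saturn": 58_232,
    "Uranus": 25_362,
    "Neptune": 24_622,
    "Earth": 6_371,
    "Venus": 6_052,
    "Mars": 3_390,
    "Mercury": 2_440,
}

# Rank the eight planets from largest to smallest, relative to Earth.
for name, r in sorted(radii_km.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:8s} {r:>7,} km  ({r / radii_km['Earth']:.2f} x Earth)")
```

Note how the four gas and ice giants dwarf the rocky inner planets: even Neptune, the smallest giant, is nearly four times Earth's radius.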
To learn more about the Solar System, check out these resources:
Solar System (USGS)
Exploring the Planets (National Air and Space Museum)
Windows to the Universe (National Earth Science Teachers Association)
Solar System (National Geographic, requires free registration) |
• Use indigenous materials, such as rammed earth, or agricultural products, such as straw bales, for exterior walls.
• Use wind and solar power, which are viable in this region and fit within this cultural setting.
• “Harvest” rainwater for irrigation.
• Use drought-tolerant and wind-resistant plantings.
• Plant wind breaks.
• Berm buildings and incorporate sod roofs to save on heating and cooling costs.
• Place buildings on an east-west orientation to maximize southern solar gains.
• Minimize the intense western exposure.
• Plant deciduous shade trees on the south and west sides.
• Include basements and berms to take advantage of the Earth’s thermal mass for cooling and insulation.
• See the “Common Principles” section in the introduction to this chapter for more information.
A valid argument, in formal logic, is one in which the conclusion is correctly derived from the premises. That is, a valid argument is one in which, if the premises are true, the conclusion must also be true. A valid argument whose premises are true is called a sound argument, and its conclusion must be true. A valid argument whose premises are not all true is called a valid but unsound argument, and its conclusion is not necessarily true.
A deductive argument is said to be valid when the inference from premises to conclusion is perfect.
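The definition above can be made concrete with a brute-force truth-table check. This is a sketch of our own (the helper function and example argument forms are not from the source): an argument is valid exactly when no assignment of truth values makes every premise true while the conclusion is false.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Return True if no truth assignment makes every premise true
    while the conclusion is false -- the definition of validity."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample row
    return True

# Modus ponens: "if P then Q; P; therefore Q" -- a valid form.
mp_premises = [lambda e: (not e["P"]) or e["Q"],  # P -> Q
               lambda e: e["P"]]
print(is_valid(mp_premises, lambda e: e["Q"], ["P", "Q"]))   # True

# Affirming the consequent: "if P then Q; Q; therefore P" -- invalid,
# since P = False, Q = True makes the premises true but the conclusion false.
ac_premises = [lambda e: (not e["P"]) or e["Q"],
               lambda e: e["Q"]]
print(is_valid(ac_premises, lambda e: e["P"], ["P", "Q"]))   # False
```

Note that validity says nothing about whether the premises actually are true; that further condition is what makes a valid argument sound.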
This week, we used the book “So then I…” to learn about recess strategies. Recess can be a time of great freedom and fun, but for many students, recess can be a time of stress.
Common recess stressors include:
- feeling embarrassed
- not knowing what to do, or not being skilled enough
- not understanding if others are friends or foes
- being rejected by the group
When recess becomes a time of stress, it can have a significant impact on the learning that takes place in the classroom. Students may withdraw or act out.
Recess stress and how it impacts the learning in the classroom:
- hinders students from being open and ready to learn
- releases cortisol, which stops new learnings from “settling into memory”
- refocuses attention on surviving the school day, not thriving
Many students need explicit instruction in solving social challenges at recess and increasing calming skills.
I have scanned the entire book we read this week in class so that you can access it at home. Please click on the link below:
If you have any questions or concerns please let Madame Lexy know! |
Rainforests are forests that receive lots of rain. There are only a small number of rainforests around the world, found in countries like Brazil, Congo, Indonesia, Mexico, Peru, and Venezuela, to name a few. However, many of these rainforests are in danger because of factors like logging, mining, agriculture, and urbanization. And even though rainforests are few, they serve an important role in ecosystems worldwide, which is why people need to be informed and do whatever it takes to protect and save what is left of them.
The rainforest is home to diverse types of living organisms. Rainforests have been called the jewels of the Earth, since many interesting varieties of plants, animals, and microorganisms reside there. Like all other living things in the world, rainforest creatures also have the right to survive, which is why the rainforest should be protected.
Rainforests are also known as the world’s largest pharmacy. Aside from providing clean water and food for their inhabitants, one fourth of all modern medicines were discovered using rainforest plants. Although the breakthroughs were first made by indigenous people, pharmaceutical companies tested a variety of plants from rainforests which are currently used worldwide to cure sicknesses like fever, fungal infections, burns, respiratory problems, and wounds, to name a few.
Rainforests also help stabilize the world’s climate by absorbing carbon dioxide from the atmosphere and replacing it with oxygen. When rainforests are destroyed, carbon dioxide builds up in the atmosphere, creating a greenhouse effect that can harm many living organisms, including us humans.
There are other reasons why rainforests should be protected: they protect communities from floods, drought, and soil erosion; they supply the needs of indigenous people living there; and they are fascinating places to visit, where one can enjoy and appreciate the magnificence of nature.
In the end, everyone should think that everything (living or non-living thing) on Earth is created with a purpose. If people care enough for rainforests today, the future generation will still have a chance to experience its benefits. |
Researchers at Newcastle University in the UK have developed a new cell culture technique that allows scientists to produce enormous quantities of cells using a very small growth area. The key to the method is a “peptide amphiphile” coating that allows cells to grow and then “peel away” and detach from the growth surface, leaving space for more cells to grow. The method could be very useful in growing the huge numbers of cells required for upcoming cell therapies.
Cell therapy, using regenerative stem cells, holds enormous promise, but at present it is difficult to grow the huge numbers of cells required. Traditionally, cells are grown in batches inside plastic flasks before being treated using enzymes or chemicals to detach them from the flask. The number of cells grown per batch is limited by the space in each flask and the lab space needed to store and manage the flasks. Millions of patients could benefit from cardiac cell therapy, for example, but it has been estimated that a growth area equivalent to Central London and Midtown Manhattan running simultaneously would be required to produce enough cells.
This new technique approaches things a little differently. The researchers developed a “peptide amphiphile” coating that lets cells growing on it detach themselves once they reach a certain stage of growth, meaning that they float away and can be collected from the liquid medium above. As one cell floats away, it leaves space for another cell to grow, meaning that a flask can be transformed from a single-use piece of equipment into a continuous source of cells.
“This allows us to move away, for the first time, from the batch production of cells to an unremitting process,” says Che Connon, a researcher involved in the study. “Remarkably, with this continuous production technique even a culture surface the size of a penny can, over a period of time, generate the same number of cells as a much larger-sized flask. With our new technology, one square meter would produce enough cells to treat 4,000 patients, while traditional methods would require an area equivalent to a football pitch!”
“There is a fantastically high number of patients in need of cell therapy, such as those suffering from heart, cartilage, skin and cancer related diseases,” says Martina Miotto, first author on the study. “Our new technology provides a much-needed solution while saving costs, reducing materials and improving the quality and the standardisation of the final product.”
Study in journal ACS Applied Materials & Interfaces: Developing a Continuous Bioprocessing Approach to Stromal Cell Manufacture… |
Primary teachers build on the informal knowledge children bring to school and begin teaching essential early number concepts. Young children are active learners and learn best in problem-solving situations where they have opportunities to investigate and construct their own ideas. Teachers ask questions that help students make connections with prior knowledge, everyday life situations, manipulatives and other mathematical topics. Number sense begins to develop as children learn about the meaning of number words and other vocabulary, quantity, symbols and the relationships numbers have to each other in our base-ten numeration system. Understanding how our number system works leads to greater success in learning operations and other math topics. |
Microcontrollers have reached a point in cost and capability where developers of many applications no longer have to write strictly bare-metal code. Instead, developers can write code at a higher level, similar to the way an application developer on a PC might write their code. To do this, there are two different mechanisms available to embedded software developers: APIs and HALs.
A HAL is a hardware abstraction layer that defines a set of routines, protocols and tools for interacting with the hardware. A HAL is focused on creating abstract, high-level functions that can be used to make the hardware do something without requiring detailed knowledge of how the hardware does it. This can come in extremely handy for developers who work with multiple microcontroller platforms and need to port applications from one to the next. A HAL is also a great way to let engineers who aren’t experts in the underlying hardware write useful application code without the nitty-gritty details.
An API is an application programming interface that defines a set of routines, protocols and tools for creating an application. An API defines the high-level interface of the behavior and capabilities of a component, along with its inputs and outputs. An API should be generic and implementation independent, which allows it to be used in multiple applications with changes only to the implementation behind the API, not to the general interface or behavior.
Figure 1 below shows what a typical software stack-up might look like for a developer designing embedded software.
Figure 1 – Embedded Software Stack-up
APIs and HALs are closely related but serve two different functions within software development. The HAL sits between the low-level drivers and the higher-level software stacks, providing a common interface for components such as an RTOS and middleware like USB, Ethernet and file systems. The HAL can act as a wrapper that provides a common interface between existing drivers and higher-level code, or it can exist as the driver interface itself. APIs act as a toolkit to help high-level developers quickly generate application code. They provide common interface code for controlling the real-time behavior of the system and accessing common components such as serial communication and file access.
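As a rough sketch of this layering (written in Python for brevity, with invented names; a production embedded HAL would normally be written in C), the HAL can be modeled as an interface that higher-level code depends on, while each board supplies its own driver implementation:

```python
class UartHal:
    """Hypothetical HAL interface: application code never touches registers."""
    def write(self, data: bytes) -> int:
        raise NotImplementedError

class MockUart(UartHal):
    """Stand-in for a board-specific driver; swap it without touching the app."""
    def __init__(self) -> None:
        self.sent = bytearray()

    def write(self, data: bytes) -> int:
        self.sent.extend(data)  # a real driver would feed the TX register here
        return len(data)

def log_message(uart: UartHal, msg: str) -> int:
    """API-style helper: high-level code written against the HAL, not hardware."""
    return uart.write(msg.encode() + b"\n")

uart = MockUart()
log_message(uart, "boot ok")
```

Porting to new hardware then means supplying a new `UartHal` implementation; `log_message` and everything above it stays unchanged.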
Separating these two concepts and using a layered software architecture can dramatically increase the reusability of embedded software. Imagine being able to swap out every layer beneath the HAL and replace it with new hardware and drivers. That is exactly what happens when it's time to upgrade existing hardware: instead of starting from scratch, only the code beneath the HAL needs to be updated. The same idea applies to replacing code above the HAL: same hardware, new application. The result: faster development cycles, increased code reuse and increased robustness due to heritage. |
(Coxiella burnetii infection)
Q fever is a worldwide zoonotic disease caused by the bacterium Coxiella burnetii. Although a variety of animals may be infected, cattle, sheep, and goats are the primary reservoirs for C. burnetii. Infected animals can shed the organism in birthing fluids, placenta, milk, urine, and feces. Coxiella is extremely hardy and resistant to heat, drying, and many common disinfectants, which enables it to survive for long periods in a contaminated environment (maternity pen, stall, barnyard). Infection of humans usually occurs by inhalation of C. burnetii from air containing barnyard dust contaminated by dried placental material, birth fluids, and excreta of infected animals. Other less common modes of transmission include ingestion of unpasteurized milk and dairy products, and tick bites.
The majority of infected humans exhibit mild flu-like symptoms or are asymptomatic. Acute and chronic clinical disease forms can occur in patients. Acute illness symptoms range from fever, headache, myalgia, non-productive cough, and gastrointestinal upset to more serious illness such as pneumonia, hepatitis, miscarriage, or myocarditis. Chronic Q fever is a severe illness occurring in less than 5% of infected patients. Endocarditis is the most common manifestation of the chronic form. Diagnosis of Q fever can be challenging, but the disease is often successfully treated when identified early.
Information on this page has been organized into two categories. Please choose one of the following. |
The Fraser River begins high in the Rocky Mountains of Canada and 1300 kilometers later it spills into the Pacific Ocean near Vancouver in southwest British Columbia. The river’s wide delta attracts millions of migrating birds annually making the Fraser River estuary a premier bird habitat in Canada.
Several million shorebirds pass over the estuary each year in migration and tens of thousands remain in the winter. The mudflats on Roberts Bank in the centre of the estuary harbor the greatest number of shorebirds. Over 500,000 western sandpipers have been estimated to use the mudflat on a single day in spring migration. The mudflats are many kilometers wide during low tide. The mud teems with tiny invertebrates – in some places over 1000 invertebrates have been tallied in a 10 cm diameter core of mud. And on the surface, tiny diatoms and bacteria coat the mud in a greenish hue that western sandpipers dab from the surface with specialized tongues. In the marshes, dowitchers probe for marine worms and yellowlegs dart after small fish. On the sandflats, black-bellied plovers and dunlins pursue marine worms.
High tides push the birds toward the marsh edges and the falcons that prey on them. Some dunlins form tight, bundled flocks that make hunting difficult for falcons, while others join black-bellied plovers in adjacent cultivated fields to seek out terrestrial insects and worms. No month goes by when shorebirds are not on the delta.
The Fraser River estuary comprises nearly 32,000 ha of intertidal mud and sandflats. The delta includes these intertidal habitats plus cultivated farmlands and suburban and industrial developments behind dykes. The importance of the Fraser River delta as a key bird and fish habitat is well known to Canadians, and conservation agencies and organizations have secured many key places for them. The intertidal areas boast the various designations listed below. Large bogs have been purchased, and the mostly privately owned agricultural lands are maintained for farming uses under legislation. Boundary Bay along the southernmost shore, Sturgeon Banks along the northwest shore, and the South Arm Marshes in the Fraser River mouth are Provincial Wildlife Management Areas. Alaksen National Wildlife Area on Roberts Bank in the center of the estuary is a Ramsar Wetland of International Importance. The George C. Reifel Migratory Bird Sanctuary offers a small amount of protection to some marshes on Roberts Bank. And the entire delta is an Important Bird Area with the greatest combined total number of global, continental and national species in Canada. The WHSRN designation applies to the intertidal portions of the entire estuary and upriver to the South Arm marshes.
Species that use this site
|Species|Maximum single day count|
|---|---|
|Lesser Golden Plover|100|
|Western Sandpiper|500,000 - 1,000,000|
There are records for over 50 species of shorebirds, several of which are rare or casual Asiatic migrants. The regular occurring species on the Fraser River estuary are shown in the table below. Breeding species are indicated with an asterisk. For more information see: Butler, RW and RW Campbell. 1987. The birds of the Fraser River delta: populations, ecology and international significance. Can. Wildl. Serv. Occas. Pap. No. 65, Ottawa.
The intertidal beaches in the Fraser River estuary are largely unused by humans. The exceptions are sand beaches on the east and west shores of Boundary Bay, and on Iona Island on Roberts Bank that are used by sunbathers, swimmers and windsurfers, and two large port developments on Roberts Bank. The British Columbia ferry corporation operates a major ferry terminal and the Vancouver Port Authority operates a coal port and container ship facility on southern Roberts Bank. The delta land behind dikes is home to hundreds of thousands of British Columbians who reside in houses in the Cities of Richmond and Surrey, and Municipality of Delta. Much of the land is cultivated for vegetable crops, pasture and berry crops, and increasingly covered in greenhouses. Freighters, coastal ships and recreational boats ply the Fraser River to docks along the river.
The various provincial, national and international designations provide several layers of protection to the habitats. In addition, the British Columbia Wildlife Act and the federal Migratory Bird Convention Act provide some protection to the birds. Seasonal hunting is permitted in some areas and strictly controlled in the estuary.
Impacts and disturbance
Recreational use of the estuary is growing as more people move to Vancouver. Disturbance from wind surfing, sail boarding and jet skiing is localized at the moment. However, conflicts between the conservation and educational uses of the estuary will likely continue to grow.
Much of the estuary is protected for wildlife use under various jurisdictions. The greatest threat is to the important shorebird habitat on Roberts Bank, where port expansion projects will cover some mudflat habitat. The port authorities have responded to environmental concerns with mitigation and adaptive management proposals to environmental agencies.
The British Columbia government has drafted habitat management plans for Boundary Bay, Roberts Bank and Sturgeon Banks. A priority will be to minimize the impact of port development on shorebird habitat.
- Canadian Wildlife Service
- Ministry of Water, Land and Air Protection, British Columbia
- Centre for Wildlife Ecology at Simon Fraser University
- City of Richmond, British Columbia
- City of Surrey, British Columbia
- Municipality of Delta, British Columbia
- Greater Vancouver Regional District
- Federation of BC Naturalists
- Nature Trust of British Columbia
- Friends of Semiahmoo Bay
- Vancouver Natural History Society
- Boundary Bay Conservation Committee
- Pacific Joint Venture
- Nature Canada
- Nature Conservancy of Canada
Senior Research Scientist
Pacific Wildlife Research Centre
Canadian Wildlife Service
5421 Robertson Road, Delta
British Columbia, V4K 3N2 Canada
Links to Additional Resources |
Film Studies: Animation
Brief Introduction to animation; Past and Present.
Animation is the rapid display of 2-D images or 3-D artwork or model positions in order to create the illusion of movement. Some of the earliest examples of animation trace back to the Paleolithic cave paintings of animals where they have multiple sets of legs in different positions, clearly showing a storyboard of motion.
From spinning cylinders (known as zoetropes), with a row of images along the inside wall that created the illusion of motion when spun, the next big development towards motion pictures was the flip book.
The first flip book was invented by John Barnes Linnett in 1868. A set of sequential pictures shown at high speed creates the effect of motion. The same principle was used to produce some of the first animated cartoons ever made (see: Gertie the Dinosaur).
Some animation productions use what is called stop motion. Stop motion uses physical objects instead of drawn images. The object is photographed, moved slightly, then photographed again. This series of photos can be played back at normal speed and the object appears to be moving by itself. Clay animations such as Wallace and Gromit, as well as animated movies that use posed figures, for instance James and the Giant Peach, use the stop-motion process.
In 1995, Pixar Animation Studios produced the first fully computer generated feature film called Toy Story, changing animated films forever and proving that companies were slowly making the transition from traditional animation to CGI animation. (CGI animation see: Geri's Game, For The Birds)
There is a very wide variety of films we can watch for this genre. We have decided to spread out the different animation movies and try to watch as many different ones as possible. We will be looking at some of the original cartoon movies produced by the first big Hollywood companies like Walt Disney Studios and Warner Bros. productions.
Other movies include more recent productions by these companies, which now use more developed technology systems.
Our list of movies that we plan on watching includes:
-Snow White and the Seven Dwarfs
-The Lion King
-Beauty and the Beast
-James and the Giant Peach
-The Jungle Book
-Wallace and Gromit
-Lord of the Rings (animated version)
-What I think defines the genre?
To start with the obvious, animation is all produced by photographing drawings or arranged objects one frame at a time to create the illusion of movement. There is no 'real life' scenery and there are no real people involved in the movie itself; the only part of the movie where people are actually used is the dialogue. However, there have been movies made by mixing 'real life' and animation, such as Space Jam, where either cartoon characters are edited into the film to look like part of the real world, or the actors (in this case Michael Jordan) are edited in using blue screen to make it look like they are in the 'animation world' (see Space Jam 1996).
Although animations often contain genre-like elements, they are not so much a defined genre category as a film technique. Many animated films are based upon fairy tales and make-believe stories. They almost always include stereotypical characters such as the hero, the villain and the damsel in distress. Animation productions often appeal to children and are seen as "children's entertainment".
However, there are some 'adult' cartoon animations targeted only at adults, such as Futurama, South Park and Family Guy (see South Park - The "F" Word), but these are often only TV series.
-What similarities are there? (Include Video example)
Like I said, the similarities between animated films are mainly the fairy-tale/make-believe subject matter, almost always appealing to the children's entertainment industry.
Here are a few examples that link to the blog in some way:
Gertie the Dinosaur (Winsor McCay, 1914)
For The Birds
Space Jam (1996)
South Park - The "F" Word
-Other interesting thoughts.
I particularly like the category of animation that we have chosen to focus on because, being an art student, I am always interested in the creative side of things. Not only is animation creative because it's movie making, but it also involves a lot of skill and good ideas to draw the characters and make these movies. |
May is Lyme Disease Awareness Month
Lyme disease is caused by a bacterium (Borrelia burgdorferi) and is transmitted by tick bites. Lyme disease affects both animals and humans and, according to the CDC, is the most commonly reported vector-borne illness in the United States. The bacterium is carried and transmitted by a tiny black-legged tick known as the deer tick.
Deer ticks are very common along the east coast of the United States, including Virginia. They are found in forests, grassy areas, wooded areas, and marshy areas near lakes and rivers. A person or animal can be bitten by a deer tick while hiking, camping, enjoying the outdoors, or even in their own backyard. It is important to check yourself and your pet for ticks after every venture outdoors. But, as the deer tick is so small, it can easily be missed even with regular tick checks.
Symptoms of Lyme disease in animals include fever, lameness, joint swelling, loss of appetite, and decreased activity. It can take 2–5 months for symptoms to present, and it is possible for all, some, or no symptoms to show up. Even if the disease is asymptomatic, it can still lead to severe internal damage, ranging from arthritis to kidney failure, in the future. Treatment of Lyme disease includes a long course of antibiotics. It is always important to complete a prescribed course of antibiotics as directed until all of the medicine is gone. It is also important to then retest at the proper interval to ensure the disease is gone.
The best way to help protect your pet from Lyme Disease is to consistently give flea and tick preventatives all year round, regularly have them tested for Lyme Disease during their annual exam, and vaccinate your dog against Lyme Disease.
We all want to protect our pets from the possible dangers they can face every day, please allow us to help you protect yours!
For more information, visit the following links:
AVMA Lyme Disease info Lyme Disease Association CDC Lyme disease Info |
Slovenians emigrated from the Slovenian ethnic territory in three major waves
In the period between 1860 and 1914 there was a significant increase in the population of Europe, including Slovenia, and the land could no longer sustain such great numbers. Moreover, farmers gained personal freedom following the abolition of feudalism and many young men wanted to avoid military service. It is estimated that in this wave, which ended with the First World War, almost one third of the population emigrated from the Slovenian ethnic territory. They most often emigrated to the USA, Germany, Argentina, and other parts of the Austro-Hungarian Empire, and to a lesser extent to Brazil, Venezuela, Canada and elsewhere. A particular group of emigrants were the so-called Alexandrines, mostly young women from the Primorska region and Ziljska Dolina (Ziljska Valley), who would emigrate to Egypt to serve as wet nurses and nannies for rich families.
In the period from 1918 to 1941, Slovenians were abandoning their homes due to a combination of economic (less developed regions) and political reasons (the people of Primorska were fleeing from fascism and Carinthian Slovenians from German nationalism). The tactic of political persecution often included economic neglect of target regions. As the USA eventually placed severe restrictions on immigration, the bulk of emigrants turned to Argentina and, to a lesser extent, to other South American countries, and also to Canada and Australia. There was also considerable emigration to other European countries, mostly France, Belgium and Germany. Furthermore, a part of the population emigrated to other parts of the Kingdom of Serbs, Croats and Slovenians. The emigration of young women to Egypt continued.
In the period between the Second World War and the end of the 1970s, emigration was influenced by both political and economic reasons. Immediately after the end of the Second World War, there was an exodus of people who did not agree with the communist authorities and whose lives were threatened as a consequence. First they settled in refugee camps in Austria and Italy and from there they migrated around the world. In the following decades, many people who came into conflict with the communist authorities continued to emigrate from Slovenia. They mostly left for Argentina, the USA, Australia, Canada, and to a lesser extent, the United Kingdom, Sweden and Germany. The 1960s, however, witnessed a wave of economic migrants, which mostly headed to Germany, Sweden, France, Australia, Canada, and the USA, as well as to certain other European countries. |
We need to help the ozone layer recover to concentrations common before 1970 in Antarctica, the Arctic, and throughout mid-latitudes. Here is why.
Global warming of 0.6 °C from 1970 to 1998 was caused by production of chlorofluorocarbon (CFC) gases used widely for refrigerants, spray-can propellants, solvents, and foam-blowing agents. When the Antarctic ozone hole was discovered in 1985, political leaders at the United Nations and scientists worked promptly to pass the Montreal Protocol on Substances that Deplete the Ozone Layer in 1987. This protocol mandated major cutbacks in production of CFCs beginning in 1989. The increase in CFC concentrations in the atmosphere stopped in 1993. The increase in ozone depletion stopped in 1995 and the increase in temperatures stopped in 1998 as shown in the plot in the left part of the header above.
As long as the ozone layer remains depleted compared to levels before 1970, ocean heat content will continue to rise, ocean temperatures will continue to rise, glaciers will continue to be sublimated and to melt, sea level will continue to rise, and storms are likely to become more severe. Air temperatures, however, are not expected to rise significantly because, as shown in this graph, they are observed to increase only as long as ozone depletion is increasing. This is because the primary effect of ozone depletion is to increase the amount of solar ultraviolet-B radiation reaching Earth, where it dissociates ground level ozone pollution, heating the air most efficiently in populated, industrialized regions containing the most ozone pollution. Ultraviolet-B also penetrates oceans hundreds of meters, efficiently increasing ocean heat content.
CFC gases were very safe to use and popular because they do not interact with most substances. But when they rise into the stratosphere, CFCs can be dissociated by very high energy solar ultraviolet radiation, ultimately releasing atoms of chlorine. Under very cold winter conditions in the lower stratosphere, one atom of chlorine can lead to destruction of more than 100,000 molecules of ozone. It takes 5 to 7 years for a molecule of CFC to rise into the stratosphere and it can remain there for as long as a century.
The Antarctic ozone hole is recovering very slowly and is expected to reach pre-1980 levels by around 2075. While the size of the Antarctic ozone hole was only 9.2 million square kilometers in September 2019, the smallest area since 1984, this is thought to be an unusual dip similar to those in 2002 and 1988 shown in the plot in the central part of the header above.
The Montreal Protocol is one of the greatest success stories of international environmental diplomacy, stopping the increases in ozone depletion and global temperatures within a decade after it took effect in 1989. The world would probably be another half degree warmer if it had not been for the Montreal Protocol. But there are significant problems remaining that we need to take care of.
While there were alternative gases that did not deplete ozone, converting existing air conditioners, refrigerators, and freezers to use these gases cost hundreds of dollars for each home-sized unit and for each automobile. A major feature of the Montreal Protocol was to phase out CFCs faster in developed countries. This led to a thriving black market for CFCs legally manufactured in developing countries but illegally diverted to developed countries for maintenance of existing equipment. By the mid-1990s, CFCs were the second largest illegal import via Miami, second only to cocaine. Illicit trade in ozone-depleting substances has been a significant problem, slowing recovery of the ozone layer. More effective enforcement could have major benefits for Earth’s climate.
In 2010, production of CFCs became illegal in China. In 2018, however, scientists measured atmospheric concentrations of CFC-11 in eastern China that suggested a major increase since 2012. The Environmental Investigation Agency traced the source to at least 18 factories producing polyol blend rigid foam used widely for insulation of buildings. The manufacturers admitted that they knew CFC use was illegal, but it was cost effective and it was utilized by all their competitors. After this illegal manufacturing attracted international attention, the Chinese government has improved enforcement of the Montreal Protocol.
There are still large numbers of air conditioners, refrigerators, freezers, and some fire extinguishers that use CFCs, especially in the developing world. The CFCs in these units need to be captured and disposed of safely. Plus new laws go into effect in 2020 to eliminate HCFCs used in refrigeration units built before 2010. CFC and HCFC refrigerants are prolonging the depletion of the ozone layer and increasing the long-term warming of Earth. We need to take enforcement very seriously.
Ozone is also depleted by volcanic eruptions but returns to normal within years after the end of basaltic lava extrusions such as Bárðarbunga in 2015 and within a decade after major explosive eruptions such as Pinatubo in 1991 as shown in the plot in the right part of the header above. There is not much we can do about volcanic eruptions, but luckily those that affect climate in major ways are very, very rare.
There is need for considerable research trying to understand the details of ozone in the atmosphere, whether there is anything we can do to speed recovery from ozone depletion, and whether there is any way to extract CFCs from the atmosphere. These are difficult problems because concentrations of ozone in the atmosphere are only about 0.3 parts per million, while concentrations of CFCs are closer to 0.6 parts per trillion.
As we burn more and more fossil fuels it is very important that we reduce air pollution. The World Health Organization (WHO) estimates that 4.2 million people die prematurely each year due to ambient air pollution, which is worst in China, India, southeast Asia, and Africa. More than 91% of the world’s population already lives in places exceeding WHO air quality guidelines. We know how to minimize air pollution. We have the technology. We just need to consider health a high priority. |
Applications of Newton’s Laws of Motion
When a body applies a force on another body, the second body applies an equal and opposite force on the first. We have learnt about these action and reaction forces from Newton’s third law of motion. In nature, forces act in pairs; there is no individual, isolated force. The two forces are complementary to each other. One of these forces is called the action force and the other the reaction force. As long as the action force exists, the reaction force exists as well. A practical application of Newton’s laws of motion is described below with an example.
Exploration in the space:
When a rocket moves upward during a launch, you will see smoke-like white clouds coming out of the rear nozzles. Why is this smoke seen? Burning fuel produces gas at very high pressure; we often see this plume of gas from the earth. The gas comes out through the small opening at the rear of the rocket with tremendous velocity. This produces a tremendous reaction force that pushes the rocket forward with very high velocity [Figure].
Artificial satellites are used extensively in modern telecommunication systems and have contributed greatly to space research. Behind this success, the development of rocket technology has played a vital role.
Although the gas is light, the momentum of the emitted gas is very high because of its very high velocity. According to the principle of conservation of momentum, the rocket acquires an equal but oppositely directed momentum and hence rises with high velocity. Normally, a rocket uses liquid hydrogen as fuel, with liquid oxygen to burn it. By a special process and at a controlled rate, the liquid hydrogen and oxygen are allowed to enter the combustion chamber. Burning the fuel produces high-pressure gas that escapes at very high speed through the opening at the bottom of the rocket.
Consider a rocket in motion in space, so that air resistance and the influence of gravity can be ignored. The emission of gas from the rocket generates a force, or thrust, directed opposite to the motion of the gas, and this pushes the rocket ahead at very high speed. With the help of rockets it has been possible to reach escape velocity (11.2 km s⁻¹), and hence, overcoming the pull of gravity, satellites can be stationed in space and various explorations have been successful.
Let the applied thrust = F
Mass of the rocket = M
Mass of the gas ejected in time ∆t = ∆m
Velocity of the ejected gas = v
Change of momentum of the gas in time interval ∆t = (∆m)v
According to the principle of conservation of momentum,
change of momentum in time ∆t = impulse applied to the rocket
so, (∆m)v = F × ∆t
or, F = (∆m/∆t)v; here, ∆m/∆t = rate of fuel consumption.
If the instantaneous acceleration of the rocket is a, then F = Ma
so, a = F/M = (1/M)(∆m/∆t)v
From this equation it can be seen that:
- As the mass of the rocket decreases (as fuel burns), its acceleration increases.
- To increase the acceleration of the rocket, the rate of ejection of gas must be increased.
- When the velocity of the ejected gas relative to the rocket increases, the acceleration also increases.
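These relations can be checked numerically. Below is a minimal Python sketch; the fuel rate, exhaust velocity, and rocket mass are illustrative values chosen for the example, not figures from the text:

```python
def rocket_thrust(fuel_rate, exhaust_velocity):
    # F = (dm/dt) * v : thrust from burning fuel at fuel_rate (kg/s)
    # and ejecting it at exhaust_velocity (m/s)
    return fuel_rate * exhaust_velocity

def rocket_acceleration(thrust, mass):
    # a = F / M : instantaneous acceleration of a rocket of mass M (kg),
    # ignoring gravity and air resistance, as in the derivation above
    return thrust / mass

F = rocket_thrust(fuel_rate=100.0, exhaust_velocity=2500.0)
a = rocket_acceleration(F, mass=50000.0)
print(F)  # 250000.0 (newtons)
print(a)  # 5.0 (m/s^2)
```

Note that as fuel burns, M decreases while F stays the same, so the computed acceleration increases, exactly as the bullet points state.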
Seasonal affective disorder (SAD) is a form of depression that becomes more serious during certain times of year. For most people, SAD occurs during the autumn and winter, but it can happen during the spring and summer, too.
Experts believe SAD might be triggered by a lack of sunlight, which causes changes in the production of certain hormones, like melatonin and serotonin.
Common symptoms of SAD include the following:
- Feeling sad and depressed much of the time
- Sleep problems (such as oversleeping or insomnia)
- Lethargy and fatigue
- Changes in appetite (poor appetite or carbohydrate cravings)
- Gaining or losing weight
- Trouble concentrating
- Loss of interest in activities that were previously enjoyed
- Feelings of hopelessness or thoughts of suicide
Unfortunately, people with SAD may experience sexual problems, too. They may lose interest in sex or feel too tired for sexual activity. Orgasm difficulties are common. Men might have trouble with erections.
Medications for SAD, including selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), and tetracyclic and tricyclic drugs, may also have sexual side effects.
Often, patients see their sexual symptoms improve when they are treated for SAD. Seeing a doctor is an important first step.
SAD treatment options might include:
- Light therapy (phototherapy). Sitting in front of a lightbox for a prescribed amount of time can help bring brain chemicals back to normal levels. A doctor can recommend the best lightbox for this purpose.
- Antidepressants can be an effective treatment for SAD, but there can be sexual side effects. Patients who believe their medication is causing sexual problems should let their doctor know. A change in dose or alternative medication might be appropriate, but patients should always make such changes with a doctor’s guidance.
- Counseling (cognitive behavioral therapy). Talking with a trained professional therapist can help people with SAD learn to cope with their symptoms and manage stress.
The following lifestyle habits might relieve some SAD symptoms, too:
- Exercising regularly
- Socializing with friends
- Communicating with one's partner
- Continuing to have sex, or staying intimate in other ways (hand holding, cuddling, etc.)
Wild canids (wolves, coyotes and foxes) are territorial species, and they mark their territories using urine and scat. Since our sense of smell is not nearly as well developed, we rely on snow to be able to track wolves/coyotes and locate their urine marks and scats. Snow also helps keep the DNA intact for lab work.
Like people, wild canids prefer to walk along trails and roads to save energy, especially in winter when snow is deep. Since they want their markings to be easily noticed by other canids to communicate territory boundaries, they often urinate or defecate in noticeable areas.
Look for urine on the side of snowbanks, especially at the intersection of trails or roadways, or against the base of a prominent feature on the landscape (a big tree in the woods, or a boulder of snow that falls back onto the road after plowing). If a urine sample smells skunky, it is fox urine and should not be collected. Do not collect urine samples in areas frequented by dogs.
Look for scat in the middle or on the side of the road. Sometimes a canid will jump on top of the snowbank and defecate there, or walk along a trail for some time and deposit scat at the top of a slope or on top of a big rock. Generally, wolves do not deposit scat in the middle of the woods, but they may defecate near bodies of water, such as shorelines of a big lake, or on a land bridge between two water bodies. Wild canids and domestic dogs eat different things and usually their scat looks noticeably different, so you can collect scats even in areas where dogs might be present.
Wolf and coyote tracks are bigger and more clearly defined than fox tracks, which often look fuzzy and do not sink deeply into snow. Wolf and coyote tracks can look similar to those of medium-to-large dogs. However, dogs often travel in a zig-zag pattern, whereas wild canids usually walk in a very straight line, diverting only occasionally to investigate things on a trail. Wolves and coyotes have narrow chests, and their tracks can look like a single line of individual paw prints; this pattern is called "direct register" because the rear foot falls directly on the spot where the front foot stepped. Dogs rarely show perfect direct register, being less efficient, "sloppy" walkers. Telling wolf/coyote tracks apart from dog tracks can be difficult, but remember that dogs are usually accompanied by a parallel set of human tracks!
Two sets of tracks intersecting: smaller, fuzzier-looking fox tracks (horizontal), and larger wolf/coyote tracks (vertical).
Have you ever stopped to think about how we all feel or experience certain things in the same way as others?
How do you know the color you perceive as being “red” is the same “red” as the person next to you?
What if their red is your green?
While we can’t answer these mind-boggling questions completely, we can explore the brain’s role in processing external stimuli, like colors, textures, sounds, and so on.
This is where your somatosensory cortex comes into play.
Responsible for processing external stimuli (or sensations), it plays an integral role in our day-to-day lives.
Below, we will explore this cortex in more detail, including how it works and what role it potentially plays in prosocial behavior.
The Location of the Somatosensory Cortex
Before we dive into the important role of the somatosensory cortex, it's important to understand where it sits in your brain and how it fits into the brain's overall anatomy.
It goes without saying that your brain is the central hub of your body. And in order to provide so many different functions, it is a complex structure.
Your brain is made up of two halves (or hemispheres), a left side and a right side, which are connected by the corpus callosum. Different regions of the brain perform different functions.
The cerebral cortex makes up the outer layer of your brain, acting almost like the skin on a piece of fruit. Its role is to help with processing and more complex thinking skills, like interpreting the environment, language, and reasoning.
Making up part of this cerebral cortex is the somatosensory cortex, which sits in the parietal lobe, roughly in the middle of the brain.
What’s the Role of the Somatosensory Cortex?
The somatosensory cortex receives all of your body's sensory input. The cells (or nerves) that carry this information from around your body to the brain are known as neurons.
These neurons sense many different things, including audio, visual, pain, and skin stimuli, and send this information to be processed in the somatosensory cortex. However, the location the neurons send this information to in the cortex isn’t random. Rather, each will have a specific place that’s relevant to the type of information being processed.
When these receptors detect a sensation, they send the information through to the thalamus (the part of your brain that relays receptors’ sensory impulses to the cerebral cortex) before they are passed on to the primary somatosensory cortex.
Once it arrives there, the cortex gets to work interpreting the information. Think of it like any type of data that’s sent to someone for analysis.
Furthermore, some of these neurons are incredibly important, which is why a large portion of this cortex is devoted to understanding and processing all of the information from these neurons. For example, high-level data will be analyzed in more detail and will take more time to interpret, while low-level data will go to a less-equipped analyst, requiring less time to be spent on it.
We can explore this in more detail by using Brodmann’s areas.
Brodmann’s Areas for the Somatosensory Cortex
When examining the brain, Korbinian Brodmann, a German neurologist, identified 52 different regions according to how different their cellular composition was. Today, many leading scientists will still use these areas, hence why they are often referred to as “Brodmann’s areas.”
When it came to the somatosensory cortex, Brodmann divided it into four areas: 1, 2, 3a, and 3b (area 3 being subdivided into 3a and 3b).
These numbers were assigned by Brodmann based on the order he examined the area, and, therefore, are not indicative of their importance.
After all, area 3 is often seen as the primary area of this cortex.
Area 3 is responsible for receiving the bulk of the input that comes straight from the thalamus, with the information being processed initially in this area.
Area 3b is concerned specifically with the basic processing of things we touch, while 3a responds to the information that comes from our proprioceptors (these are specialized sensors that are located on the ends of your nerves that are found in joints, tendons, muscles, and the inner ear, relaying information about position or motion so you are constantly aware of how your body is moving or is positioned in a space).
Areas 1 and 2 are densely connected to 3b.
Therefore, while the primary location for any information about the things we touch is sent to 3b, it will also be sent to areas 1 and 2 for further in-depth processing.
For example, area 1 appears to be integral to how we sense the texture of something, while area 2 seems to have a role in how we perceive this object’s shape and size. Area 2 also plays a role in proprioception (this enables us to orientate our bodies in a particular environment without us having to consciously focus on where we are).
Should there be any lesions to these areas of the cortex (those that support the roles mentioned above, in particular) then we may notice some deficits in our senses. For example, if there is a lesion to area 1, we will find a shortfall in our ability to distinguish the texture of things, while a lesion to area 3b will affect our tactile sensations.
Each of the four areas we have mentioned is arranged so that a particular region receives information from a specific part of the body. This is known as a somatotopic arrangement, with the entire body being represented within each of the four areas of the somatosensory cortex.
And as some parts of our bodies are more sensitive, e.g. the hands and lips, this requires more cortex and circuitry to be dedicated to processing any sensations that come from these areas. Therefore, if you look at somatotopic maps that depict the somatosensory cortex, you will notice they are distorted, with the areas of the body that are highly sensitive taking up far more space in this area.
How the Somatosensory Cortex May Contribute to Prosocial Behavior
As we now know, when someone experiences pain, this bodily sensation is processed in their brain. It will also switch on an emotional reaction in their brain, too.
However, when we see someone else in this type of pain, many of these same regions are activated in our own brains. But this differs entirely when you are dealing with a convicted criminal with psychopathic tendencies.
When they see someone else in pain, there is less activation in these specific areas of the brain. They will also show disregard and less empathy toward others.
What does this suggest?
That when these “shared activations” are lacking it can cause issues with a person’s empathy.
In fact, over the years, scientists have developed the belief that we are able to feel empathy for others who are in pain because of these shared activations – and this is why we have a desire to help them.
That said, there is still a lack of evidence identifying how helpful behavior is influenced by these pain-processing areas of the brain. That's why some suggest that empathy-related processes contribute very little to helpful behavior.
To explore this further, one study looked at participants’ reactions to a video of someone being swatted on their hand by a belt while displaying different levels of pain. The participants could then indicate how much pain they felt this person was in by donating money to them – so the more pain they thought they were in, the more money they donated to try and ease this.
Throughout the study, the participants' brain activity (in the somatosensory cortex, in particular) was measured. The results showed that the more activated this area was, the more money they donated.
The researchers then interfered with the participants’ brain activity using various techniques that affected how they perceived the sensations in their hand. This altered their accuracy in assessing the pain of the victim, and it also caused disruption to the link between the perceived pain of the victim and the donations. The amount of money being given was no longer correlating to the pain they were witnessing.
A Role in Social Function
These findings suggest that the area of the brain that helps us perceive pain (the somatosensory cortex) plays a role in our social function. It helps us transform the vision of bodily harm into an accurate perception of how much pain the other person is experiencing. And we need these feelings in order to adapt so we can help others.
This also adds to the ongoing debate about what role empathy plays in helping behavior, suggesting that we are indeed prompted to help by empathy-related brain activity. It allows us to pinpoint who needs our help.
Putting These Findings into Practice
By understanding this relationship between brain activity and helping behavior, we may aid the development of treatments for people who display antisocial behavior, or for children with callous, unemotional traits, which are associated with a general disregard for other people and a lack of empathy.
Posted by Marta on December 18, 2020
In this article I will explain how to calculate the absolute value in python, provide you with some examples and some coding problems where you will need to use the absolute value python function.
The abs() function is a built-in function that returns the absolute value for a given number, this basically means removing the negative sign.
In math, the absolute value of a number is defined as its distance from zero.
To give you some examples, the absolute value of -10 is 10, and the absolute value of 80 is 80.
The syntax is abs(value), where value is the number to which you would like to apply the absolute value operation.
This number can be an integer, a float, or a complex number.
In case the value argument is an integer or a float, the abs function will return a number of the same type.
If the argument is a complex number, the function returns the magnitude of the complex number. The magnitude is the number’s distance from the origin in a complex plane.
integer_var1 = 10
absolute_value1 = abs(integer_var1)
print("Absolute of "+str(integer_var1)+" = "+str(absolute_value1))

integer_var2 = -10
absolute_value2 = abs(integer_var2)
print("Absolute of "+str(integer_var2)+" = "+str(absolute_value2))

Absolute of 10 = 10
Absolute of -10 = 10
float_var1 = 10.5
absolute_value1 = abs(float_var1)
print("Absolute of "+str(float_var1)+" = "+str(absolute_value1))

float_var2 = -10.5
absolute_value2 = abs(float_var2)
print("Absolute of "+str(float_var2)+" = "+str(absolute_value2))

Absolute of 10.5 = 10.5
Absolute of -10.5 = 10.5
complex_num1 = 3 + 2j
absolute_value1 = abs(complex_num1)
print("Absolute( or Magnitude) of "+str(complex_num1)+" = "+str(absolute_value1))

complex_num2 = -3 + 2j
absolute_value2 = abs(complex_num2)
print("Absolute( or Magnitude) of "+str(complex_num2)+" = "+str(absolute_value2))

Absolute( or Magnitude) of (3+2j) = 3.605551275463989
Absolute( or Magnitude) of (-3+2j) = 3.605551275463989
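The magnitude returned for a complex number is simply the Euclidean distance formula, sqrt(real² + imag²). A quick sketch to verify this (the variable names are illustrative):

```python
import math

z = 3 + 2j
# abs() of a complex number equals sqrt(real^2 + imag^2) = sqrt(13) here
magnitude = math.sqrt(z.real**2 + z.imag**2)
print(math.isclose(abs(z), magnitude))  # True
```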
If you pass an invalid argument, meaning anything that is not an integer, float, or complex number, the function raises a bad operand type error (TypeError), like the one below.
string1 = '3'
absolute_value1 = abs(string1)
print(absolute_value1)

Traceback (most recent call last):
  File "/Users/martarey/dev_python/python_projects/absolute.py", line 3, in <module>
    absolute_value1 = abs(string1)
TypeError: bad operand type for abs(): 'str'
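If your code might receive non-numeric input, you can catch this TypeError instead of letting it propagate. A hypothetical helper sketch (the name safe_abs is my own, not a built-in):

```python
def safe_abs(value):
    # Return abs(value), or None when abs() does not support the type
    try:
        return abs(value)
    except TypeError:
        return None

print(safe_abs(-4))   # 4
print(safe_abs("3"))  # None
```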
So far we have seen how to use the absolute value function, its arguments, and its valid inputs. But how is this useful in real life? Excellent question. Here are some coding exercises where you will need to use the absolute value function.
Let's see one example where you could use the absolute value function. Say you need to multiply two numbers, but you can't use the multiplication or division operator. To solve this problem you will need the absolute value. Here is one possible way to do it:
def multiply(multiplicand, factor):
    result = 0
    for index in range(1, abs(factor)+1):
        result = result + multiplicand
    if factor < 0:
        return -result
    else:
        return result

print(" 20 x 5 = "+str(multiply(20,5)))
print(" -20 x 5 = "+str(multiply(-20,5)))
print(" 20 x -5 = "+str(multiply(20,-5)))
print(" -20 x -5 = "+str(multiply(-20,-5)))
print(" 20 x 0 = "+str(multiply(20,0)))

 20 x 5 = 100
 -20 x 5 = -100
 20 x -5 = -100
 -20 x -5 = 100
 20 x 0 = 0
To get the result of the multiplication, we basically need to add the multiplicand as many times as the factor indicates. Therefore we can ignore the sign of the factor number.
Since we have to add up the multiplicand a fixed number of times, dictated by the factor, a for loop will be a good fit.
And lastly, before returning the result value, we need to add back the sign of the factor that we ignored at the beginning.
In real life another common use of the absolute value is calculating how much a value deviates from an average value, which can be useful for anomaly detection. Let’s see an example where you could apply this.
For instance, let's say that you work for a smartphone factory and your role is testing brand-new smartphones. You need to find faulty smartphones and return them to be repaired. Your company considers a smartphone faulty when its temperature varies more than 10 degrees above or below the average temperature over 10 hours.
Given the average temperature for the smartphone and the temperatures for the last ten hours, we could write a function to indicate whether a smartphone is faulty or not. Here is a possible way to achieve this:
def is_smartphone_faulty(average_temp, temp_last_ten_hours):
    for temp in temp_last_ten_hours:
        absolute_difference = abs(average_temp - temp)
        if absolute_difference > 10:
            return True
    return False

smartphone1_faulty = is_smartphone_faulty(15.5, [12.5, 16, 20.5, 13.5, 12.5, 5.5, 3.4, 13.5, 13.9, 14.6])
print(smartphone1_faulty)

smartphone2_faulty = is_smartphone_faulty(15.5, [12.5, 16, 25.5, 13.5, 12.5, 15.5, 13.4, 13.5, 13.9, 14.6])
print(smartphone2_faulty)
In this case the absolute value function is used in line 3 of the above code snippet. The absolute value is used to calculate the difference between the average temperature and each of the temperature measurements.
To summarise, the absolute value function removes the sign of the value you pass in. It can be used with integer, float, and complex numbers. We have also seen some examples and use cases where the absolute value function is useful.
How to Recognize and Manage Allergic Reactions
Recognizing an Allergic Reaction
Allergic reactions happen when a person’s immune system overreacts to something usually harmless, such as food, pollen, bug stings/bites, or medications. Although very common, allergic reactions can be fatal so it is important to recognize the spectrum of symptoms occurring during an allergic reaction, what to do before help arrives and how to prevent incidences in the future.
Mild Allergic Reactions:
- Itchy skin
- Red, itchy eyes
- Nasal congestion, runny nose
Moderate to Severe Allergic Reactions:
- Swelling of the hands, feet, lips, face, or throat
- Nausea, vomiting, diarrhea, or abdominal pain.
- Difficulty breathing, wheezing, or chest tightness
- Throat tightness or hoarseness
- Difficulty swallowing
- Flushing of the skin
- Painful or blistering skin
- Losing consciousness
Severe Allergic Reactions (Anaphylaxis):
Anaphylaxis is a type of shock, meaning that your body is not delivering enough blood to its organs. Symptoms include:
- Drop in blood pressure
- Associated with any of the more severe symptoms above
- Most common symptoms: hives and swelling
What to do Until Help Arrives?
- Immediately call 911
- Ask the person if he or she is carrying an epinephrine auto-injector (EpiPen, Auvi-Q, others) to treat an allergic attack
- Ask whether you should help inject the medication if the person has one
- Injections typically happen in the individual’s thigh
- Have the person lie still on his or her back
- Loosen tight clothing and cover the person with a blanket. Don’t give the person anything to drink
- If there’s vomiting or bleeding from the mouth, turn the person on his or her side to prevent choking
- If no signs of breathing, coughing or movement, begin CPR
- Get emergency treatment even if symptoms start to improve
*After anaphylaxis, it’s possible for symptoms to recur. Monitoring in a hospital for several hours is necessary.
Knowing the trigger of an allergic reaction is essential for preventing the symptoms from recurring in the future, and it will help your doctor come up with the best treatment plan for you. The most common triggers include:
- Recent changes in your diet
- New soaps or detergents
- New medications or supplements
- Environmental exposures
If you happen to have an anaphylactic allergy, keep an EpiPen on you at all times; make sure it is stored in appropriate conditions and set reminders for its expiry dates. It is also beneficial to keep treatment products on hand, such as:
- After Bite Treatment Pads
- After Bite Gel
- Calamine Lotion
- Tecnu Poison Oak & Ivy Cleanser
- After Ivy Poison Ivy Cleanser
- Bite & Sting Extractor Kit
Lastly, be conscientious of those around you, as even small things such as heavily scented perfumes or ignored dietary restrictions can lead to allergic reactions.
Life is Precious. Be Prepared.
How size splits cells: Cells measure surface area to know when to divide
One of the scientists who revealed how plants "do maths" can now reveal how cells take measurements of size. Size is important to cells as it determines when they divide.
In a paper published in eLife, Professor Martin Howard from the John Innes Centre and colleagues from the US, Germany and Singapore discovered that cells measure their surface area using a particular protein, cdr2p. The finding challenges a previous model suggesting that another protein called pom1p senses a cell's length.
"Many cell types have been shown to reach a size threshold before they commit to cell division and this requires that they somehow monitor their own size," says Professor Martin Howard from the John Innes Centre.
"For the first time we can show how cells sense size and what aspect of size they measure, such as volume, length, mass or surface area."
The scientists found that as cells grow, the concentration of the cdr2p protein grows. The cells use cdr2p to probe the surface area over the whole cell. Their experimental findings contest a previously suggested model. |
New research suggests 10,000-year-old flint artifacts found at a Neolithic burial site in Jordan may be human figurines used in a prehistoric cult’s funeral rituals. If confirmed, the trove of more than 100 “violin-shaped” objects would be one of the Middle East’s earliest known examples of figurative art, reports Ariel David for Haaretz.
A team of Spanish archaeologists unearthed the mysterious artifacts at the Kharaysin archaeological site, located around 25 miles from the country's capital, Amman. The layers in which the flints were found date to the eighth millennium B.C., the researchers write in the journal Antiquity.
The study hypothesizes that the flint objects may have been “manufactured and discarded” during funerary ceremonies “that included the extraction, manipulation and reburial of human remains.”
Juan José Ibáñez, an archaeologist at the Milá and Fontanals Institution for Humanities Research in Spain, tells New Scientist’s Michael Marshall that he and his colleagues discovered the proposed figurines while excavating a cemetery.
Crucially, Ibáñez adds, the array of flint blades, bladelets and flakes bear no resemblance to tools associated with the Kharaysin settlement, which was active between roughly 9000 and 7000 B.C. Per the paper, the objects lack sharp edges useful for cutting and display no signs of wear associated with use as tools or weapons.
Instead, the flints share a distinctive—albeit somewhat abstract—shape: “two pairs of double notches” that form a “violin-shaped outline,” according to the paper.
The scientists argue that the artifacts’ upper grooves evoke the narrowing of the neck around the shoulders, while the lower notches are suggestive of the hips. Some of the flints, which range in size from 0.4 to 2 inches, appear to have hips and shoulders of similar widths; others have wider hips, perhaps differentiating them as women versus men.
“Some figurines are bigger than others, some are symmetrical and some are asymmetrical, and some even seem to have some kind of appeal,” study co-author Ferran Borrell, an archaeologist at Spain’s Superior Council of Scientific Investigations, tells Zenger News’ Lisa-Maria Goertz. “Everything indicates that the first farmers used these statuettes to express beliefs and feelings and to show their attachment to the deceased.”
When the researchers first discovered the fragments, they were wary of identifying them as human figurines. Now, says Ibáñez to Haaretz, “Our analysis indicates that this is the most logical conclusion.”
Still, some scientists not involved in the study remain unconvinced of the findings.
Karina Croucher, an archaeologist at the University of Bradford in England, tells Live Science’s Tom Metcalfe that prehistoric humans may have used the flint artifacts to “keep the dead close” rather than as a form of ancestor worship.
Speaking with New Scientist, April Nowell, an archaeologist at Canada’s University of Victoria, says the team’s hypothesis intrigues her but notes that “humans are very good at seeing faces in natural objects.”
She adds, “If someone showed you that photograph of the ‘figurines’ without knowing the subject of the paper, you would most likely have said that this is a photograph of stone tools.”
Alan Simmons, an archaeologist at the University of Nevada, tells Live Science that interpreting the flint pieces as representing the human figure is “not unreasonable” but points out that “the suggestion that these ‘figurines’ may have been used to remember deceased individuals is open to other interpretations.”
Theorizing that the flints might have been tokens, gaming pieces or talismans, Simmons concludes, “There is no doubt that this discovery adds more depth to the complexity of Neolithic life.” |
A life without Google, Facebook, Windows, Amazon, and smartphones—this is a world that seems almost unthinkable today. Technology has transformed society, but how much have software engineers contributed to that?
Software engineers impact society by making life easier and more convenient through the programs and applications they create. They design digital tools to help address problems, improve communication, simplify tasks, and store data. Moreover, they help catalyze society’s development.
In the span of a few decades, the abilities of computers, and the benefits they bring to society, have grown many times over. Keep reading this article to understand the role software engineers play in this technological and societal advancement.
Table of Contents
A computer has two types of components: hardware and software. While hardware is the part we can touch, like the mouse or keyboard, it is the software that makes computers truly functional. Software refers to the programs, applications, and operating systems that keep machines going. The people behind them are software engineers or developers.
Software engineers create and maintain applications and programs such as mobile games, food delivery apps, and business apps. They also build computer networks and systems that allow the creation of applications. They use logic and programming languages to come up with these digital tools.
Computer-related jobs like software engineer or developer are increasingly in demand as our world becomes more digital. This digitization was evident during the pandemic, as jobs, communication, and school all transitioned to an online setup. At that time, software was crucial in giving people the means to do online what they used to do face-to-face.
Gadgets and the Internet have become intertwined with our society’s functions, so it would be difficult, if not impossible, to imagine our lives without them. The people behind these gadgets, their programs, and their further development are impactful and beneficial to society.
Software engineers impact society through the applications, programs, and systems they make. These help address problems, make life convenient, improve communication, and enable us to store large amounts of data. The beneficial effect of their work is that it hastens the development of society.
Many software engineers create applications and programs whose goal is to solve pressing problems. For instance, fossil fuel use contributes significantly to climate change and other environmental issues.
Thus, people are calling for ways to reduce our reliance on it. Software engineers respond to this call by helping other scientists and inventors create electric cars that do not need fossil fuels and reduce gas emissions.
During the pandemic, a significant challenge was how to reduce face-to-face interactions to contain the virus better. People shifted to online setups to avoid virus exposure, but they could only successfully transition because of useful and effective software.
Zoom, Slack, Google Meet, ClickUp, and Google Workspace are just a few of the applications and programs workers use and benefit from as they work online. As more people use them, more software developers and engineers are called on to fix bugs, maintain the programs, and add features.
In one way or another, everyone has experienced how software has automated, simplified, or made life and daily tasks easier. For example, instead of dressing up and going out just to get your favorite meal, it only takes a few taps in the food delivery app to order and have it delivered.
Applications like MS Word or Google Docs also make typing documents and crafting presentations a more effortless and faster task than before. Imagine having to hand-write hundreds of pages!
Artificial intelligence, or AI, is another essential product of software development. Virtual assistants such as Google Assistant or Alexa keep your shopping list updated, and your Spotify playlists are curated the same way. They are products of AI, and the people behind them are software engineers. Although not yet perfected, these tools have enormous potential to help the world.
Communication has taken a leap from ancient messengers and slow mail. Now, it only takes seconds for a friend to get your Twitter DM or Facebook message. You can find childhood friends and long-lost relatives on social media. One can now hold online classes through Zoom or Google Meet video calls.
Communication has dramatically improved because of these programs and applications, and their engineers and developers continue to add more features. Software engineers are behind the updates, released every few months or so, that keep these programs current and bug-free.
Businesses and companies rely heavily on data. They use it to analyze trends and make critical decisions. We even benefit from these large troves of information through Google, Facebook, and other platforms that allow us to retrieve data in mere seconds.
We also take advantage of cloud storage and Google Drive, which can hold many gigabytes of files. But these data do not often come on small 16GB flash drives; they come in large amounts and are stored in large databases and devices. What makes it all possible?
Large databases are products of the work done by software engineers. The computer systems and networks allow this information to be stored for the long term and easily retrieved when needed.
They also create programs that analyze this information much more easily and quickly than before, and they maintain these networks and databases to ensure they remain optimal.
The work that software engineers do has no doubt helped the overall development of society. We have come from an era when computers were non-existent to a world where tasks that once took months or years now happen in minutes. Software allows for the automation of tasks and speeds up many processes.
Because of software’s convenience and advantages, many businesses and sectors have adapted to it. That is why software engineers are in demand even beyond the technological industry.
They can build code and programs for hospitals, businesses, schools, and governments, and can shape these products to fit the sector they serve. Ultimately, their efforts allow each sector to focus on its own work and progress faster.
It has also allowed us to achieve tasks that once seemed impossible: analyzing human genomes, developing medicines and products based on extensive data, and even landing on the moon. Margaret Hamilton, the woman who coined the term “software engineering,” recalled how her program allowed the Apollo 11 astronauts to achieve the victorious moon landing.
The effects and benefits of software I have outlined above are just a few of the many things it has made possible. But we should not place technology on a pedestal and deem it all good.
Software can be malicious if created with harmful intent. The best examples are computer viruses. Malicious software, also called malware, has caused extensive damage to individuals and institutions. The infamous 2000 “ILOVEYOU” virus caused billions of dollars in damage, affecting even the British Parliament and the Pentagon.
Software developers help society through the positive impact of software programs and applications. However, their work is only beneficial if they create these digital tools to be helpful and not malicious. The impact of software depends heavily on the developer and their ethics.
Software can be regarded as a double-edged sword – something beneficial and harmful, depending on its use. Thus, the Association for Computing Machinery has established a “Software Engineering Code of Ethics and Professional Practice.”
This code is in place to guide engineers in developing their products for the good of the public, their clients, and their profession, and to maintain integrity and quality in their work.
Software engineers positively impact and help society in a lot of ways. They create helpful digital tools that cater to society’s needs and address challenges. Their work also allows better communication, bringing the community closer than ever before. They make networks and systems that deal with large amounts of data.
The different sectors of society benefit from the products of software engineers. Their efforts allow these sectors to improve faster, resulting in the more rapid development of society.
New tools for Inclusive Education
Inclusive Education fosters a learning environment where diversity is celebrated and students learn to respect and support one another. It is about how we develop and design our schools, universities, classrooms, programs and activities so that all students learn and participate together. Teaching people with disabilities and those without in the same classroom helps them all to fulfil their potential and socialise naturally.
Learning institutes, communities and governments should move towards a social model of disability, in which people with disabilities are held back by barriers in society rather than by any intrinsic problem. This model shifts away from the harmful narrative that disabilities are a negative thing to be conquered or hidden, treating them instead as another part of neurodiversity to be celebrated and included. Educational inclusion stands in contrast to segregated provision (students housed in separate institutions) or integrated provision (students in the same building but in different classrooms, for example), upholding the right of everyone to an equal education. Following the principles of inclusive education, students are encouraged to develop individual strengths and work towards appropriate goals whilst growing up in a diverse milieu. When diversity is highly valued, everybody experiences the associated benefits, such as greater opportunities for personal growth, inspired creativity and additional educational support. In the bigger picture, society profits from increased engagement in civic participation, employment, and community life.
Across the world nearly 50 per cent of children with disabilities are not in school, compared to only 13 per cent of their peers without disabilities. The barriers that students with disabilities face are highly prevalent and work must be done to secure educational rights. Excluding people based on their perceived disabilities is deeply unjust, and comes at the cost of lost potential for the individual and society. Standing in the way of inclusive education is often a lack of consensus and general inertia, and progress towards equality and inclusion can be slow. At the school level, teachers must be trained, buildings must be refurbished and students must receive accessible learning materials. At the community level, stigma and discrimination must be tackled and individuals need to be educated on the benefits of inclusive education. At the national level, governments must align laws and policies with these aims, and regularly collect and analyse data to ensure students are reached with effective services. These steps are vital in spreading the conviction that inclusion is essential to a healthy community and that everyone has a valuable role to play in society.
The project ‘Tools for Inclusive Education’ (ToFIE) aims to equip schools and teachers, especially in higher education, with the necessary skills to bring students with special learning needs into class. It moves away from traditional teaching methods which do not recognise, or lack the capacity to cater for, the differing needs of students, and seeks to create a diverse learning environment built on excellent teacher competences. ToFIE will enhance the knowledge and competences of educators of any discipline in the field of learning disorders, promote their professional development and supply them with tools they can use in an educational context. Together, the partners (Laurea University, Vantaa, Finland; Creative Learning Programmes Ltd., Edinburgh, UK; European Education and Learning Institute, Crete, Greece; Logopsycom, Mons, Belgium; University of Pitesti, Romania; INCOMA, Seville, Spain) will also create a course which will enhance educators’ skills, improve the support they provide to their students and promote the social inclusion of students with learning disorders.
Research conducted by the partners indicated a high frequency of students with disabilities, but only limited coverage of the policies and adjustments available to them. ToFIE’s objective is to change this and bring inclusive, modern and dynamic learning processes into a multitude of educational settings, shifting the culture towards one of inclusion and respect. To improve the competences of educators, the course created will contain inspiration, new methods, best practices and other weapons for a truly inclusive arsenal. The results will be shared with a wide range of partners to maximise the impact and support as many people as possible.
Inclusive education is not a fully accepted part of the mainstream and too often students with disability are sidelined and treated unjustly. However, the improved outcomes for all arising from inclusion are hard to ignore for long. By raising the level of competences that educators possess, tools such as ToFIE hope to bring about an inclusive future, based on excellence and fairness. |
Students will learn about the first celebration of Earth Day on April 22nd, 1970, and why scientists, politicians, farmers, gardeners, teachers, and other concerned citizens came together to call attention to pollution, waste of natural resources, and other problems of human activity that are harming our environment. In this lesson, discussions and arts projects will reinforce student learning about the problems of humans' past behavior, and especially help them think of options that are environmentally responsible, both as students right now and in the future as adult inhabitants of the Earth.
What are the ways that citizens can be responsible caretakers of the Earth and its ecosystems, which affect our everyday lives as inhabitants of this planet? Which specific steps, and meaningful action can be taken as young students? Also, what other ways can we protect our land, water, and air, as we become responsible adults?
Other Instructional Materials or Notes:
Smartboard and teacher's computer
Student devices for viewing Earth's resources site / 1:1/ before class discussions
- 2 sheets of blue and/or green poster board for each project-based group of students, 3-4 students to a group
- disposable gloves to wear during the "playground litter pick-up" activity
- a reusable grocery bag or cardboard box to carry litter during the playground litter pick-up
- small pieces of trash (water bottles, cups, tin foil, small plastic bags, small pieces of plastic grocery bags, etc., found during the playground litter pick-up)
- nature magazines, such as National Geographic (checked first for images of people disrobed above the waist), Sierra Club magazine, AAA GO magazine, and gardening and travel magazines
- markers, glue, stapler, tape, scissors
- stamp pads of several different colors, if possible (but definitely one working stamp pad)
Lesson Created By: Katherine Bradley
There are many different types of resistors available. In order to identify or calculate the resistance value of a resistor, it is important to have a marking system. Resistor Color Code is one way to represent the value of the resistance along with the tolerance.
Resistor color code is used to indicate the value of resistance. The standards for color coding resistors are defined in the international standard IEC 60062. This standard describes color coding for axially leaded resistors and numeric codes for SMD resistors.
There are several bands to specify the value of resistance. They may even specify tolerance, reliability and failure rate. The number of bands varies from three to six. In the case of a 3-band code, the first two bands indicate the value of resistance and the third band acts as a multiplier.
Three Band Resistor Color Code
- The three band color code is very rarely used.
- The first band from the left indicates the first significant figure of the resistance.
- The second band indicates the second significant number.
- The third band indicates the multiplier.
- The tolerance for three band resistors is generally 20%.
- The color code table corresponding to three band resistors is shown below.
For example, if the colors on the resistor are, from the left, Yellow, Violet and Red, then the resistance can be calculated as
47 × 10² ± 20%. This is 4.7 kΩ ± 20%.
This means the resistance value lies in the range 3760 Ω to 5640 Ω.
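The three-band rule above is easy to mechanize. Below is a minimal sketch of a decoder in Python, using the standard IEC 60062 color-to-digit assignments (the document's color table is an image and is not reproduced here; the function name and dictionary are my own):

```python
# Standard IEC 60062 color-to-digit assignments.
DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9,
}

def three_band_value(band1, band2, band3):
    """Decode a three-band resistor code.

    The first two bands are significant digits, the third is a
    power-of-ten multiplier; tolerance is an implied 20%.
    Returns (nominal ohms, tolerance percent).
    """
    nominal = (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[band3]
    return nominal, 20

# The yellow-violet-red example from the text:
nominal, tol = three_band_value("yellow", "violet", "red")
low = nominal - nominal * tol // 100   # 3760
high = nominal + nominal * tol // 100  # 5640
print(nominal, low, high)  # 4700 3760 5640
```

The integer arithmetic reproduces the 3760 Ω to 5640 Ω range worked out above.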
Four Band Resistor Color Code
- Four band color code is the most common representation in resistors.
- The first two bands from the left indicate the first and second significant digits of resistance.
- The third band is used to indicate the multiplier.
- The fourth band is used to indicate tolerance.
- There is a significant gap between third and fourth bands. This gap helps in resolving the reading direction. The color code table for four band resistors is as shown below.
For example, if the colors on a four band resistor are in the order Green, Black, Yellow and Red, then the value of resistance is calculated as 50 × 10⁴ ± 2% = 500 kΩ ± 2% (Yellow is the 10⁴ multiplier, and Red as a tolerance band denotes 2%).
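Extending the sketch to four bands only requires multiplier and tolerance lookups. These are the standard IEC 60062 assignments (gold and silver appear only as multiplier or tolerance bands); the dictionaries and function name are my own:

```python
# Standard IEC 60062 assignments.
DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9,
}
# Multiplier bands: each digit color means x10**digit; gold and
# silver mean x0.1 and x0.01 respectively.
MULTIPLIER = {color: 10 ** d for color, d in DIGITS.items()}
MULTIPLIER.update({"gold": 0.1, "silver": 0.01})
# Tolerance bands, in percent.
TOLERANCE = {
    "brown": 1, "red": 2, "green": 0.5, "blue": 0.25,
    "violet": 0.1, "gray": 0.05, "gold": 5, "silver": 10,
}

def four_band_value(b1, b2, b3, b4):
    """Decode a four-band code: two digits, multiplier, tolerance.

    Returns (nominal ohms, tolerance percent).
    """
    nominal = (DIGITS[b1] * 10 + DIGITS[b2]) * MULTIPLIER[b3]
    return nominal, TOLERANCE[b4]

# Green-black-yellow-red decodes to 500 kOhm +/- 2%:
print(four_band_value("green", "black", "yellow", "red"))  # (500000, 2)
```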
Five Band Resistor Color Code
High precision resistors have an extra band which is used to indicate the third significant value of the resistance. The rest of the bands indicate the same things as four band color code.
- The first three bands are used to indicate the first three significant values of resistance.
- Fourth and fifth bands are used to indicate multiplier and tolerance respectively.
- There is an exception when the fourth band is either Gold or Silver. In this case, the first two bands indicate the two significant digits of resistance.
- The third band is then used to indicate the multiplier, the fourth band the tolerance, and the fifth band the temperature coefficient in units of ppm/K. The color code table for five band resistors is shown below.
For example, if the colors on a five band resistor are in the order Red, Blue, Black, Orange and Gray, then the value of resistance is calculated as 260 × 10³ ± 0.05% = 260 kΩ ± 0.05%.
Six Band Resistor Color Code
- In case of high precision resistors, there is an extra band to indicate the temperature coefficient.
- The rest of the bands are same as five band resistors.
- The most common color used for sixth band is black which represents 100ppm/K.
- This indicates that for a change of 10 °C in temperature, there can be a change of up to 0.1% in the value of resistance.
- Generally the sixth band represents temperature coefficient. But in some cases it may represent reliability and failure rate.
The color code table for six band resistors is shown below
For example, if the colors on a six band resistor are in the order Orange, Green, White, Blue, Gold and Black, then the resistance is calculated as 359 × 10⁶ ± 5%, 100 ppm/K = 359 MΩ ± 5%, 100 ppm/K.
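The practical meaning of a ppm/K rating can be checked with one line of arithmetic: 1 ppm is one part per million, so the drift is R × tempco × ΔT / 10⁶. A small sketch (the function name is my own):

```python
def resistance_drift(nominal_ohms, tempco_ppm_per_k, delta_t_k):
    """Worst-case resistance change (in ohms) for a temperature swing.

    1 ppm = one part per million, so drift = R * tempco * dT / 1e6.
    """
    return nominal_ohms * tempco_ppm_per_k * delta_t_k / 1_000_000

# A 100 ppm/K resistor drifts 0.1% over a 10 K rise;
# for a 1 kOhm part that is 1 ohm.
print(resistance_drift(1000, 100, 10))  # 1.0
```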
Tolerance Letter Coding for Resistors
The letter code for tolerance is shown below
- B = 0.1%
- C = 0.25 %
- D = 0.5 %
- F = 1 %
- G = 2 %
- J = 5 %
- K = 10 %
- M = 20 %
The letters K and M here should not be confused with kiloohms and megaohms.
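The letter code maps directly to a min/max range for a part. A minimal sketch (dictionary taken from the list above; the function name is my own):

```python
# Letter-to-tolerance mapping, in percent, from the list above.
TOLERANCE_LETTER = {
    "B": 0.1, "C": 0.25, "D": 0.5, "F": 1,
    "G": 2, "J": 5, "K": 10, "M": 20,
}

def tolerance_range(nominal_ohms, letter):
    """Min/max resistance for a nominal value and tolerance letter."""
    tol = TOLERANCE_LETTER[letter]
    return nominal_ohms * (100 - tol) / 100, nominal_ohms * (100 + tol) / 100

# A 4.7 kOhm 'J' (5%) part may measure anywhere in this range:
print(tolerance_range(4700, "J"))  # (4465.0, 4935.0)
```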
SMD Resistor Code
There are three types of coding systems used to mark SMD Resistors. They are
- Three digit coding
- Four digit coding
- E96 coding
Three Digit Code
In three digit coding, the first two numbers indicate the significant digits of the resistance and the third number is a power-of-ten multiplier: ×10 if the digit is 1, ×100 if the digit is 2, ×1000 if the digit is 3, and so on.
A three digit coded SMD resistor is shown below
Some examples of three digit codes are
450 = 45 × 10⁰ = 45 Ω
221 = 22 × 10¹ = 220 Ω
105 = 10 × 10⁵ = 1 MΩ
If the resistance is less than 10Ω then the letter R is used to indicate the position of the decimal point. For example
3R3 = 3.3Ω
47R = 47 Ω
Four Digit Code
For higher-precision resistors, a four digit code is marked on them. The calculation is similar to the three digit code: the first three numbers indicate the significant value of the resistance and the fourth number indicates the multiplier.
A four digit coded SMD resistor is shown below
Some examples under this system are
4700 = 470 × 10⁰ = 470 Ω
1001 = 100 × 10¹ = 1 kΩ
7992 = 799 × 10² = 79.9 kΩ
For resistors less than 100 Ω, R is used to indicate the position of the decimal point.
15R0 = 15.0 Ω
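Both digit systems above, plus the R decimal-point notation, can be handled by one small function. A sketch (the function name is my own):

```python
def smd_value(code):
    """Decode a 3- or 4-digit SMD resistor marking.

    'R' marks the decimal-point position for low values; otherwise
    the last digit is a power-of-ten multiplier applied to the
    leading significant digits.
    """
    if "R" in code:
        return float(code.replace("R", "."))
    return int(code[:-1]) * 10 ** int(code[-1])

# The examples from the text:
for code in ("450", "221", "105", "4700", "1001", "7992", "3R3", "15R0"):
    print(code, smd_value(code))
```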
The Electronic Industries Association (EIA) specified a standard preferred value system for resistors, known as the E series. IEC 60063 is an international standard which defines the preferred number series for resistors (and also for capacitors, inductors and Zener diodes). The coding is based on tolerance values, and the different E series available are
- E3 50% tolerance
- E6 20% tolerance
- E12 10% tolerance
- E24 5% tolerance
- E48 2% tolerance
- E96 1% tolerance
- E192 0.5%, 0.25%, 0.1% and tighter tolerances
- E3 coding is no longer in use and E6 coding is very rarely used.
- The E96 coding system is used for high precision resistors with a tolerance of 1%.
The EIA E96 marking system uses its own three-figure code. The first two figures are numerals which, via a lookup table, indicate the three significant digits of the resistance value. The third figure is a letter used to indicate the multiplier.
The EIA E96 markings on an SMD resistor are shown below
The EIA 96 code scheme for multipliers is shown below
The EIA 96 code scheme for significant values of resistance is shown below
92Z = 887 × 0.001 = 0.887 Ω
38C = 243 × 100 = 24.3 kΩ
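The full 96-entry value table need not be typed in: the E96 significant values follow the preferred-number formula round(100 · 10^(i/96)), and the multiplier letters have standard assignments (some letters, like Y/R and X/S, are alternates). A sketch; the generated table should be checked against a published EIA-96 chart before relying on it:

```python
# E96 significant values, generated from the preferred-number formula.
# Codes 01..96 index into this list.
E96 = [round(100 * 10 ** (i / 96)) for i in range(96)]

# Standard EIA-96 multiplier letters.
MULT = {
    "Z": 0.001, "Y": 0.01, "R": 0.01, "X": 0.1, "S": 0.1,
    "A": 1, "B": 10, "H": 10, "C": 100, "D": 1000, "E": 10_000, "F": 100_000,
}

def eia96_value(code):
    """Decode an EIA-96 marking: two-digit value code plus multiplier letter."""
    return E96[int(code[:2]) - 1] * MULT[code[2]]

print(eia96_value("92Z"))  # 0.887 ohm
print(eia96_value("38C"))  # 24300 ohm, i.e. 24.3 kOhm
```

Note that code 92 maps to 887 and code 38 to 243, matching the worked examples above.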
Color Coding Table
The complete color coding table is shown below |
Computing is an essential part of the curriculum and, as a result of this, the resources available to the children are of the highest quality. In school the children have access to computers and IPads.
The Computing curriculum now focuses on three key strands and encourages children to understand how computers work, how to create and develop software and how to use computers efficiently and safely. These areas are:
• Information Technology: Using applications, sending messages, blogging, photography and film making.
• Computer Science: Coding, programming and control.
• Digital Literacy: Understanding computers, e-safety and cyber bullying.
“I first found using code very tricky, but now I can make all sorts of games and animations, just by setting a few instructions in the right order!” Sam, Year 6
“I really like coding because I’ve never done it before. It is much more exciting than making a PowerPoint.” Harry, Year 3
“When the Police came to school to talk about E-safety and Cyber-bullying, I found it really helpful, because I know what to do now if it happens to me.” Ella, Year 6
“My favourite part of computing is making instructions for my robot to follow.” Daniel, Year 6 |
Lowland Calcareous Grassland
Lowland calcareous grasslands are found on thin base-rich soils derived from underlying limestone rocks at altitudes below 250 m. They are characterized by a variety of lime-loving plant species such as thyme Thymus praecox, quaking grass Briza media, salad burnet Sanguisorba minor and hoary plantain Plantago media.
Calcareous grasslands support an exceptionally rich diversity of plants, including some rare species that are restricted entirely to lime-rich soils. The habitat is also important for many birds and for invertebrates, such as the northern brown argus butterfly Aricia artaxerxes.
The total amount of lowland calcareous grassland within the UK has been estimated at 33 000 - 41 000 ha. The majority of this is chalk grassland, found predominantly in the lowlands of southern England. Lowland calcareous grassland suffered significant losses during the last century. For example, there was a 20% loss in English chalk grassland sites between 1966 and 1980.
Within the Region, calcareous grasslands are mainly found on the Carboniferous limestone which runs north-east from the North Pennines towards the Northumberland coast, on the Magnesian Limestone of County Durham, and in association with the Whin Sill where they often form a mosaic with areas of acid grasslands.
The Magnesian Limestone grasslands found in the lowlands of County Durham and Tyne and Wear are of particular national interest. Magnesian Limestone grassland is unique to Britain. It is one of the UK’s scarcest and most restricted habitat types, of which the North East Region is the stronghold; only 279 ha are represented in the national SSSI series and two thirds of this lies within County Durham and Tyne and Wear. These grasslands are characterised by the presence of blue moor-grass Sesleria albicans and small scabious Scabiosa columbaria.
Lowland calcareous grasslands in the Region have a highly fragmented distribution, due to variation in overlying drift deposits and past losses to agricultural improvement. The most important sites are designated as SSSIs.
- Neglect and absence of grazing can lead to scrub encroachment or bracken invasion and a loss of grassland areas.
- Overgrazing can cause soil erosion and lead to a loss of species-richness and structural diversity within the grassland.
- Agricultural intensification in the form of fertilizer use, herbicide application, ploughing and re-seeding has historically been a major source of losses of grassland sites and may still be damaging and destroying some grasslands.
- The in-filling of abandoned limestone quarries (eg, by use as landfill sites) where calcareous grasslands have become established is a threat in some localities.
- Quarrying may still be a threat in some areas.
- Acidification and nitrogen enrichment caused by atmospheric deposition may have a deleterious effect on calcareous grasslands, but potential impacts have not been fully assessed.
Opportunities for protection and enhancement
- The MAGical Meadows Project, run by the Durham Biodiversity Partnership, aims to protect, enhance and create Magnesian Limestone grasslands within County Durham and Tyne and Wear.
- Payments for the sympathetic management of lowland calcareous grasslands are available under the Environmental Stewardship Scheme. Natural England makes payments through its Wildlife Enhancement Scheme for the management of the majority of Magnesian Limestone grassland SSSIs in the Region.
- Restoration of aggregates sites can provide an opportunity to create new Magnesian Limestone grasslands.
- The Magnesian Limestone grasslands of Thrislington and Cassop Vale are managed as NNRs by Natural England. A number of other lowland calcareous grassland sites are managed as nature reserves by other conservation bodies, eg the Durham Wildlife Trust reserve at Bishop Middleham Quarry. |
“We are living on this planet as if we have another one to go to.” – Terry Swearingen. The main cause of climate change is the huge impact of humans. There is evidence that about 8,000 years ago the climate of the Middle East and Antarctica was much wetter than today (Cam). Large aquifers were plentiful, as were fossil fuels. Droughts have become much more severe and common in many parts of Africa, and climate analysts warn that more hardship is expected next year as the effects of climate change take hold (Inews). Temperature and rainfall patterns are changing, but the changes are unpredictable, and there have been extreme floods and droughts. Humans are responsible for climate change because human activity produces the pollution that leads to greenhouse gases and deforestation.

The air pollution that humans emit leads to climate change. One pollutant associated with climate change is a component of smog called sulfur dioxide. Sulfur dioxide and related chemicals are a known cause of acid rain, and acid rain caused by humans is changing how soils function all over the world. If calcium is lost from soil, that depletion will impact the ecosystem, and the acid rain behind it is also part of the climate change problem. Tiny atmospheric particles called aerosols are another branch of air pollution in our atmosphere. Most aerosols are produced by natural processes, such as erupting volcanoes, and by human agricultural and industrial activity. When the soil's capacity to buffer acid is exhausted, acidification makes aluminium available in the soil. “Aluminum is harmful to everything, from diatoms to spruce trees,” said Lawrence, a scientist for the U.S. Geological Survey (Cornell). Air pollution includes greenhouse gases, among them carbon dioxide, which is emitted in the exhaust of the trucks, buses and cars that humans use.
In addition, a researcher and political campaigner named Hallam said, “air pollution is killing people now and it is related to wider issues of climate change and our reliance on fossil fuels. We have to take action now and make people aware of what we are facing” (The Guardian). By trapping the earth's heat in the atmosphere, greenhouse gases lead to warmer temperatures and to signs of climate change such as rising sea levels, heat-related deaths and severe weather. Scientists agree that today's warming is mostly caused by humans placing huge amounts of carbon, in the form of greenhouse gases, into the atmosphere, such as when we burn fossil fuels, oil and gas. Scientists have gathered enough evidence, and improved their methods of distinguishing natural from human factors, to say with confidence that the main problem is heat-trapping gas and that humans are responsible for the average warming over the past decades (Ucsusa). Greenhouse gases in the atmosphere have increased since the beginning of the industrial age, and almost all of this increase is due to human activities (Epa). Garbage is a major contributor to climate change because landfills emit methane, a powerful greenhouse gas. Every time someone throws out garbage or waste, they are contributing to climate change; it is said that about one third of the greenhouse gases emitted come from food waste alone (Yournec). The architect McDonough states, “the problem of carbon in the atmosphere is not carbon's fault. Climate change is the result of breakdowns in the carbon cycle caused by us. It is we who have made carbon toxic — like lead in our drinking water or nitrates in our rivers.” On top of this, there were major climate disasters in the U.S. in 2017, and it has been said that NOAA's National Centers for Environmental Information did not connect human-made climate change to the big disasters that took place that year.
However, whether burning fossil fuels increased the frequency of climate disasters is controversial among many scientists (Al Jazeera). Deforestation has many causes, including forest fires, illegal logging, and mining, and it compounds the greenhouse gas problem. The surplus of these greenhouse gases in our atmosphere is harmful to life and the environment. With a huge amount of these gases being released into the atmosphere by human activity, the greenhouse effect is raising the temperature. Numerous plants and animals cannot adapt quickly to temperature changes in their environment; as a result, many are becoming endangered, and some extinct (Nature). Each year, fires burn millions of hectares of forest all around the world. Years of fire suppression have allowed a huge, unnatural buildup of vegetation that makes fires burn more intensely, with consequences for the climate, biodiversity and the economy. Illegal logging happens in all types of forests across all continents, destroying wildlife and nature and harming communities, and the harvested wood goes to consumer markets. Also, due to high mineral prices, there has been a lot of mining in forests. Since mining projects are supported by physical and organizational infrastructure such as buildings and roads, they put pressure on freshwater ecosystems and forests (WWF). “The world's rain forests could completely vanish in a hundred years at the current rate of deforestation” (National Geographic).
At their most basic, rivers are geographical phenomena which cause fresh water to move through dry land from one place to another, usually as part of a natural drainage system. Every continent on Earth, including Antarctica, has rivers.
There are no hard and fast rules regarding what actually constitutes a river. The word ‘river’ is, in effect, a catch-all term; creeks, brooks, and streams are, in effect, different names for small rivers.
Throughout history, rivers have played a large part in human development and day to day life – and they continue to do so today. Many of the ancient civilizations including those founded in Egypt, China, India, and Rome grew up along large rivers, which provided what seemed to be an inexhaustible source of fresh drinking water and, later, a convenient means of transportation between various destinations.
Today, hydroelectric generating plants built on rivers which have been dammed account for about 25% of the world’s available electricity; rivers also account for around 30% of the planet’s available fresh water. In the United States alone, inland waterways made up of a complex system of connected rivers are used to transport around 650 million tons of goods worth an estimated $75 billion each year. In the last few years, multi-million dollar industries have grown up around river cruises and whitewater river rafting.
The Basic Anatomy of a River
While individual rivers have their own unique characteristics, all rivers have a couple of things in common. All rivers flow downhill due to gravity (although some very fast flowing rivers will go uphill for very short distances, in much the same way as a fast rolling ball will be able to travel over short inclines due to its speed). All rivers also have a source where it begins (elevated, but sometimes not by much) and a ‘mouth’ into which it drains, such as a sea, ocean, lake, another larger river, or even a desert.
The difference between the elevation of a river’s source and mouth will often be a determining factor in a river’s speed and size. For example, the Amazon River is the largest river in the world (it discharges over 55 million gallons of water per second) and is considered by many to be the fastest-flowing; its source is high in the Andes Mountains of Peru and it flows into the Atlantic Ocean, at sea-level, off the coast of Brazil about 4,000 miles away.
Rivers can have a number of sources including lakes, run-off from melting ice and snow (particularly in mountainous and highland regions), smaller streams and brooks, and glaciers. Many large rivers have their source where two smaller rivers converge: for example, the source of the Ohio River is where the Monongahela and Allegheny rivers meet. Rivers can be short or long, wide or narrow, fast or slow.
There are 165 ‘major’ rivers in the world, and literally tens of thousands of smaller ones. The longest river in the world is the Nile River in northeastern Africa at just over 4,100 miles, while the shortest is the Roe River in central Montana at just over 200 feet.
A common misconception is that all rivers run north to south (or south to north below the equator); however, this is a myth. Many of the world’s great rivers flow east to west or west to east.
So what are some of the different types of rivers in the world today?
When the average person thinks of a river, they are most probably thinking of a perennial – sometimes also referred to as a permanent – river. This type of river was perhaps best described by lyricist Oscar Hammerstein II in the song Ol’ Man River: “Long ol’ river forever keeps rollin’ on.”
Simply put, a perennial river is a river that normally never goes dry and continues to flow throughout the year. Although the height and flow-rate can be affected by heavy rains or lengthy periods of drought, in most cases a perennial river has a stable source or flows through areas where the rainfall exceeds the evaporation rate, which ensures its continuous flow.
Many perennial rivers are quite old – predating human beings by tens of millions of years – and over the millennia have cut paths for themselves which have resulted in sometimes radical geological alterations. Perhaps the most familiar and graphic example of this is the Grand Canyon which was carved by the flow of the Colorado River to a depth of over 6,000 feet and a width of about 18 miles in some places over the course of an estimated 70 million years.
While not as dramatic as the Grand Canyon, all perennial rivers have well-established river beds (which are below the water line) and banks (which are above) through which they flow. Unless interfered with by man, perennial rivers are continually deepening and widening (albeit sometimes imperceptibly to the naked eye) as their beds and banks are eroded slowly over time by the continual flow of water.
Perennial rivers are often dammed to restrict their flow for the purpose of creating reservoirs, aiding in irrigation, improving navigation, and building hydroelectric plants. Perennial rivers have been dammed for centuries, with the first known dam dating back to ancient Mesopotamia, having been built around 3,000 BC.
Sadly, the flow-rate and depths of some of the world’s perennial rivers have been decreasing in the last couple of decades due to increasing manipulation of the river by humans for the purposes of consumption and irrigation, and climate change factors.
Periodic, also often referred to as ephemeral or intermittent, rivers differ from perennial rivers in that they do not flow throughout the year. Generally speaking, periodic rivers will only flow above the surface anywhere from one quarter to three-quarters of the year, and some will only flow for a few days at a time.
Regardless of the frequency of above-ground flow, most periodic rivers will have a well-established bed and bank, and will often have water beneath them. The river beds remain dry until significant rain or snowmelt alters the amount of water either at the source or in the level of the groundwater beneath them. Most periodic rivers have predictable seasons of flow which remain more or less constant year after year.
Periodic rivers are most often found in arid areas where there is minimal rainfall. While not flowing consistently throughout the year, it is not uncommon for a periodic river with water beneath it to have pools within its banks even when the river itself is not flowing.
Periodic rivers are often quite short, but can also be relatively long. For example, the Ugab River in Southern Africa is slightly over 310 miles long. Periodic rivers have been increasing in number over the last fifty years or so due to environmental conditions, climate change, human manipulation of perennial rivers, and other factors.
An episodic river is a river that only flows following a particular event (or episode) such as heavy rainfall, early snowmelt, or swollen runoff channels from other rivers. They differ from perennial and periodic rivers in that they usually have no stable source, usually have no groundwater beneath them, and are almost completely dependent on climatic conditions for their water.
Some episodic rivers are thought to be quite old, have well-defined beds and banks, and were once probably perennial or periodic rivers. Most, but not all, episodic rivers do not have predictable times of flow.
Although many episodic rivers will flow for very short periods of time every year or two, it is not uncommon for them to remain completely dry for years and sometimes decades. For example, the 450-mile long Nossob River in the Kalahari region of Southern Africa has not had a significant flow since the late 1980s.
Not named for its vacation or cruise appeal, an exotic river is one that flows through a dry environment in which not much other freshwater exists. Exotic rivers usually have their sources in humid or mountainous areas, and travel through extremely arid environments, such as a desert. Most of the better known exotic rivers are perennial, although some significant examples are periodic.
Exotic rivers have played a crucial part in the formation of many of the world’s most important civilizations both by providing drinking water to large numbers of people and irrigating otherwise barren land to provide food for previously nomadic humans and livestock. Ancient civilizations that grew along the banks of exotic rivers include Egypt (the Nile), Mesopotamia (the Tigris and Euphrates) and China (Yellow or Huang, and Wei).
Today, many of these same rivers (and other exotic rivers) still provide drinking and irrigation water to deserts and areas with little rainfall, as well as providing transportation and electric power to tens of millions of people via hydroelectric generating stations.
Most of the major deserts in the world (except those located in Australia) have at least one exotic river running through them. In the United States, the Colorado River is considered to be an exotic river as it flows through many barren and arid areas of Arizona, Utah, and Nevada.
A tributary (also referred to as an affluent) river is any river that does not flow into an ocean or sea, but rather has its mouth at a lake or another river. Although the name might convey the idea that tributary rivers are small, in reality, many of the major rivers in the world are tributaries of other rivers.
A prime example of this is the Missouri River. The longest river in North America at just over 2,340 miles and one of the most powerful, the Missouri flows west to east from the Rocky Mountains in Montana to St. Louis, Missouri, where it empties into the Mississippi River, making it a tributary of the Mississippi. Along its course, the Missouri has hundreds of tributaries of its own.
In some cases, the confluence (or meeting) of two separate tributary rivers will create a new river. For example, the Ohio River is formed in Pittsburgh, Pennsylvania by the confluence of the Monongahela and Allegheny rivers – at which point the two tributaries end and the new river starts. The Ohio then, in turn, becomes a tributary of the Mississippi River about one thousand miles later in southern Illinois.
Effectively the opposite of a tributary, a distributary river branches off from another river to form a brand new river. The point at which the distributary branches off from the main river is generally referred to as a fork. The distributary river itself is sometimes referred to as a channel or arm.
Generally speaking, a distributary river will have a steeper grade than the river that feeds it; the force of gravity will cause a percentage of the water in the main river to flow down into the distributary. In some cases, a distributary river will re-join the main river from which it was created further along its route, while in other cases the distributary will have its mouth somewhere else, sometimes traveling all the way to the sea.
Because of their steeper grades, distributary rivers will often grow at a faster rate than the main rivers that feed them, drawing off more and more water over time. This can sometimes have disastrous effects when it comes to modern man’s manipulation of a river.
An example of this is the Atchafalaya River in Louisiana, a distributary of the Mississippi. As it has grown, the Atchafalaya has drawn off an increasing amount of the Mississippi’s flow causing experts to worry that eventually the decreased flow of the Mississippi would endanger the ports at New Orleans and Baton Rouge. A dam, called the Old River Control Structure, was completed in the mid-1960s to control the amount of water flowing into the Atchafalaya at the point where the two rivers fork.
An underground (or subterranean) river is, as the name suggests, a river that has its banks and bed beneath the surface of the earth. Naturally occurring underground rivers are usually part of an otherwise above ground river that travels for a distance beneath the earth via caves, or disappearing into sinkholes (also called cenotes) and continuing on underground before re-appearing above ground further on.
Most underground rivers are relatively short, although there are exceptions. For example, the longest underground river in the world is generally considered to be the Sistema Sac Actun, which flows for 95 miles through the Sac Actun cave system in Quintana Roo, Mexico. Perhaps the best known (and certainly most exploited for tourism purposes) is the Puerto Princesa Subterranean River in the Philippines. Named a UNESCO World Heritage Site in 1999, this river is over 5 miles long, much of which is navigable by boat.
Some underground rivers have actually been created by man building cities above a flowing river, effectively making them artificially subterranean. A well-known example of this is the UK’s River Fleet, which travels about four miles under London’s streets before emptying into the River Thames.
An aqueduct (not to be confused with the bridge of the same name) is a manmade river, or watercourse, which conveys water from a source – usually a lake, natural river, or reservoir – to where it is needed. They are most frequently constructed to irrigate large tracts of farmland, or supply cities and their suburbs with drinking water.
Aqueducts date back thousands of years when they were most commonly used for irrigation. Examples of ancient aqueducts can be found in Egypt, Rome, India, China, and South America dating back as far as 600 BC. Ancient aqueducts were often simple short ditches, sometimes reinforced with rocks, into which their sources could flow.
Modern aqueduct systems are often quite complex feats of engineering that travel long distances, sometimes across deserts, over valleys, and through mountains. They often utilize reinforced concrete or metal ‘ditches’, above ground pipes, and bridges to carry water over and through various types of terrain.
Modern aqueducts are often massive projects and can be found in many countries the world over. Notable aqueducts in North America include the Colorado River Aqueduct, which travels nearly 250 miles to supply Los Angeles with drinking water, and the Catskill Aqueduct, which supplies New York City with around 400 million gallons of water per day.
Though not very common (and not to be confused with the pipes that supply water to taps in cities and other municipalities) manmade pipeline ‘rivers’ that convey fresh water considerable distances do exist, and so are mentioned here.
The best, and largest, example is the Great Man-Made River in Libya. Beginning at the Nubian Sandstone Aquifer and running underground for over 1,700 miles through the Sahara Desert, this manmade river supplies over 1,500 wells along its route and is reportedly built to transport over 1.5 billion gallons of water a day (although it currently handles far less than that) for irrigation and drinking. It is the main water supply for millions of Libyans in the cities of Tripoli, Sirte, Benghazi, and the surrounding areas, and for dozens of smaller towns and villages in the Sahara.
Rapids (also often called whitewater rapids, or just whitewater) are sections of a river which flow at a much faster speed than other sections due to the steepness (or downward grade) of the river’s bed. The whitewater effect is formed by rapidly flowing water splashing over rocks in the river’s bed; this causes the formation of bubbles, which turns the surface of the river white.
Rapids are classified by the American Whitewater Association using their six-class International Scale of River Difficulty. Class 1 is a relatively mild rapid, which can be swum by strong swimmers, while Class 6 is considered potentially deadly and usually only attempted with the use of rafts by experts. The severity of rapids is determined using a number of factors including water speed, the structure of the banks, and the severity of the drop.
Rapids of varying severity are found in rivers all over the world, and will often run either continuously or intermittently for many miles. It is not uncommon for a single river to have dozens of stretches of rapids which vary in severity and classification.
In the last 60 years or so, a multi-million dollar per year industry has grown up around whitewater rafting and canoeing. Most of these businesses offer trips down Class 2, 3, and some Class 4 rapids. More difficult trips down Class 4 and Class 5 rapids are often billed as extreme whitewater rafting adventures.
Winding (also called meandering or bending) sections of rivers are almost the exact opposite of rapids; they are usually slow-moving parts of a river with a very minor downward grade and are most often found in plains or lowlands. In most cases, winding sections of a river will occur closer to the river’s mouth than to its source (at which point the flow is often heaviest) where the land levels off as it approaches sea level.
The bending of a river from its otherwise more or less straight path can have many causes including rocks, minor differences in the downward gradient of the land, and the make-up of the soil and terrain through which the river is flowing.
As the old adage says, “Water seeks the path of least resistance,” and when something obstructs its flow, a river that doesn’t have the power to overcome the obstacle will simply alter its course to an easier route. Over the millennia, the river will cut its way down to bedrock for its bed, establish banks, and wind its way along.
Human management and manipulation of rivers is also sometimes used to bend a river. This is often done by the use of dams, sluices, levees and diversion channels in densely populated, low-lying areas to prevent flooding, or for business and agricultural purposes.
Creeks, Brooks and Streams
As previously stated, since there are no hard and fast standards defining what it takes to be a river, creeks, streams, and brooks can also be called rivers. These entities can be perennial, periodic, or episodic; tributary or distributary; and will often have well-defined beds and banks. In some cases, particularly in mountainous regions, the confluence of many creeks, brooks, and streams will serve as the source of a major river.
Although usually quite small, some creeks, brooks, and streams can be quite long and are often very important sources of fresh water. Australia’s Billabong Creek (believed by many to be the longest creek in the world) is around 350 miles long and is a major tributary of the Edward River. In the United States, Lodgepole Creek, at over 270 miles, is considered the longest creek in the country and runs through Wyoming and Nebraska before it empties into the South Platte River in Colorado.
The term inland waterway refers to a river, portion of a river or the combination of several rivers (in which case it is called an inland waterway system) used to transport cargo and people by boat, barge, or other floating vessels. Inland waterways have been used extensively throughout history, and remain important today on most continents.
Inland waterways must be navigable, meaning that they must be sufficiently wide and deep to accommodate the hulls of the vessels used on them, as well as being relatively slow moving. Inland waterways must also not have rapids or waterfalls, although these are sometimes compensated for through the use of locks.
In the United States, the inland waterway system comprises over 25,000 miles of navigable rivers and lakes, almost all of which are located in the central and eastern parts of the country. The largest inland waterway in the US is the Mississippi River System, which is used extensively for the transportation of about half a billion tons of oil and gas, grain, coal, and other cargo every year.
Lisa has a Bachelor’s of Science in Communication Arts. She is an experienced blogger who enjoys researching interesting facts, ideas, products, and other compelling concepts. In addition to writing, she likes photography and Photoshop.
Brain hemorrhage is a condition in which a blood vessel in the brain bursts, causing bleeding inside the brain. Various types of hemorrhage may occur inside the brain. Brain hemorrhage is diagnosed through neurological examination, imaging techniques, and lumbar puncture.
Following are the types of intracranial hemorrhage:
- Epidural hematoma: Epidural hematoma is a condition caused when blood collects between the skull and the outer covering of the brain. An epidural hematoma is caused when there is an impact on the skull that may lead to a skull fracture.
- Subarachnoid hemorrhage: It is a life-threatening condition that occurs when there is bleeding in the subarachnoid space, the space between the arachnoid and the pia mater. The condition can be caused by arteriovenous malformation or head injury.
- Subdural hematoma: This condition occurs when blood collects between the dura mater and the arachnoid. It is caused by a severe head injury, during which the small bridging veins get torn, leading to the collection of blood below the dura mater.
- Intracerebral hemorrhage: This type of hemorrhage occurs within the tissues and ventricles of the brain. The condition may be caused by head injury or a brain tumor. It is also caused by untreated hypertension. Following are the types of intracerebral hemorrhage:
- Intraparenchymal hemorrhage: This type of hemorrhage occurs within the brain tissue and is usually caused by hypertension or arteriovenous malformation.
- Intraventricular hemorrhage: This type of hemorrhage occurs in the ventricles of the brain and is caused by vascular abnormalities.
- Head injury: Traumatic brain injury is the leading cause of brain hemorrhage. It is caused by a blow to the head, which results in tearing of the vessels as the brain crashes against the inside of the skull, damaging internal vessels.
- Hypertension: Unmanaged hypertension causes severe damage to the vessels resulting in their rupture. People with prolonged untreated hypertension are at higher risk for a brain hemorrhage.
- Vascular malformation: Arteriovenous malformation is the condition present from birth. This condition is characterized by weak blood vessels. This condition is diagnosed at the time of presentation of symptoms.
- Bleeding disorders: Various bleeding disorders cause bleeding inside the brain, including hemophilia, leukemia, and thrombocytopenia. However, this cause of hemorrhage is less prevalent compared to traumatic injury.
- Medications: Various medications alter the properties of the blood and its clotting mechanism, thereby increasing the risk of bleeding. These include aspirin and other blood-thinning agents, prolonged use of antibiotics, and radiation therapy.
- Aneurysm: This condition is caused by weakening and swelling of the arterial walls. The risk of these vessels bursting is high.
- Liver disease: Presence of liver disease increases the risk of brain hemorrhage.
- Brain tumor: Intratumoral vascularization and thin-walled vessels, along with necrosis in the tumor, cause hemorrhage in the brain.
Following are the symptoms associated with brain hemorrhage:
- Moderate to severe headache
- Nausea and vomiting
- Confusion and disorientation
- Difficulty swallowing and breathing
- Numbness and weakness
- Vision problems
- Lack of coordination
- Loss of consciousness
- Stupor and lethargy
- Poor memory and reduced concentration
How to diagnose
- Physical evaluation: The physical symptoms are evaluated to assess the neurological status of the patient. The physician will identify sudden-onset symptoms including loss of consciousness, neurological deficit, coma, and seizures. Sudden-onset symptoms in the patient are probably due to hemorrhagic stroke.
- Imaging techniques: Various imaging techniques are used to diagnose brain hemorrhage. A CT scan is done to confirm bleeding inside the brain, while an MRI scan reveals the actual cause of the bleeding. Aneurysms and vascular malformations can be identified through an angiogram.
- Lab tests: Various lab tests are done, including prothrombin time, electrolyte levels, tests for the presence of infection, and a complete blood cell count.
- Lumbar Puncture: Lumbar puncture is done to identify the presence of bilirubin in spinal fluid which is also a parameter for diagnosing intracerebral hemorrhage.
Risks of neglecting brain hemorrhage
Following are the complications experienced by patients with neglected brain hemorrhage:
- Hematoma expansion: A hematoma generally forms as a result of hemorrhage. If the hemorrhage is neglected, the hematoma expands; hematoma expansion is a factor for poor prognosis of the condition.
- Increased intracranial pressure: There is increased intracranial pressure due to hemorrhage. This leads to symptoms such as a headache. Unmanaged increased intracranial pressure results in further damage in the brain and may also cause death.
- Increased incidence of seizures: Occurrence of seizures is one of the symptoms of brain hemorrhage but when hemorrhage is neglected, the frequency of seizures increases.
- Increased blood pressure: Various mechanisms, such as increased intracranial pressure and activation of neuro-vegetative signaling, increase blood pressure.
- Infection: Infection is generally caused after brain hemorrhage due to pulmonary edema and dysphagia. Invasive procedures are also responsible for causing infection.
The evolution of a hematoma is divided into five distinct phases, which can be distinguished through MRI. These stages include:
- Hyperacute: Less than 12 hours after onset
- Acute: 12 hours to 2 days after onset
- Early subacute: 2 days to 7 days after onset
- Late subacute: 8 days to 1 month after onset
- Chronic: More than 1 month after onset
The evolution of hematoma depends upon the location, size, and etiology.
Foods that help
- Avocado oil
- Fatty fish containing omega-3 fatty acids
- Pumpkin seeds
- Citrus fruits
How to prevent
- Manage chronic conditions such as hypertension.
- Control stress.
- Perform meditation and yoga to improve mental health.
- Avoid smoking.
- Take blood thinning drugs only under a physician’s guidance.
- Avoid foods that increase cholesterol.
When to see a doctor
Call your doctor if:
- You feel disoriented
- You experience a sudden and severe headache
- You sense vision changes
- You feel numbness in muscles
- You have the feeling of nausea and vomiting
Woburn Challenge 1996
Problem 5: Plentiful Paths
Given is an M by N grid; on each square of the grid there may or may not be an apple. Let A be the bottom-left square and B be the upper-right square of the grid. Find the path from A to B (shown below), moving up and right only, that passes through the greatest number of squares with apples in them. For this path, output the number of apples on it.
For example, here is a 4 by 4 grid:
     .a.a  <- B
     ..aa
     a.a.
A->  ....
Each square can have at most one apple (this includes squares A and B).
Your program should read in the size of the grid, M N, where 1 ≤ M, N ≤ 100, followed by the locations of the apples, where A is at (1, 1) and B is at (M, N). Input will end with 0 0 and have the same format as the one below. In our example, the input would be
4 4
2 1
2 3
3 3
3 4
4 2
4 4
0 0
Give the number of apples on the path with the most apples. In this case, the answer is 5.
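The problem reduces to a classic grid dynamic program: the best path to any square extends the better of the paths arriving from the left or from below, plus the apple (if any) on the square itself. The following Python sketch is one possible illustration (not the official contest solution), assuming each apple coordinate pair is read as (column, row) — with the 4 by 4 sample both readings give the same answer:

```python
def max_apples(m, n, apples):
    """Most apples on any up/right path from A=(1,1) to B=(m,n)."""
    # grid[y][x] = 1 if there is an apple at column x, row y (1-indexed)
    grid = [[0] * (m + 1) for _ in range(n + 1)]
    for x, y in apples:
        grid[y][x] = 1
    # best[y][x] = most apples on any valid path from A to (x, y);
    # row 0 and column 0 stay zero and act as padding
    best = [[0] * (m + 1) for _ in range(n + 1)]
    for y in range(1, n + 1):
        for x in range(1, m + 1):
            # a path reaches (x, y) either from the left or from below
            best[y][x] = grid[y][x] + max(best[y][x - 1], best[y - 1][x])
    return best[n][m]

# The sample apple list from the problem statement
sample = [(2, 1), (2, 3), (3, 3), (3, 4), (4, 2), (4, 4)]
print(max_apples(4, 4, sample))  # prints 5
```

With M, N ≤ 100 the table has at most 10,000 cells, so this runs comfortably within the time limit.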
Point Value: 10
Time Limit: 2.00s
Memory Limit: 16M
Added: Sep 30, 2008
C++03, PAS, C, HASK, ASM, RUBY, PYTH2, JAVA, PHP, SCM, CAML, PERL, C#, C++11, PYTH3 |
This is the 4th year of Spanish; you will be required to have the Dime book 4.
This level of Spanish is an important year for students planning to continue to do CSEC Spanish. It is the beginning of intermediate Spanish. It introduces more complex grammar structures and advanced vocabulary. At the end of this level of Spanish, students should be able to narrate events that occur regularly, in the present and routinely and express a desire to do something in the future.
In Grade 8 Spanish, students are taught a continuation of basic Spanish. Additionally, students studying Spanish in grade 8 are required to produce a little more than they would previously have been required to do. At the end of Grade 8 Spanish, among many other things, students should be able to conjugate regular AR, ER, and IR verbs and to describe their own and others' houses.
At this level of Spanish, basic Spanish is taught. At the end of this year of Spanish, students should be able to do the following and many others in the target language:
- Say the letters of the alphabet
- Greet friends, adults, and strangers appropriately
- Ask and state others' names
- Ask and state how others are doing
If students study and apply themselves, they will do well. |
As a solitary, highly-efficient predator, the jaguar stands at the top of the Amazon food chain. While visitors to the rainforest may not always see this magnificent big cat, many travelers are fortunate enough to see a jaguar on the banks of the Tambopata River, or when journeying through more remote parts of Tambopata National Reserve.
Tambopata National Reserve is an ideal place to spot the jaguar (Panthera onca) because its forest and wetland systems are home to many of the species that this big cat preys upon, including capybaras, peccaries and tapirs.
Historically, jaguars roamed all the way from the southeastern United States to eastern Argentina. However, over the centuries they have been driven from much of their range by the growing presence of mankind, and they are currently listed as Near Threatened by international conservation groups. Their traditionally broad range has meant that they figure prominently in the mythology of many indigenous peoples, including the Mayas, Aztecs and the diverse ethnic groups of Peru’s Amazon region.
The name “jaguar” is a corruption of the Brazilian Tupi-Guarani people’s name for this big cat. In their language, “yaguara” means “beast”. The largest cat in the Americas, the jaguar is the third largest feline in the world, after the tiger and lion.
The jaguar is an essentially solitary animal. It hunts alone, wandering through the forest across its territory and ambushing prey opportunistically. The bite of the jaguar is unusually powerful even among large felines, and it is capable of perforating the hides of large reptiles and the shells of turtles. Unusually among big cats, it often attacks the head of its prey directly, relying upon its powerful jaws to penetrate the skull and bite into the brain.
Jaguars are adept swimmers and will readily take to the water in pursuit of their prey. They have been spotted crossing the Tambopata River, and will occasionally take down small caiman.
In the wild, jaguars tend to live for between 11 and 15 years, and can live for up to 25 years in captivity. Jaguars are almost always sighted alone. As solitary hunters, they only come together to breed. When seeking a mate, jaguars tend to roam over vast areas of forest, far beyond their normal hunting range. It is believed that jaguars will breed throughout the year, in wet season or dry. Receptive females mark their territory, in addition to becoming increasingly vocal, when seeking a mate. After breeding, the male and female separate, leaving the female to raise her cubs alone. In common with the tiger, female jaguars will not tolerate the presence of any male after the birth of their cubs, given the high risk of infant cannibalism prevalent among these two species.
Mental health disorders affect one in four people, according to the World Health Organization. The National Council for Behavioral Health reports that 43.8 million people in the United States experience a mental illness each year, and nearly half of all adults will struggle with mental health at some point during their lifetime. What’s even more shocking is that nearly two-thirds of people with a known mental disorder never seek help from a health professional.
Stigma, or negative attitudes and perceptions, often stops people from getting the support they need. One study found the perception of people with mental illness as dangerous has increased over time, despite evidence from the American Psychological Association that mental illness doesn’t contribute to violence. Unfortunately, many people also believe that mental illness is shameful and people with mental disorders are lazy.
When those with mental health disorders internalize society’s negative beliefs, the effects can be toxic, warns the National Alliance on Mental Illness (NAMI), since stigma contributes to greater distress, bullying, isolation and even suicide.
The Power of Compassion
The key to overcoming stigma is compassion. Compassion is defined by researcher Emma Seppala as an emotional response to suffering, which involves an authentic desire to help.
Compassion has many powerful physical and psychological benefits. For instance, meaningful connection with others has been shown to speed up recovery from disease, lengthen one’s lifespan, buffer against stress, and broaden one’s perspective.
In other words, when you act compassionately, you’re able to extend understanding and assistance to others and experience the mood boost that comes with it. If you know or love someone with a mental disorder, here are ways you can treat them with greater compassion and do your part to dismantle mental health stigma.
- Practice empathy. Simply validating a person’s struggles can be a healing experience. Don’t dismiss their feelings and instead focus on listening with your full attention. Swap sayings like “it could be worse,” “cheer up” or “look on the bright side” for more understanding ones such as “it sounds like what you’re going through is really difficult.” Empathy allows a person to feel seen and talk freely without fear of being judged.
- Watch your words. Certain language can be offensive to people who live with mental health conditions, so be mindful not to label others. Strike words like “crazy,” “psycho,” “insane,” “disturbed” and “nuts” from your vocabulary when referring to mental health issues. Referring to someone as being “mentally ill” instead of as “a person living with a mental illness” can also have negative impacts. Encourage your loved one to identify themselves by other roles they play, such as spouse, friend or parent, rather than define themselves solely by their condition.
- Understand the symptoms. Research symptoms of different disorders to gain a better understanding of what the person is going through. For example, a person with depression may struggle to get out of bed. Someone with obsessive compulsive disorder, on the other hand, may have trouble getting places on time because of their anxiety, not because they are irresponsible.
- Promote outside support. One of the best ways you can foster compassion is by encouraging the person to seek treatment and support. Mental Health America points out that there are many options available, including psychotherapy, medication and complementary and alternative medicine methods like meditation and yoga. NAMI suggests meeting regularly with a mental health professional, such as a psychiatrist or psychologist, versus relying on a primary care doctor. Support the person in following their treatment plan by offering rides to appointments, providing reminders to take medication or simply providing moral support.
- Take care of yourself. As the friend or family member of someone with a mental disorder, it’s important to take care of yourself, too. Tend to your own emotional and mental well-being so that you can be there for your loved ones when they need you the most.
COVID-19 has now been designated a global pandemic and continues to spread throughout the world. Not getting infected with this potentially lethal virus is at the forefront of many people’s minds, and the very real possibility of extended quarantine has led to shortages of items such as toilet paper, disinfectant and non-perishable food.
While this may seem like a novel experience to most, pandemics are not new to the human condition and have occurred many times throughout history. However, acute illnesses such as the common cold, the flu, or smallpox are difficult to study in the archaeological record, since they do not leave any visible signs of infection on skeletal remains. These infections have a rapid onset, and those infected by them usually either recover or perish before the disease has a chance to affect their bones.
On the other hand, chronic infections have a slower onset and may last long enough for a person’s skeleton to show signs of the condition. Tuberculosis, leprosy, and syphilis are three infections that were quite common in some populations and are visible in the archaeological record. These diseases can last months to years, and without modern treatments, people can be plagued with these horrendous conditions for the duration of their lives. Although we may think of them as a part of history, these three infections continue to affect people that do not have access to modern healthcare.
Syphilis is caused by Treponema pallidum pallidum, a spiral-shaped, mobile bacterium. It is a sexually transmitted infection (STI) with several stages; it is sometimes fatal if left untreated, and the bacterium can also be transmitted from pregnant women to their unborn children. Not everyone who carries the bacterium shows symptoms (latent syphilis), but infected individuals usually begin to experience skin lesions and rashes within the first months of exposure.
If left unchecked, the infection will spread throughout the body and sometimes cause massive sores, which may eat away the cranial bones. It is commonly speculated that during the 1500s, endemic syphilis contributed to the rise in popularity of the powdered wig. Useful for controlling lice populations, powdered wigs were also used to cover up the hair loss and unsightly sores experienced by those suffering from late-stage syphilis.
Leprosy, from the Greek for ‘scaly skin’, has been recognized by humans for thousands of years, but its cause was poorly understood. Infection with Mycobacterium leprae or Mycobacterium lepromatosis causes the condition known as leprosy (a.k.a. Hansen’s Disease). These bacteria destroy nerves throughout the body, causing infected persons to slowly lose their ability to feel pain. Early symptoms include pale or red rashes on the skin, hair loss and numbness. As the disease progresses, secondary infections usually lead to disfigurement of the hands, feet and face.
The transmission of leprosy is still not fully understood, but it is generally believed to be passed via the respiratory tract, not via skin contact, as was commonly believed prior to the modern era. Leprosy is usually contracted by close contact with an infected individual, or as the result of living in poverty in tropical climates where the bacteria are naturally prevalent. It is also possible for the bacteria to pass from animals to humans and vice versa, and the leprosy-causing agent has been found in both red squirrels and armadillos.
Most people that are in contact with the bacteria do not contract leprosy, and mothers cannot pass the bacteria to unborn children. However, there is a long history of social stigma attached to leprosy, and in some areas, afflicted individuals are still forced to reside in ‘leper colonies’. The word ‘leper’ is now considered a derogatory term.
Tuberculosis (TB) is primarily a pulmonary disease caused by infection with Mycobacterium tuberculosis bacteria. Like leprosy, TB has been around since ancient times, and was referred to as ‘phthisis’ in ancient Greece and ‘consumption’ in English-speaking areas throughout history. While TB is primarily an infection of the lungs, it can spread to other regions of the body and can cause identifiable damage to the bones of afflicted individuals.
TB is thought to have originally been transmitted to humans from bovines (cows), and can be passed to humans through close contact with the animals or by consuming meat and unpasteurized milk products. One of the oldest known instances of tuberculosis is from a 17,000 year old bison skeleton. However, TB can also be transmitted from humans to cattle, so it is unclear whether the bacterium originated in bovids or in humans. In humans, the bacterium is transmitted via respiratory aerosols, similar to COVID-19. Also similar to COVID-19, TB is an opportunistic infection and is usually more severe in people with compromised immune systems. Approximately 90% of infected people will experience no symptoms, but if symptoms do develop, 50% of afflicted individuals will die without proper treatment.
A vaccine for TB does exist. However, once symptoms arise, a lengthy treatment using multiple antibiotics is the only cure. Recently there has also been an increase in drug-resistant strains of the bacteria, which are much more difficult to treat. Today, TB remains one of the deadliest infectious diseases in the world. It is estimated that roughly 1/4 of the world’s human population is infected by the bacteria, and that it causes approximately 1.5 million deaths annually.
Historical Diseases are not History
The long history of these diseases is visible in the archaeological record, yet these afflictions themselves are not history. We may think of leprosy as a medieval, or ancient, syndrome, but the truth is that people still suffer from these horrendous diseases. Even though these conditions are treatable, the antibiotics needed are expensive and many people simply do not have access to them. Donate to reputable charities such as Doctors Without Borders if you wish to help fight these terrible infections.
The United States is a country with ever-changing morals, social norms, and ideas. Triggered by significant events such as new laws or wars, the changes that occur usually result in altered attitudes towards existing morals, norms, and ideas. One of the country’s most important changes was the huge cultural shift among young people that took place during the 1960s which had an immense influence on society.
In my opinion, the 1960s might be the most impactful time period in United States history, with the exception of American independence from England and the Emancipation Proclamation issued by President Abraham Lincoln. The 1960s brought many changes for minorities within the United States and also for the new generation of women. Much was accomplished in the 1960s: minorities such as African Americans, Latin Americans and Native Americans were finally given some rights in the United States, and men landed on the moon, an idea presented by President John F. Kennedy that many believed could not be done, but that by the end of the decade became a reality. John F. Kennedy himself was assassinated during the decade.
As I mentioned previously, the sixties were a time of change. For instance, young people, watching their friends and family drafted into the Vietnam War, began to question traditional society and the government. Additionally, women changed their views on their place and role in the family. Also, new ideas emerged, changing the look of families both then and now. In 1960, more than 70 percent of families still looked much like the family of the 1950s, with a man who brought in the family's sole income, children and a stay-at-home wife and mother.
The 1950s were a decade of change. Key events across the decade and the world include the beginning of the Korean War and the Vietnam War, the first ever organ transplant and the introduction of colour TV. Political battles centred around communism and capitalism also dominated the decade. In the 1950s there was more leisure time due to upgrades in household appliances, which improved the likelihood of selling entertainment products such as radios and televisions.
The 1960s in America was a decade where many problems occurred and much change was made. Some of those issues were racial segregation and foreign policy. Two of the most influential and inspirational people then were Martin Luther King Jr., and John F. Kennedy. King was an African American who fought for an end to racial segregation and was committed to this important issue.
Number two, there was a big increase in drug addiction. Number three, crime was high and U.S. inner cities deteriorated into ghettos. Lastly, there were the assassinations of political and human rights leaders. Overall, there were many more positive and negative changes during the 1960s than those I have listed; these are just the few that stuck out to me.
Out of all the decades, there has never been a decade like the sixties. The sixties was filled with diversity, hope, problems, anger and solutions. A lot of the different life-changing events and organizations took place in the sixties. One of the major organizations that took place in the sixties was the Black Panther Party. The main goal for the Black Panthers was not only to protect the African Americans but also to provide them with equal rights and opportunities.
The 1950s and 60s were a time when many hardships occurred, as global tension was high, and as a result many wars broke out, as well as social movements. The historical issues and events of the fifties and sixties were often propelled by popular culture through art and media such as television, paintings and music. The civil rights movement succeeded in bringing equal rights to the African American population within the United States in a peaceful manner thanks to meaningful art forms. The Vietnam War was widely seen as a controversial conflict, and music and television gave Australians insight into what was actually happening, which in turn swayed public opinion of Australia’s involvement with the war.
It is essential to go back to the fifties to be able to understand the sixties historically and sociologically. The fifties brought relief since the Depression and war were over, and now “science was mobilized by industry, and capital was channeled by government as never before.” This new affluence gave the United States the ability to create suburbia and conform to moving in. This affected the sixties because conformity resulted in people rebelling.
The United States had appeared to be dominated by consensus and conformity in the 1950s. The fifties were a decade of reform for the better, led by President Eisenhower. The economy was booming. Further, there was a rise in consumerism which resulted in a domino effect on the economy. On the other hand, issues arose during that time as well, such as the fear of communism.
This was also the decade in which African Americans launched the Civil Rights movement, in 1954. In the midst of the U.S. working to remain the strongest country, African Americans began to revolt from within; this lasted until 1968. This was followed by the Vietnam War, starting in 1955, over who would take control of the Vietnamese government. The Vietnam War lasted until 1975, ending with 58,220 recorded deaths of American soldiers. Aside from the wars and riots going on in the 1950s, life seemed easier for people back then, living the life most people would dream to live today.
The 1960s were a time of great conflict and tension for America. Lyndon B. Johnson became president in 1963, and many social issues were dividing the United States at this time. The fight for equal rights for every citizen, not just white males, caused many riots, protests, and distress for the country. The Vietnam War was taking place on the other side of the world, but was severely affecting Americans back in the States. It led to the Anti-War Movement, which still affects America on foreign relations today.
To what extent do Hollywood films reflect the social and cultural behavior of America? Outline: History of the Hollywood film industry: 1917-1960: the development of the Hollywood film industry, which established most of the styles used to this day: biography, fiction, action, horror, animated, comedy, etc. After World War One, America experienced a cultural boom which resulted in different forms of culture appearing. In order to make films appeal to the audience, various cultural elements were introduced into the production of films.
Many historians view the 1950s as an era of prosperity, conformity, and consensus, and view the 1960s as an era of turbulence, protest, and disillusionment. I agree with many historians and their view of this era. Socially speaking, although the Civil Rights movement had started roughly around 1954, the 1960s was the period when the Civil Rights movement skyrocketed. The 1950s were viewed as prosperous and conformist because of the development of the suburbs.
Common Themes in African-American Literature
African-American literature starts with narratives by slaves in the pre-revolutionary period focused on freedom and abolition of slavery. The period following the Civil War until 1919 is dubbed the Reconstruction period. Its themes were influenced by segregation, lynching, migration and the women’s suffrage movement. The 1920s saw the Harlem Renaissance and the “flowering of Negro literature,” as James Weldon Johnson called it. African-American literature since World War II has delved into modernist high art, black nationalism and postracial identities.
Grisly Narratives of Slavery
The earliest African-American literature was focused on the “indelible stain” of slavery on American soil. The writers focused on themes of slavery, emphasizing the cruelty, indignity and the ultimate dehumanization of slaves. They were mostly written by slaves who had escaped into freedom. Classic slave narratives include the “Narrative of the Life of Frederick Douglass, an American Slave” by Frederick Douglass and “Incidents in the Life of a Slave Girl” by Harriet Jacobs. Slavery and slave narrative are recurring themes in African-American literature adopted in the modern times by writers like Toni Morrison and Alice Walker.
Alienation by Color-Line
“The problem of the twentieth century is the problem of the color line,” W.E.B. Du Bois wrote in “The Souls of Black Folk.” African Americans were free from slavery after the Civil War, but the color line kept them segregated and marginalized. Although the white population had a conception of “the Negro” as a group, it seemed to have no conception of the Negro as an individual. Ralph Ellison’s “Invisible Man” is a shining example of this theme. His book is a cerebral account of a black man who, despite considerable efforts to overcome the color line, finds himself alienated from both blacks and whites.
The New, Angry Negro
The dramatic upheaval in the material conditions of African Americans is reflected in the literature they produced. Rapid industrialization and migration into cities like Chicago and New York created favorable conditions for a reinvented identity. While the theme of servility to dignity was always present in African-American literature, the “New Negro Movement” during the Harlem Renaissance emphasized radicalism verging on militancy in both politics and arts. Writers saw literature as a tool to bring sociopolitical changes, an attitude best expressed by W.E.B. Du Bois’ famous declaration, “all Art is propaganda and ever must be.”
A Journey to Africa
Africa looms large in the imagination of all African-American writers in two ways. Those who crossed the Atlantic on slave ships brought Africa with them to American soil. This Africa survived orally in music and folklore and was later supplemented by writing. In addition, the descendants of slaves looked to Africa for inspiration, for a cure to the trauma of slavery, and out of a permanent sense of nostalgia for the lost homeland. Alex Haley's "Roots" is a classic example of the journey-to-Africa theme.
- Samuel Olorounto; “Studying African-American Literature in Its Global Context”; VCCA Journal; Summer 1992
- W.E.B. Du Bois; “Criteria of Negro Art”
- Paul Reuben; “Perspectives in American Literature”; California State University Stanislaus
- University of North Carolina at Pembroke; “Teaching African American Literature”
Topics covered: Trigonometric integrals and substitution
Note: This video lecture was recorded in the Fall of 2007 and corresponds to the lecture notes for lecture 26 taught in the Fall of 2006.
Instructor: Prof. David Jerison
Lecture Notes (PDF)
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Well, because our subject today is trig integrals and substitutions, Professor Jerison called in his substitute teacher for today. That's me. Professor Miller. And I'm going to try to tell you about trig substitutions and trig integrals. And I'll be here tomorrow to do more of the same, as well. So, this is about trigonometry, and maybe first thing I'll do is remind you of some basic things about trigonometry.
So, if I have a circle, trigonometry is all based on the circle of radius 1 and centered at the origin. And so if this is an angle of theta, up from the x-axis, then the coordinates of this point are cosine theta and sine theta. And so that leads right away to some trig identities, which you know very well. But I'm going to put them up here because we'll use them over and over again today. The first is sin^2(theta) + cos^2(theta) = 1. Remember the convention sin^2 theta secretly means (sin theta)^2. It would be more sensible to write a parenthesis around the sine of theta and then say you square that. But everybody in the world puts the 2 up there over the sin, and so I'll do that too.
So that follows just because the circle has radius 1. But then there are some other identities too, which I think you remember. I'll write them down here. cos(2theta), there's this double angle formula that says cos(2theta) = cos^2(theta) - sin^2(theta). And there's also the double angle formula for the sin(2theta). Remember what that says? 2 sin(theta) cos(theta). I'm going to use these trig identities and I'm going to use them in a slightly different way. And so I'd like to pay a little more attention to this one and get a different way of writing this one out. So this is actually the half angle formula. And that says, I'm going to try to express the cos(theta) in terms of the cos(2theta). So if I know the cos(2theta), I want to try to express the cos theta in terms of it. Well, I'll start with a cos(2theta) and play with that.
OK. Well, we know what this is, it's cos^2(theta) - sin^2(theta). But we also know what the sin^2(theta) is in terms of the cosine. So I can eliminate the sin^2 from this picture. So this is equal to cos^2(theta) minus the quantity 1 - cos^2(theta). I put in what sin^2 is in terms of cos^2 And so that's 2 cos^2(theta) - 1. There's this cos^2, which gets a plus sign. Because of these two minus signs. And there's the one that was there before, so altogether there are two of them.
I want to isolate what cosine is. Or rather, what cos^2 is. So let's solve for that. So I'll put the 1 on the other side. And I get 1 + cos(2theta). And then, I want to divide by this 2, and so that puts a 2 in this denominator here. So some people call that the half angle formula. What it really is for us is it's a way of eliminating powers from sines and cosines. I've gotten rid of this square at the expense of putting in a 2theta here. We'll use that. And, similarly, same calculation shows that sin^2(theta) = (1 - cos(2theta)) / 2. Same cosine, in that formula also, but it has a minus sign. For the sin^2.
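These power-reduction identities are easy to verify symbolically. A minimal sketch, assuming Python with the sympy library is available:

```python
import sympy as sp

theta = sp.symbols('theta')

# Half-angle (power-reduction) identities from the lecture:
cos_sq = (1 + sp.cos(2*theta)) / 2   # should equal cos(theta)**2
sin_sq = (1 - sp.cos(2*theta)) / 2   # should equal sin(theta)**2

# simplify() returns 0 when the two sides agree identically.
assert sp.simplify(sp.cos(theta)**2 - cos_sq) == 0
assert sp.simplify(sp.sin(theta)**2 - sin_sq) == 0
```

The same pattern (subtract the claimed form and simplify to zero) checks any trig identity in the lecture.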
OK. so that's my little review of trig identities that we'll make use of as this lecture goes on. I want to talk about trig identity-- trig integrals, and you know some trig integrals, I'm sure, already. Like, well, let me write the differential form first. You know that d sin theta, or maybe I'll say d sin x, is, let's see, that's the derivative of sin x times dx, right. The derivative of sin x is cos x, dx. And so if I integrate both sides here, the integral form of this is the integral of cos x dx. Is sin x plus a constant. And in the same way, d cos x = -sin x dx. Right, the derivative of the cosine is minus sine. And when I integrate that, I find the integral of sin x dx is -cos x + c. So that's our starting point. And the game today, for the first half of the lecture, is to use that basic-- just those basic integration formulas, together with clever use of trig identities in order to compute more complicated formulas involving trig functions.
So the first thing, the first topic, is to think about integrals of the form sin^n(x) cos^m(x) dx. Where here I have in mind m and n are non-negative integers. So let's try to integrate these. I'll show you some applications of these pretty soon. Looking down the road a little bit, integrals like this show up in Fourier series and many other subjects in mathematics. It turns out they're quite important to be able to do. So that's why we're doing them now. Well, so there are two cases to think about here. When you're integrating things like this. There's the easy case, and let's do that one first. The easy case is when at least one exponent is odd. That's the easy case. So, for example, suppose that I wanted to integrate, well, let's take the case m = 1. So I'm integrating sin^n(x) cos x dx. I'm taking-- Oh, I could do that one. Let's see if that's what I want to take. Yeah. My confusion is that I meant to have this a different power. You were thinking that.
So let's do this case when m = 1. So the integral I'm trying to do is any power of the sine times the cosine. Well, here's the trick. Recognize, use this formula up at the top there to see cos x dx as something that we already have on the blackboard. So, the way to exploit that is to make a substitution. And substitution is going to be u = sin x. And here's why. Then this integral that I'm trying to do is the integral of u^n, that's already a simplification. And then there's that cos x dx. When you make a substitution, you've got to go all the way and replace everything in the expression by things involving this new variable that I've introduced. So I'd better get rid of the cos x dx and rewrite it in terms of du or in terms of u. And I can do that because du, according to that formula, is cos x dx. Let me put a box around that. That's our substitution. When you make a substitution, you also want to compute the differential of the variable that you substitute in. So the cos x dx that appears here is just, exactly, du. And I've replaced this trig integral with something that doesn't involve trig functions at all. This is a lot easier. We can just plug into what we know here. This is u^(n+1) / (n+1) plus a constant, and I've done the integral.
But I'm not quite done with the problem yet. Because to be nice to your reader and to yourself, you should go back at this point, probably, go back and get rid of this new variable that you introduced. You're the one who introduced this variable, you. Nobody except you, really, knows what it is. But the rest of the world knows what they asked for the first place that involved x. So I have to go back and get rid of this. And that's not hard to do in this case, because u = sin x. And so I make this back substitution. And that's what you get. So there's the answer.
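Since differentiation undoes integration, this substitution result can be spot-checked symbolically. A sketch with sympy, taking n = 4 as an arbitrary example:

```python
import sympy as sp

x = sp.symbols('x')
n = 4  # any non-negative integer works here

integrand = sp.sin(x)**n * sp.cos(x)
antiderivative = sp.sin(x)**(n + 1) / (n + 1)

# Differentiating the claimed antiderivative should recover the integrand.
assert sp.simplify(sp.diff(antiderivative, x) - integrand) == 0
```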
OK, so the game was, I use this odd power of the cosine here, and I could see it appearing as the differential of the sine. So that's what made this substitution work. Let's do another example to see how that works out in a slightly different case. So here's another example. Now I do have an odd power. One of the exponents is odd, so I'm in the easy case. But it's not 1. The game now is to use this trig identity to get rid of the largest even power that you can, from this odd power here. So use sin^2 x = 1 - cos^2 x, to eliminate a lot of powers from that odd power. Watch what happens. So this is not really a substitution or anything, this is just a trig identity. This sine cubed is sine squared times the sine. And the sine squared is 1 - cos^2 x. And then I have the remaining sin x. And then I have cos^2 x dx. So let me rewrite that a little bit to see how this works out. This is the integral of cos^2 x minus, and then there's the product of these two. That's cos^4 x times sin x dx.
So now I'm really exactly in the situation that I was in over here. I've got a single power of a sine or cosine. It happens that it's a sine here. But that's not going to cause any trouble, we can go ahead and play the same game that I did there. So, so far I've just been using trig identities. But now I'll use a trig substitution. And I think I want to write these as powers of a variable. And then this is going to be the differential of that variable. So I'll take u to be cos x, and that means that du = -sin x dx. There's the substitution. So when I make that substitution, what do we get. Cosine squared becomes u^2. Cosine to the 4th becomes u^4, and sin x dx becomes not quite du, watch for the sign, watch for this minus sign here. It becomes -du. But that's OK. The minus sign comes outside. And I can integrate both of these powers, so I get -u^3 / 3. And then this 4th power gives me a 5th power, when I integrate. And don't forget the constant. Am I done? Not quite done. I have to back substitute and get rid of my choice of variable, u, and replace it with yours. Questions?
PROFESSOR: There should indeed. I forgot this minus sign when I came down here. So these two gang up to give me a plus. Was that what the other question was about, too? Thanks. So let's back substitute. And I'm going to put that over here. And the result is, well, I just replace the u by cosine of x. So this is -cos^3(x) / 3 plus, thank you, cos^5(x) / 5 + c. And there's the answer. By the way, you can remember one of the nice things about doing an integral is it's fairly easy to check your answer. You can always differentiate the thing you get, and see whether you get the right thing when you go back. It's not too hard to use the power rules and the differentiation rule for the cosine to get back to this if you want to check the work.
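That differentiation check can be done symbolically as well. A quick sketch with sympy, verifying that -cos^3(x)/3 + cos^5(x)/5 really is an antiderivative of sin^3(x) cos^2(x):

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative obtained from the substitution u = cos x
F = -sp.cos(x)**3 / 3 + sp.cos(x)**5 / 5
integrand = sp.sin(x)**3 * sp.cos(x)**2

# d/dx F should equal the original integrand.
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```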
Let's do one more example, just to handle an example of this easy case, which you might have thought of at first. Suppose I just want to integrate a cube. sin^3 x. No cosine in sight. But I do have an odd power of a trig function, of a sine or cosine. So I'm in the easy case. And the procedure that I was suggesting says I want to take out the largest even power that I can, from the sin^3. So I'll take that out, that's a sin^2, and write it as 1 - cos^2. Well, now I'm very happy. Because it's just like the situation we had somewhere on the board here. It's just like the situation we had up here. I've got a power of a cosine times sin x dx.
So exactly the same substitution steps in. You get, and maybe you can see what happens without doing the work. Shall I do the work here? I make the same substitution. And so this is (1 - u^2) times -du. Which is u - u^3 / 3. But then I want to put this minus sign in place, and so that gives me -u + u^3 / 3 plus a constant. And then I back substitute and get -cos x + cos^3 x / 3. So this is the easy case. If you have some odd power to play with, then you can make use of it and it's pretty straightforward.
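Again the result of back-substituting -u + u^3/3 with u = cos x can be checked by differentiation. A sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative of sin^3(x): back-substitute u = cos x into -u + u**3/3
F = -sp.cos(x) + sp.cos(x)**3 / 3

assert sp.simplify(sp.diff(F, x) - sp.sin(x)**3) == 0
```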
OK the harder case is when you don't have an odd power. So what's the program? I'm going to do the harder case, and then I'm going to show you an example of how to integrate square roots. And do an application, using these ideas from trigonometry. So I want to keep this blackboard. Maybe I'll come back and start here again. So the harder case is when they're only even exponents. I'm still trying to integrate the same form. But now all the exponents are even. So we have to do some game. And here the game is use the half angle formula. Which I just erased, very sadly, on the board here. Maybe I'll rewrite them over here so we have them on the board. I think I remember what they were.
So the game is I'm going to use that half angle formula to start getting rid of those even powers. Half angle formula written like this, exactly, talks about-- it rewrites even powers of sines and cosines. So let's see how that works out in an example. How about just the cosine squared for a start. What to do? I can't pull anything out. I could rewrite this as 1 - sin^2, but then I'd be faced with integrating the sin^2, which is exactly as hard. So instead, let's use this formula here. This is really the same as (1+cos(2theta)) / 2. And now, this is easy. It's got two parts to it. Integrating one half gives me theta over-- Oh. Miraculously, the x turned into a theta. Let's put it back as x. I get x/2 by integrating 1/2. So, notice that something non-trigonometric occurs here when I do these even integrals. x/2 appears. And then the other one, OK, so this takes a little thought. The integral of the cosine is the sine, or is it minus the sine. Negative sine. Shall we take a vote? I think it's positive. And so you get sin(2x), but is that right? Over 2. If I differentiate the sin(2x), this 2 comes out. And would give me an extra 2 here. So there's an extra 2 that I have to put in here when I integrate it. And there's the answer.
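The result just obtained, x/2 + sin(2x)/4, can be verified the same way. A sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# Claimed antiderivative of cos^2(x) from the half-angle formula
F = x/2 + sp.sin(2*x)/4

# Its derivative is 1/2 + cos(2x)/2, which simplifies to cos^2(x).
assert sp.simplify(sp.diff(F, x) - sp.cos(x)**2) == 0
```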
This is not a substitution. I just played with trig identities here. And then did a simple trig integral, getting your help to get the sign right. And thinking about what this 2 is going to do. It produces a 2 in the denominator. But it's not applying any complicated thing. It's just using this identity. Let's do another example that's a little bit harder.
This time, sin^2 times cos^2. Again, no odd powers. I've got to work a little bit harder. And what I'm going to do is apply those identities up there. Now, what I recommend doing in this situation is going over to the side somewhere. And do some side work. Because it's all just playing with trig functions. It's not actually doing any integrals for a while. So, I guess one way to get rid of the sin^2 and the cos^2 is to use those identities and so let's do that. So the sine squared is (1 - cos(2x)) / 2. And the cosine squared is (1 + cos(2x)) / 2. So I just substitute them in. And now I can multiply that out. And what I have is a difference times a sum. So you know a formula for that. Taking the product of these two things, well there'll be a 4 in the denominator. And then in the numerator, I get the square of this minus the square of this. (a-b)(a+b) = a^2 - b^2. So I get (1 - cos^2(2x)) / 4. Well, I'm a little bit happier, because at least I don't have 2 different squares. I still have a square, and want to integrate this. I'm still not in the easy case. I got myself back to an easier hard case. But we do know what to do about this. Because I just did it up there. And I could play into this formula that we got. But I think it's just as easy to continue to calculate here. Use the half angle formula again for this, and continue on your way.
So I get a 1/4 from this bit. And then minus 1/4 of cos^2(2x). And when I plug 2x in for theta, there in the top board, I'm going to get a 4x on the right-hand side. So it comes out like that. And I guess I could simplify that a little bit more. This is a 1/4. Oh, but then there's a 2 here. It's half that. So then I can simplify a little more. It's 1/4 - 1/8, which is 1/8. And then I have minus 1/8 cos(4x).
OK, that's my side work. I just did some trig identities over here, and rewrote sine squared times cosine squared as something which involves no powers of trig, just cosine by itself and a constant. So I can take that and substitute it in here. And now the integration is pretty easy. 1/8 minus cos(4x)/8, dx, which is, OK, the 1/8 is going to give me x/8. The integral of cosine is plus or minus the sine. The derivative of the sine is plus the cosine. So it's going to be plus the-- Only there's a minus here. So it's going to be minus sin(4x) / 8, but then I have an additional factor in the denominator. And what's it going to be? I have to put a 4 there. So we've done that calculation, too. If you keep doing this kind of process, these two kinds of procedures, you can now integrate any expression that has a power of a sine times a power of a cosine in it, by using these ideas. Now, let's see.
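The harder example can be checked the same way. A small numerical sketch (not from the lecture) differentiating the result x/8 - sin(4x)/32 and comparing it against sin^2(x) cos^2(x):

```python
import math

def F(x):
    # result from the lecture: integral of sin^2(x) cos^2(x) dx = x/8 - sin(4x)/32
    return x / 8 - math.sin(4 * x) / 32

def deriv(f, x, h=1e-6):
    # symmetric finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

max_err = max(abs(deriv(F, x) - (math.sin(x) * math.cos(x)) ** 2)
              for x in [0.1, 0.9, 1.7, 2.5])
print(max_err < 1e-6)  # True
```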
Oh, let me give you an alternate method for this last one here. I know what I'll do. Let me give an alternate method for doing, really doing the side work over there. I'm trying to deal with sin^2 times cos^2. Well that's the square of sin x cos x. And sin x cos x shows up right here. In another trig identity. So we can make use of that, too. That reduces the number of factors of sines and cosines by 1. So it's going in the right direction. This is equal to 1/2 sin(2x), squared. Sine times cosine is 1/2-- Say this right. It's sin(2x) / 2, and then I want to square that.
So what I get is sin^2(2x) / 4. Which is, well, I'm not too happy yet, because I still have an even power. Remember, I'm trying to integrate this thing in the end, and even powers are bad. I try to get rid of them by using that formula, the half angle formula. So I can apply that to sin(2x) here. I get 1/4 of (1 - cos(4x)) / 2. That's what the half angle formula says for sin^2(2x). And that's exactly the same expression that I got up there. So this is just an alternate way to play this game of using the half angle formula.
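Both routes of side work land on the same identity, which is easy to confirm numerically. A quick check (not from the lecture) that sin^2(x) cos^2(x) = (sin(2x)/2)^2 = (1 - cos(4x))/8 at a spread of sample points:

```python
import math

# Both side-work routes reduce sin^2(x) cos^2(x) to (1 - cos(4x)) / 8;
# verify the identity pointwise.
max_err = max(abs((math.sin(x) * math.cos(x)) ** 2 - (1 - math.cos(4 * x)) / 8)
              for x in [i * 0.3 for i in range(20)])
print(max_err < 1e-12)  # True
```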
OK, let's do a little application of these things and change the topic a little bit. So here's the problem. So this is an application and example of a real trig substitution. So here's the problem I want to look at. OK, so I have a circle whose radius is a. And I cut out from it a sort of tab, here. This tab here. And the height of this thing is b. So this length is a number b. And what I want to do is compute the area of that little tab. That's the problem. So there's an arc over here. And I want to find the area of this, for a and b, in terms of a and b. So the area, well, I guess one way to compute the area would be to take the integral of y dx. You've seen the idea of splitting this up into vertical strips whose height is given by a function y(x). And then you integrate that. That's an interpretation for the integral. The area is given by y dx. But that's a little bit awkward, because my formula for y is going to be a little strange. It's constant, value of b, along here, and then at this point it becomes this arc, of the circle. So working this out, I could do it but it's a little awkward because expressing y as a function of x, the top edge of this shape, it's a little awkward, and takes two different regions to express.
So, a different way to say it is to say x dy. Maybe that'll work a little bit better. Or maybe it won't, but it's worth trying. I could just as well split this region up into horizontal strips, whose width is dy, and whose length is x. Now I'm thinking of this as a function of y. This is the graph of a function of y. And that's much better, because the function of y is, well, it's the square root of a^2 - y^2, isn't it. That's from x^2 + y^2 = a^2. So that's what x is. And that's what I'm asked to integrate, then. Square root of (a^2 - y^2), dy. And I can even put in limits of integration. Maybe I should do that, because this is supposed to be an actual number. I guess I'm integrating it from y = 0, that's here, to y = b. So this is what I want to find. This is an integral formula for the area of that region.
And this is a new form. I don't think that you've thought about integrating expressions like this in this class before. So, it's a new form and I want to show you how to do it, how it's related to trigonometry. It's related to trigonometry through that exact picture that we have on the blackboard. After all, this a^2 - y^2 is the formula for this arc. And so, what I propose is that we try to exploit the connection with the circle and introduce polar coordinates. So, here if I measure this angle then there are various things you can say. Like the coordinates of this point here are a cos(theta), a-- Well, I'm sorry, it's not. That's an angle, but I want to call it theta_0. And, in general you know that the coordinates of this point are (a cos(theta), a sin(theta)). If the radius is a, then the angle here is theta. So x = a cos(theta), and y = a sin(theta), just from looking at the geometry of the circle. So let's make that substitution. y = a sin(theta). I'm using the picture to suggest that maybe making the substitution is a good thing to do. Let's follow along and see what happens.
If that's true, what we're interested in is integrating the square root of a^2 - y^2. Which is the square root of a^2 minus this, a^2 sin^2(theta). And, well, that's equal to a cos(theta). That's just sin^2 + cos^2 = 1, all over again. It's also x. This is x. And this was x. So there are a lot of different ways to think of this. But no matter how you say it, the thing we're trying to integrate, the square root of a^2 - y^2, is under this substitution a cos(theta). So I'm interested in integrating the square root of (a^2 - y^2), dy. And I'm going to make this substitution y = a sin(theta). And so under that substitution, I've decided that the square root of a^2 - y^2 is a cos(theta). That's this. What about the dy? Well, I'd better compute the dy. So dy, just differentiating this expression, is a cos(theta) d theta. So let's put that in. dy = a cos(theta) d theta. OK. Making that trig substitution, y = a sin(theta), has replaced this integral that has a square root in it, and no trig functions, with an integral that involves no square roots and only trig functions. In fact, it's not too hard to integrate this now, because of the work that we've done. The a^2 comes out. This is cos^2(theta) d theta. And maybe we've done that example already today. I think we have. Maybe we can think it through, but maybe the easiest thing is to look back at notes and see what we got before. That was the first example in the hard case that I did. And what it came out to, I used x instead of theta at the time. So this is a good step forward. I started with this integral that I really didn't know how to do by any means that we've had so far. And I've replaced it by a trig integral that we do know how to do. And now I've done that trig integral. But we're still not quite done, because of the problem of back substituting. I'd like to go back and rewrite this in terms of the original variable, y.
Or, I'd like to go back and rewrite it in terms of the original limits of integration that we had in the original problem.
In doing that, it's going to be useful to rewrite this expression and get rid of the sin(2theta). After all, the original y was expressed in terms of sin(theta), not sin(2theta). So let me just do that here, and say that this, in turn, is equal to a^2 theta / 2 plus, well, sin(2theta) = 2 sin(theta) cos(theta). And so, when there's a 4 in the denominator, what I'll get is sin(theta) cos(theta) / 2. I did that because I'm getting closer to the original terms that the problem started with. Which was sin(theta).
So let me write down the integral that we have now. The square root of a^2 - y^2, dy is, so far, what we know is a^2 (theta / 2 + sin(theta) cos(theta) / 2) + c. But I want to go back and rewrite this in terms of the original value. The original variable, y. Well, what is theta in terms of y? Let's see. y in terms of theta was given like this. So what is theta in terms of y? Ah. So here the fearsome arcsine rears its head, right? Theta is the angle so that y = a sin(theta). So that means that theta is the arcsine, or sine inverse, of y/a. So that's the first thing that shows up here. arcsin(y/a), all over 2. That's this term. Theta is arcsin(y/a) / 2. What about the other side, here? Well sine and cosine, we knew what they were in terms of y and in terms of x, if you like. Maybe I'll put the a^2 inside here. That makes it a little bit nicer. Plus, and the other term is a^2 sin(theta) cos(theta). So the a sin(theta) is just y. Maybe I'll write this (a sin(theta)) (a cos(theta)) / 2 + c. And so I get the same thing. And now here a sin(theta), that's y. And what's the a cos(theta)? It's x, or, if you like, it's the square root of a^2 - y^2. And so there I've rewritten everything, back in terms of the original variable, y. And there's an answer.
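The back-substituted antiderivative can be spot-checked numerically too. A small sketch (not part of the lecture) differentiating a^2 arcsin(y/a)/2 + y sqrt(a^2 - y^2)/2 for one illustrative value of a and comparing against the integrand:

```python
import math

a = 3.0  # any positive radius works for this check

def F(y):
    # antiderivative from the back-substitution:
    # integral of sqrt(a^2 - y^2) dy = a^2 arcsin(y/a)/2 + y sqrt(a^2 - y^2)/2
    return a**2 * math.asin(y / a) / 2 + y * math.sqrt(a**2 - y**2) / 2

def deriv(f, y, h=1e-6):
    # symmetric finite-difference approximation of f'(y)
    return (f(y + h) - f(y - h)) / (2 * h)

# F'(y) should match sqrt(a^2 - y^2) for y strictly inside (-a, a)
max_err = max(abs(deriv(F, y) - math.sqrt(a**2 - y**2)) for y in [0.0, 1.0, 2.0, 2.5])
print(max_err < 1e-5)  # True
```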
So I've done this indefinite integration of a form-- of this quadratic, this square root of something which is a constant minus y^2. Whenever you see that, the thing to think of is trigonometry. That's going to play into the sin^2 + cos^2 identity. And the way to exploit it is to make the substitution y = a sin(theta). You could also make a substitution y = a cos(theta), if you wanted to. And the result would come out to exactly the same in the end.
I'm still not quite done with the original problem that I had, because the original problem asked for a definite integral. So let's just go back and finish that as well. So the area was the integral from 0 to b of this square root. So I just want to evaluate the right-hand side here. The answer that we came up with, this indefinite integral. I want to evaluate it at 0 and at b. Well, let's see. When I evaluate it at b, I get a^2 arcsin(b/a) / 2 plus y, which is b, times the square root of a^2 - b^2, putting y = b, divided by 2. So I've plugged in y = b into that formula, this is what I get. Then when I plug in y = 0, well the, sine of 0 is 0, so the arcsine of 0 is 0. So this term goes away. And when y = 0, this term is 0 also. And so I don't get any subtracted terms at all. So there's an expression for this.
Notice that this arcsin(b/a), that's exactly this angle. arcsin(b/a), it's the angle that you get when y = b. So this theta is the arcsin(b/a). Put this over here. That is theta_0. That is the angle that the corner makes. So I could rewrite this as a^2 theta_0 / 2 plus b times the square root of a^2 - b^2, over 2. Let's just think about this for a minute. I have these two terms in the sum, is that reasonable? The first term is a^2 theta_0 / 2. That's the radius squared times this angle, times 1/2. Well, I think that is exactly the area of this sector. a^2 theta / 2 is the formula for the area of the sector. And this one, this is the vertical elevation. This is the horizontal. The square root of a^2 - b^2 is this distance. So the right-hand term is b times the square root of a^2 - b^2 divided by 2, and that's the area of that triangle. So using a little bit of geometry gives you the same answer as all of this elaborate calculus. Maybe that's enough cause for celebration for us to quit for today.
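The sector-plus-triangle answer can also be confirmed by brute force: a midpoint Riemann sum for the integral from 0 to b of sqrt(a^2 - y^2) dy should agree with a^2 theta_0 / 2 + b sqrt(a^2 - b^2) / 2. A sketch with illustrative values of a and b (any 0 < b < a would do):

```python
import math

a, b = 2.0, 1.2  # illustrative radius and tab height, with b < a

# Midpoint Riemann sum for the integral from 0 to b of sqrt(a^2 - y^2) dy
n = 100_000
dy = b / n
riemann = sum(math.sqrt(a**2 - ((i + 0.5) * dy) ** 2) for i in range(n)) * dy

# Geometric answer from the lecture: sector area plus triangle area
theta0 = math.asin(b / a)
geometric = a**2 * theta0 / 2 + b * math.sqrt(a**2 - b**2) / 2

print(abs(riemann - geometric) < 1e-6)  # True
```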
Today's students all know what the Declaration of Independence is. One of the things they will discover in Understanding the Declaration of Independence is what a beautiful, powerful piece of writing it is, as they read Thomas Jefferson's rough draft in its entirety. They may also be surprised to learn that its authors intended the document as something like a press release to let people know that the Continental Congress had voted for independence. They never envisioned the document becoming a cornerstone of our modern, democratic nation. Readers will learn what effect the Declaration of Independence had on life at the time and how it polarized the people. A fascinating discussion on how the document is perceived today and its relevancy is also included.
Provides background information on the circumstances that led to the writing of the Declaration of Independence, and discusses its style and literary merit, its effectiveness at the time, and its subsequent influence. |
The Scyphozoa are a class within the phylum Cnidaria, sometimes referred to as the "true jellyfish". The class name Scyphozoa comes from the Greek word skyphos, denoting a kind of drinking cup and alluding to the cup shape of the organism. Most species of Scyphozoa have two life history phases, including the planktonic medusa or jellyfish form, which is most evident in the warm summer months, and an inconspicuous, but longer-lived, bottom-dwelling polyp, which seasonally gives rise to new medusae. Most of the large, often colorful, and conspicuous jellyfish found in coastal waters throughout the world are Scyphozoa. They typically range from 2 to 40 cm (0.79 to 15.75 in) in diameter, but the largest species, Cyanea capillata, can reach 2 metres (6.6 ft) across. Scyphomedusae are found throughout the world's oceans, from the surface to great depths; no Scyphozoa occur in freshwater (or on land).
As medusae, they eat a variety of crustaceans and fish, which they capture using stinging cells called nematocysts. The nematocysts are located throughout the tentacles that radiate downward from the edge of the umbrella dome, and also cover the four or eight oral arms that hang down from the central mouth. Some species, however, are instead filter feeders, using their tentacles to strain plankton from the water.
Aurelia aurita (also called the moon jelly, moon jellyfish, common jellyfish, or saucer jelly) is a widely studied species of the genus Aurelia. All species in the genus are closely related, and it is difficult to identify Aurelia medusae without genetic sampling; most of what follows applies equally to all species of the genus. The jellyfish is translucent, usually about 25–40 cm (10–16 in) in diameter, and can be recognized by its four horseshoe-shaped gonads, easily seen through the top of the bell. It feeds by collecting medusae, plankton, and mollusks with its tentacles, and bringing them into its body for digestion. It is capable of only limited motion, and drifts with the current, even when swimming.
The brown-banded moon jelly (Aurelia limbata) is much less common than the moon jelly and is distinguished by its dark brown margin and many-branched subumbrellar canals. It is an epipelagic species that occurs in arctic waters. The umbrella diameter of this species may vary from 16 to 40 cm. Molecular analyses have demonstrated that all currently recognized morphospecies of Aurelia are polyphyletic, and that A. limbata includes at least two molecular species.
The northern sea nettle (Chrysaora melanaster), also called a brown jellyfish, is a species of jellyfish native to the northern Pacific Ocean and adjacent parts of the Arctic Ocean. (It is sometimes referred to as a Pacific sea nettle, but this name is also used for Chrysaora fuscescens; the name Japanese sea nettle was used for this species, but that name now exclusively means Chrysaora pacifica.) This jelly's medusa can reach 60 centimeters in length, with tentacles growing up to three meters. The number of tentacles is up to 24 (8 per octant). It dwells at depths of up to 100 meters, where it feeds on copepods, larvaceans, small fish, large zooplankton, and other jellies. The sting is mild, although it can cause serious skin irritation and burning.
Phacellophora camtschatica, known as the fried egg jellyfish or egg-yolk jellyfish, is a very large jellyfish, with a bell up to 60 cm (2 ft) in diameter and sixteen clusters of up to a few dozen tentacles, each up to 6 metres (20 ft) long. It has traditionally been included in the family Ulmaridae, but is now considered the only member of the family Phacellophoridae. This cool-water species can be found in many parts of the world's oceans. It feeds mostly on smaller jellyfish and other gelatinous zooplankton, which become ensnared in the tentacles (Strand & Hamner, 1988). Because the sting of this jellyfish is so weak, many small crustaceans, including larval crabs (Cancer gracilis) and Amphipoda, regularly ride on its bell and even steal food from its oral arms and tentacles (Towanda & Thuesen, 2006). The life cycle of this jellyfish is well known (Widmer 2006), because it is kept in culture at the Monterey Bay Aquarium. It alternates between a benthic stage that is attached to rocks and piers that reproduces asexually and the planktonic stage that reproduces sexually in the water column; there are both males and females in the plankton.
The lion's mane jellyfish (Cyanea capillata), also known as hair jelly, is the largest known species of jellyfish. Its range is confined to cold, boreal waters of the Arctic, northern Atlantic, and northern Pacific Oceans. It is common in the English Channel, Irish Sea, North Sea, and in western Scandinavian waters down to Kattegat and Øresund. It may also drift into the south-western part of the Baltic Sea (where it cannot breed due to the low salinity). Similar jellyfish, which may be the same species, are known to inhabit seas near Australia and New Zealand. The largest recorded specimen, found washed up on the shore of Massachusetts Bay in 1870, had a bell (body) with a diameter of 2.3 metres (7 ft 6 in) and tentacles 37 m (120 ft) long. Lion's mane jellyfish have been observed below 42°N latitude for some time, specifically in the larger bays of the east coast of the United States.
Literator : Journal of Literary Criticism, Comparative Linguistics and Literary Studies - Oor Austro-Nederlands en die oorsprong van Afrikaans
On Austro-Dutch and the origin of Afrikaans
A widely accepted view of the origin of Afrikaans holds that the new language developed autochthonously, after 1652 when the language of the early Cape settlers was influenced by imported slaves speaking Malay and Portuguese, and by the pidgin talk of the Cape Khoikhoi. This "autochthonous hypothesis", however, does not take cognizance of the fact that shortened (deflected) Dutch verb forms found in Afrikaans, for instance, are also found in loan words in the Ceylon-Portuguese creole, as well as in Indonesian, and Malay-influenced languages of Indonesia. Moreover, large numbers of Dutch East India Company sojourners, who had acquired an "adapted" form of Dutch during their stay in the East, spent a significant time at the Cape on their return voyage. The argument is put forward that they brought with them a number of language features clearly comparable with "distinctive features" in incipient (and developed) Afrikaans, such as the shortened verb and the use of the perfect instead of imperfect verb forms to indicate a simple past tense. The variety of Dutch spoken by them is called Austro-Dutch, which, it is argued, forms the basis of an "oceanic hypothesis" to add a new dimension to theories about the formation of Afrikaans.
A British-led team of astronomers using The University of Manchester’s Lovell Telescope in Cheshire have discovered an object that appears to be an invisible galaxy made almost entirely of dark matter — the first ever detected. A dark galaxy is an area in the universe containing a large amount of mass that rotates like a galaxy, but contains no stars. Without any stars to give light, it could only be found using radio telescopes.
Following its initial detection at the Jodrell Bank Observatory, the sighting was confirmed with the Arecibo telescope in Puerto Rico. The unknown material that is thought to hold these galaxies together is known as 'dark matter', but scientists still know very little about what that is.
The international team from the UK, France, Italy and Australia has been searching for dark galaxies using not visible light, but radio waves. In the Virgo cluster of galaxies, about 50 million light years away, they found a mass of hydrogen atoms a hundred million times the mass of the Sun.
Dr Robert Minchin from Cardiff University is one of the UK astronomers who discovered the mysterious galaxy, named VIRGOHI21, and explains: "From the speed it is spinning, we realised that VIRGOHI21 was a thousand times more massive than could be accounted for by the observed hydrogen atoms alone. If it were an ordinary galaxy, then it should be quite bright and would be visible with a good amateur telescope. But, even using the large Isaac Newton Optical Telescope in La Palma, no trace of stars was seen. It must thus contain matter that we cannot see, so-called dark matter."
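The reasoning "from the speed it is spinning" is the standard dynamical-mass argument: material orbiting at speed v at radius r implies an enclosed mass of roughly v^2 r / G. The numbers below are purely illustrative, not the team's actual measurements of VIRGOHI21:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

def dynamical_mass(v_m_s, r_m):
    # Mass enclosed within radius r for circular orbital speed v: M = v^2 * r / G
    return v_m_s**2 * r_m / G

# Illustrative values only: 100 km/s rotation speed at a 15 kpc radius
mass = dynamical_mass(100e3, 15 * KPC)
print(f"about {mass / M_SUN:.1e} solar masses")  # about 3.5e+10 solar masses
```

Comparing a mass obtained this way with the mass of the observed hydrogen is what lets astronomers infer how much unseen matter must be present.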
Professor Andrew Lyne, Director of the Jodrell Bank Observatory, commented: "We are delighted that the efforts by engineers at the Observatory and Cardiff University in building the Multi-Beam receiver system used for these observations had proved so fruitful. This exciting discovery shows that radio telescopes still have a very major role in helping to understand the Universe in which we live."
Further details, supporting images and contact details can be found at: http://www.jb.man.ac.uk/news/darkgalaxy/ |
We receive more information about our surroundings through our visual sense than through any of our other senses. We are able to interpret the shapes, colors, and dimensions of objects by the light rays they give off. When light rays enter the eye, they are bent, or refracted, by the cornea (the clear tissue on the front of the eye) and the lens (the transparent structure inside the eye) so that they are focused directly on the retina (the tissue at the back of the eye where visual sensory receptors are located). The retina then transmits these images to the brain, where they are processed.
When the light rays are focused perfectly on the retina, the result is 20/20, or normal, vision.
When light rays are not focused, or refracted, precisely onto the retina, a "refractive error" results.
Myopia is a refractive error that can occur when the curve of the cornea is too steep or the eye is too long in relationship to its corneal curvature. In this case, light rays focus in front of the retina instead of on its surface, which results in blurry distance vision.
Hyperopia can occur if the cornea is too flat or the eye is too short. This results in light rays focusing at a hypothetical point beyond the retina, causing blurry near vision.
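The front-of-the-retina versus behind-the-retina distinction can be made concrete with a toy single-lens eye model. The numbers and the simplifications below are illustrative only, not clinical values: a lens of power P diopters focuses parallel rays at a distance of 1/P meters, and we compare that focal distance with the eye's axial length.

```python
def classify(eye_power_D, axial_length_m):
    # Toy model: parallel rays converge at 1/P meters behind the lens.
    focal_m = 1.0 / eye_power_D
    if abs(focal_m - axial_length_m) < 1e-4:
        return "emmetropia (normal)"   # focus lands on the retina
    # focus in front of the retina -> myopia; behind it -> hyperopia
    return "myopia" if focal_m < axial_length_m else "hyperopia"

print(classify(60.0, 1 / 60.0))   # emmetropia (normal)
print(classify(60.0, 0.0250))     # eye too long  -> myopia
print(classify(60.0, 0.0155))     # eye too short -> hyperopia
```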
Astigmatism is a refractive error that results from the cornea being more curved in one direction than the other (like a football). This causes light rays to focus in more than one point on the retina, which makes vision blurry and often distorted, at all distances. Astigmatism often occurs in combination with myopia and hyperopia. (Photo courtesy of EyeSmart: http://www.geteyesmart.org/eyesmart/diseases/astigmatism/index.cfm)
Presbyopia is the normal process of having more difficulty focusing at near distance that occurs with advancing age. In order for our eyes to focus on objects at a near distance, the lens in the eye needs to change shape. However, as we approach middle age, the lens begins to thicken and loses its flexibility. This results in blurred vision at near distance.
Glasses or contact lenses often help correct these refractive errors. LASIK and other vision correction procedures are designed to help eliminate or reduce the need for eyeglasses or contact lenses. |
Sleep is a vital part of health and wellness, but it is also something that seems to keep slipping from our increasingly busy lives. Getting enough good quality sleep is important in allowing the body to grow and recover from daily stressors as well as illness and disease. Chronic lack of sleep can lead to more than just fatigue during the day. It may be the root cause of a number of issues, including the following:
1. Brain: Sleep is when your brain rests its neurons and strengthens newly formed neural pathways. Without adequate sleep, your brain becomes exhausted. You will feel sleepy, or have trouble concentrating during work or school. Lack of sleep can also put you at increased risk for mood swings, depression and impulsive (and possibly risky) decision-making. Your alertness can also be affected, putting yourself and others at risk if you are driving or operating heavy machinery.
2. Immune System: Sleep is an optimal time for your body to produce antibodies and other cells that can fight infections. If your body is not given enough time to do this, your immune response will be weakened and you may be more susceptible to infection or illness. Also, without enough sleep, it is likely that an illness or infection might take longer to be eradicated from your system.
3. Digestive System: Lack of sleep causes various hormones to be released in non-optimal quantities. For example, if your body is sleep deprived, it will release an increased amount of ghrelin, a hormone that stimulates appetite. It will also release a decreased amount of leptin, which is the hormone that tells your brain you are full. So, you now feel more hungry, but also don't feel full with the same amount of food that would normally be enough for you. Lack of sleep can also cause your body to crave simple carbohydrates: sugars and processed bread products. These foods provide your body with the quick burst of energy it feels it needs when you are sleep-deprived.
4. Cardiovascular System: In addition to the above digestive hormones, your body will also release elevated amounts of the stress hormone, cortisol when you have not gotten enough quality sleep. An increase in cortisol can cause added stress to the cardiovascular system, resulting in hypertension or high blood pressure, and increasing the risk of heart disease and even stroke.
So, there are multiple issues that can occur with sleep deprivation, but not everyone has an easy time falling and staying asleep. Here are some tips to help you fall asleep quicker and stay asleep longer:
1. Don't take long naps too late in the afternoon.
2. Avoid alcohol, caffeine, and nicotine too close to bed time.
3. Save high intensity exercise for the morning or early afternoon. Gentle yoga or similar exercises may be beneficial closer to bed time to help you relax. A regular exercise schedule can help regulate your sleep quality.
4. Avoid large meals too close to bed time.
5. Avoid blue light from phone, computer or tablet screens one hour before bed time. If you must look at your screens, you can download a blue light filter such as f.lux to help decrease the effects of blue light on your sleep schedule, or use the night mode setting.
6. Use your bed for sleep. Try to avoid doing work in bed if at all possible.
7. If you're still having trouble sleeping, try massage therapy or physiotherapy to uncover underlying causes of aches and pains that may be aggravated by your sleep position. |
Difference Between Further and Farther
Further vs Farther
'Further' and 'farther' are two similar sounding words that have similar connotations in usage too. The difference at first glance is only in the second letter of the two words, and they are used almost interchangeably in daily parlance. The fact is that there is a world of difference between the two words. This is a clear instance underscoring the difference a single letter can make when used in two words.
Farther is primarily an adverb. Further too is an adverb; but it is also used as an adjective. Both mean the same when used to refer to the concept of distance. However, there are some shades of difference in the meaning of these words and it is this that makes the speaker of English confused about the words.
The words ‘farther’ and ‘further’ are two words in every day usage and it is very easy to get confused. These two words are simple too. However, an erroneous usage of the two might result in a serious derailment of the whole script affecting the very quality of the writing. Hence, it is imperative that one should be proficient enough in using these terms.
The term ‘farther’ denotes something that is distant in space. It notates physical advancement. However, ‘further’ as an adverb means an advancement in time. It also means something additional to the present. When you use ‘further’ as a verb, it refers to the act of advancing something. ‘Further’ is more abstract and cannot be considered as describing a physical or tangible entity.
As mentioned earlier, 'farther' refers to an actual physical distance. When you use farther, you can actually measure the extent of its distance. It can be seen, felt, and observed.
‘Further’ does not give any definite concept of distance that can be seen and learned. Because of its very abstract nature, the term can be used figuratively too. This can be explained with the example of a sentence. ‘She moved further in her career path with countless victories to her credit.’ In this sentence, the distance that the term ‘further’ is referring to cannot actually be measured in terms of centimeters or kilometers as it has been used metaphorically.
Now consider this sentence, ‘He furthered his education.’ The word is used as a verb, denoting an action. Further here means to add up, to increase or to enhance. As a verb, it is easy to discern.
1.Both ‘farther’ and ‘further’ are adverbs. But ‘further’ is also used as a verb.
2.’Farther’ denotes a physical entity. Thus when we describe something using the word, we can really observe, feel and measure for ourselves.
3.’Further’ is an abstract concept. It refers to distance when used as the adverb. However sometimes, the word is used figuratively or metaphorically. In such cases, the distance that the word is referring to cannot be really observed and measured. Hence the word encapsulates an element of abstractness in it.
4.The word ‘further’ as a verb means additional. It means adding up to something.
While MS-DOS and Windows are both Microsoft operating systems, MS-DOS uses a command line interface, while Windows uses a graphical user interface. This basically reflects the evolution of computer interfaces from text only to the manipulation of both text and icons.
The original MS-DOS operating system, first released in August 1981, used a solely text-based command language to allow users to work with, or interface with, their PC. Commands were typed into computers at a specific command prompt location on the computer screen using a standard keyboard. Commands had to be precise. Users had to specify what command they wanted, how they wanted it to run, and what program or system on the computer they wished to use. This required users to learn specific language and syntax rules to use their computer properly.
The Windows operating system, released in November 1985, used a graphical user interface instead. Input from the user usually came from using a computer mouse, and commands were run by clicking on representative icons with the virtual pointer controlled by the mouse. There was a small learning curve required to use Windows properly, but it was much easier to interact with graphical representations than text lines and commands, and no special programming language needed to be learned.
Mollusks in the Pacific Northwest have recently been identified by officials from the U.S. Fish and Wildlife Service as possible candidates for protection under the Endangered Species Act. There are 26 species of snails and slugs that will be evaluated over the next 12 months to determine whether federal protection is necessary for the little shelled creatures.
This decision comes three years after conservation groups sought protection for 32 different mollusk species that have been threatened by the loss of old-growth forests, where they reside.
"We're really pleased that these under-appreciated species have advanced toward the Endangered Species Act protection they need to survive," said Tierra Curry, a conservation biologist at the Center for Biological Diversity. "Mollusks may be small and often slimy, but they're a vital food source for other animals, key to nutrient cycling and important indicators of forest and watershed health. Saving them will help keep the Pacific Northwest's nature intact for future generations."
Snails and slugs are an integral part of the food chain in this region. They provide food for birds, reptiles, fish and other animals, and the shells of snails also serve as homes for other small creatures. However, their habitat is diminishing due to logging and water pollution. Their sensitivity to pollutants also indicates that the region may be in even more trouble if nothing is done to save animals in the Pacific Northwest.
- Original Caption Released with Image:
This is an image of the Kliuchevskoi volcano, Kamchatka, Russia, which began to erupt on September 30, 1994. Kliuchevskoi is the bright white peak surrounded by red slopes in the lower left portion of the image.
The image was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 25th orbit on October 1, 1994. The image shows an area approximately 30 kilometers by 60 kilometers (18.5 miles by 37 miles) that is centered at 56.18 degrees north latitude and 160.78 degrees east longitude. North is toward the top of the image. The Kamchatka volcanoes are among the most active volcanoes in the world. The volcanic zone sits above a tectonic plate boundary, where the Pacific plate is sinking beneath the northeast edge of the Eurasian plate. The Endeavour crew obtained dramatic video and photographic images of this region during the eruption, which will assist scientists in analyzing the dynamics of the current activity. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted and vertically received); blue represents the C-band (horizontally transmitted and vertically received).
The Kamchatka River runs from left to right across the image. An older, dormant volcanic region appears in green on the north side of the river. The current eruption included massive ejections of gas, vapor and ash, which reached altitudes of 20,000 meters (65,000 feet). New lava flows are visible on the flanks of Kliuchevskoi, appearing yellow/green in the image, superimposed on the red surfaces in the lower center. Melting snow triggered mudflows on the north flank of the volcano, which may threaten agricultural zones and other settlements in the valley to the north.
Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.
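Since frequency and wavelength are related by f = c/λ, the three quoted wavelengths can be converted to approximate operating frequencies in a few lines. This is only a sketch using a rounded speed of light; the instrument's exact center frequencies may differ slightly:

```python
# Frequencies of the three SIR-C/X-SAR radar bands from their wavelengths,
# using f = c / lambda (c rounded to 3.0e8 m/s, so values are approximate).
C = 3.0e8  # speed of light in m/s

bands = {"L": 0.24, "C": 0.06, "X": 0.03}  # wavelengths in metres

for name, wavelength in bands.items():
    freq_ghz = C / wavelength / 1e9
    print(f"{name}-band: {wavelength * 100:.0f} cm -> {freq_ghz:.2f} GHz")
```

This gives roughly 1.25 GHz for L-band, 5 GHz for C-band, and 10 GHz for X-band, which is why the three bands penetrate vegetation and soil to different depths.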
Giant German hippopotamuses wallowing on the banks of the Elbe are not a common sight. Yet 1.8 million years ago hippos were a prominent part of European wildlife, when mega-fauna such as woolly mammoths and giant cave bears bestrode the continent. Now palaeontologists, writing in the journal Boreas, believe that the changing climate during the Pleistocene may have forced Europe's hippos to shrink to pygmy sizes before driving them to warmer climes.
"Species of hippo ranged across pre-historic Europe, including the giant Hippopotamus antiquus, a huge animal which often weighed up to a tonne more than today's African hippos," said lead author Dr Paul Mazza from the University of Florence. "While these giants ranged across Spain, Italy and Germany, ancestors of the modern hippo, Hippopotamus amphibius, reached as far north as the British Isles."
Hippos were a constant feature of European wildlife for 1.4 million years, during the climatically turbulent Pleistocene, which witnessed 17 glacial events. Such environmental changes would not have been without cost, and Dr Mazza and co-author Dr Adele Bertini, also from Florence, investigated the impact this changing climate may have had.
The research focused on fossils from across Europe, ranging from the German town of Untermaßfeld in Thuringia, to Castel di Guido, north of Rome, and Collecurti and Colle Lepre in Italy's central-eastern Marche province. The fossils were compared to a database of measurements taken from modern African and fossil European hippos.
"The German fossil from Untermaßfeld is the largest hippo ever found in Europe, estimated to weigh up to 3.5 tonnes," said Mazza. "The Collecurti specimen was also large, but interestingly, even though it was close in both time and distance to the Colle Lepre specimen, the latter was 25% smaller. A final specimen, an old female from Ortona in central Italy, was smaller again: 17% smaller than the Collecurti fossil and approximately 50% lighter."
The authors found that a clear size threshold separated hippo specimens that hailed from different parts of the Pleistocene. The hippos from the early Pleistocene were the largest ever known, while smaller specimens emerged during the middle Pleistocene. Larger specimens briefly reappeared during the late Pleistocene.
"We believe the size difference was connected to the changing environmental conditions throughout the Pleistocene," said Mazza. "The Ortona hippo, the smallest of the specimens, lived in a climate where glacial cycles turned colder, while cold steppes replaced warm ones across the Mediterranean."
The drop in temperature and rainfall during the Pleistocene caused significant changes to plant life across Europe, resulting in an expansion of grassy steppes. Being grazers, hippos might have been expected to thrive in this new environment. Unexpectedly, they appeared to shrink, only re-attaining their past size during the warm periods of the late Pleistocene, when forests and woodland re-colonised the steppes.
During their time in Europe, hippos were forced to live in habitats shaped by a general environmental trend towards cooler and drier conditions. In response, hippos reached giant sizes during warmer and relatively more humid stages, but became smaller, and even very small, under less favourable environmental conditions.
"While hippos are normally considered indicators of warm, temperate habitats, this research shows that temperature was not the only controlling factor for their ancient ancestors," concluded Mazza. "Our research suggests other factors, such as food availability, were equally important. Appreciating the importance of factors beyond temperature is of great significance as we consider how species may adapt to future ecological and environmental changes."
Contact: Ben Norman
15 July 2005
Scientists have shown for the first time that carbon nanotubes make an ideal scaffold for the growth of bone tissue.
The new technique could change the way doctors treat broken bones, allowing them to simply inject a solution of nanotubes into a fracture to promote healing.
The success of a bone graft depends on the ability of the scaffold to assist the natural healing process. Artificial bone scaffolds have been made from a wide variety of materials, such as polymers or peptide fibres, but they have a number of drawbacks, including low strength and the potential for rejection in the body.
“Compared with these scaffolds, the high mechanical strength, excellent flexibility and low density of carbon nanotubes make them ideal for the production of lightweight, high-strength materials such as bone,” says Robert Haddon, Ph.D., a chemist at the University of California, Riverside, and lead author of the paper. Single-walled carbon nanotubes are a form of elemental carbon, like graphite or diamond, in which the atoms are arranged like a rolled-up tube of chicken wire. They are among the strongest known materials in the world.
“This research is particularly notable in the sense that it points the way to a possible new direction for carbon nanotube applications, in the medical treatment of broken bones,” says Leonard Interrante, Ph.D., editor of Chemistry of Materials and a professor in the department of chemistry and chemical biology at Rensselaer Polytechnic Institute in Troy, N.Y. “This type of research is an example of how chemistry is being used everyday, world-wide, to develop materials that will improve peoples’ lives.”
The researchers expect that nanotubes will improve the strength and flexibility of artificial bone materials, leading to a new type of bone graft for fractures that may also be important in the treatment of bone-thinning diseases such as osteoporosis.
The new technique may someday give doctors the ability to inject a solution of nanotubes into a bone fracture, and then wait for the new tissue to grow and heal.
Simple single-walled carbon nanotubes are not sufficient, since the growth of hydroxyapatite crystals (hydroxyapatite is the main mineral component of natural bone) relies on the ability of the scaffold to attract calcium ions and initiate the crystallization process. The researchers therefore carefully designed nanotubes with several chemical groups attached. Some of these groups assist the growth and orientation of the hydroxyapatite crystals, giving the researchers a degree of control over their alignment, while other groups improve the biocompatibility of the nanotubes by increasing their solubility in water.
“Researchers today are realizing that mechanical mimicry of any material alone cannot succeed in duplicating the intricacies of the human body,” Haddon says. “Interactions of these artificial materials with the systems of the human body are very important factors in determining clinical use.”
The research is still in the early stages, but Haddon says he is encouraged by the results. Before proceeding to clinical trials, Haddon plans to investigate the toxicology of these materials and to measure their mechanical strength and flexibility in relation to commercially available bone mimics.
Children can practice comparing numbers with this fun and easy to make math game for kids using an egg carton and ping pong balls. This math game can also be adapted for younger children to use for one-to-one counting practice, and it’s a great rainy day activity too!
Follow our Math Games for Kids Pinterest board!
Children always enjoy learning through play, so what better way to practice comparing numbers than with a fun ping pong tossing game! Each child will take turns tossing a ping pong ball into a numbered egg carton and comparing numbers using the terms “greater than” and “less than”. It’s a fun and easy gross motor activity that can be used at home or in the classroom!
We used our trusty old snowman ping pong balls from our snowman launching lever activity, but you can just use plain ping pong balls for the activity too. You’ll also find an optional free printable recording sheet near the end of the post you can use to keep score. (This post contains affiliate links.)
Comparing Numbers: Ping Pong Ball Toss Math Game
Materials for Math Game
- Old egg carton
- A few ping pong balls
- Wooden cube
- Colored markers
- Tempera paint and paintbrush (optional)
- Free printable scoresheet and pencil (optional)
How to Make the Math Game
1. Grab an old egg carton and cut off the top. (I painted mine blue with tempera paint to make it winter themed, but painting it is not necessary.)
2. Use a marker to number each compartment of your egg carton. I numbered my compartments one through twelve, but you could do larger numbers for kids that are ready to compare greater numbers.
I wanted to make sure the numbers showed up well so I tried to use contrasting colors for the numbering. For my egg carton painted in the darker blue, I used a white Sharpie paint marker for the numbers, and I just used a plain, black Sharpie for the egg carton in light blue.
3. Make your die using a wooden cube. On three of the sides write the word “greater” with a marker, and on the other three sides write the word “less”.
4. Optional: Decorate your ping pong balls with some colored Sharpies. I drew some snowmen faces on ours since it’s winter. (One of my friends suggested calling the game ‘Snowball Fight’ which is perfect with our snowmen ping pong balls!) You could also choose to decorate them as insects for Spring, fish for summer… the possibilities are endless!
5. Optional: Print out the free printable scoresheet.
How to Play the Comparing Numbers Math Game
1. Place the numbered egg carton on the floor. Each child gets his/her own ping pong ball. The players take turns tossing or bouncing their ping pong ball into the egg carton. (Kids can keep trying until the ball actually lands in one of the compartments.)
2. Once both ping pong balls have landed in the egg carton, children say their numbers out loud.
3. One child rolls the greater than/less than die. If it lands on the ‘greater’ side, the child with the greater number gets the point. If it lands on the ‘less’ side, the child whose number is less wins the point.
4. The balls are then retrieved from the egg carton and tossed again!
How to Use the Scoresheet
If using the scoresheet, children write their egg carton compartment numbers in the first boxes under ‘Player 1’ and ‘Player 2’. After rolling the die, the kids circle either the word ‘greater’ or ‘less’ depending on which is rolled. An ‘X’ is placed in the bottom box of the player who gets the point for that round. After 10 rounds, the number of wins for both players is added up and recorded in the bottom total boxes to see who is the winner.
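For anyone who likes to tinker, the round-and-scoresheet logic above is easy to simulate. The sketch below is a minimal Python model of the game; the tie rule (both balls landing on the same number scores no point) is our own assumption, since the original rules don't cover it:

```python
import random

def play_round(rng):
    """One round: each player lands in a compartment (1-12),
    then the greater/less die decides who gets the point."""
    p1, p2 = rng.randint(1, 12), rng.randint(1, 12)
    face = rng.choice(["greater", "less"])  # three faces each, so 50/50
    if p1 == p2:
        return p1, p2, face, None  # tie: no point (assumed house rule)
    if face == "greater":
        winner = 1 if p1 > p2 else 2
    else:
        winner = 1 if p1 < p2 else 2
    return p1, p2, face, winner

# Play the ten rounds of the printable scoresheet.
rng = random.Random()
score = {1: 0, 2: 0}
for _ in range(10):
    _, _, _, winner = play_round(rng)
    if winner is not None:
        score[winner] += 1
print("Final score:", score)
```

Changing `12` to a larger top number mirrors the earlier suggestion of numbering the carton with greater numbers for older kids.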
How to Adapt the Math Activity for Younger Children
- Children can bounce or toss ping pong balls into the egg carton and practice saying the numbers the balls landed in.
- For one-to-one correspondence practice, children can place one ping pong ball at a time into each compartment of the egg carton while counting to ten.
Differences Between a Monomer and a Polymer
Monomer vs Polymer
In chemistry classes, we are always taught the basics first – the atoms and molecules. Do you remember that molecules can be classified as monomers or polymers? In this article, we will be tackling the differences between a monomer and a polymer. Only a few differences exist between the two. For a quick overview, a monomer is a single small molecule made up of atoms. When monomers combine, they can form a polymer. In other words, a polymer consists of monomers which are bound together.
“Monomer” comes from the Greek word “monomeros.” “Mono” means “one” while “meros” means “part,” so “monomeros” literally means “one part.” For monomers to become polymers, they undergo a process called polymerization, which makes the monomers bond together. An example of a monomer is a glucose molecule. When several glucose molecules bond together, they form starch, and starch is a polymer.
Other monomers occur naturally as well. Aside from the glucose molecule, the amino acids are examples of monomers. When amino acids undergo polymerization, they form proteins, which are polymers. In the nuclei of our cells we also find monomers: the nucleotides. When nucleotides undergo polymerization, they become nucleic acid polymers, the key components of DNA. Another natural monomer is isoprene, which can polymerize into polyisoprene, the main component of natural rubber. Because monomers can be bonded together in this way, chemists and scientists can create new chemical compounds which can be useful for society.
We have mentioned earlier that a polymer consists of several monomers combined. A polymer is less mobile than a monomer because of its larger mass of combined molecules: the more units that are combined, the heavier the molecule becomes. A good example is ethane gas. At room temperature, its light molecules travel about freely. Double the carbon chain of ethane, however, and you get butane, which liquefies under modest pressure and does not have the same freedom of movement as ethane gas. Add still more units to the chain and you eventually get paraffin, a waxy solid. The more units we add to a chain, the more solid the substance becomes.
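The heavier-means-less-mobile idea can be made concrete with a little arithmetic on the alkane series mentioned above. This is a rough sketch; treating paraffin wax as a single alkane of about 25 carbons is an assumption, since real wax is a mixture of chain lengths:

```python
# Molar masses of straight-chain alkanes C_nH_(2n+2), a simple way to see
# how each added unit makes the molecule heavier (ethane -> butane -> wax).
C_MASS = 12.011  # g/mol for carbon
H_MASS = 1.008   # g/mol for hydrogen

def alkane_molar_mass(n_carbons):
    """Molar mass of the straight-chain alkane with n_carbons carbons."""
    return n_carbons * C_MASS + (2 * n_carbons + 2) * H_MASS

# ~25 carbons is an assumed, representative paraffin-wax chain length
for name, n in [("ethane", 2), ("butane", 4), ("paraffin wax", 25)]:
    print(f"{name}: C{n}H{2 * n + 2}, {alkane_molar_mass(n):.1f} g/mol")
```

Ethane comes out near 30 g/mol, butane near 58 g/mol, and the wax-length chain over 350 g/mol, matching the gas-to-liquid-to-solid progression described above.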
When polymers become solid enough, they have several applications in industries like the automobile industry, sports industry, manufacturing industry, and others. For example, polymers can be used as adhesives, foams, and coatings. We can also find polymers in several electronic devices and optical devices. Polymers are also useful in agricultural settings. Since polymers are composed of several chemical compounds, they can be used as fertilizers in order to stimulate plant growth better.
Since monomers continuously combine to form polymers, there are endless uses of polymers in our society. With the formed chemicals and materials, we can discover and develop more usable materials.
A monomer is made up of atoms and molecules. When monomers combine, they can form a polymer.
A polymer consists of monomers which are bound together.
The process of polymerization makes the monomers bond together.
Examples of monomers are glucose molecules. If they undergo the process of polymerization, they become starch, which is a polymer.
A polymer is less mobile than a monomer because of its larger mass of combined molecules. The more molecules combined, the heavier the polymer becomes.
And the more molecules we add to a polymer, the more solid it becomes.
The Native Languages of the Southeastern United States
by Nicholas A. Hopkins
Ver este informe en Español.
Table of Contents
The Southeast as a Cultural and Linguistic Area
The Native Languages of the Southeast
Algonquian, Iroquoian, and Siouan Languages
The Languages of the Lower Mississippi
The Languages of Peninsular Florida
The Prehistory of the Languages of the Southeast
The Comparative Method of Historical Linguistics
The Southeastern Linguistic Area
Muskogean and the Southeast
List of Figures
The Southeast as a Cultural and Linguistic Area
The Southeastern region of the United States is an area within which the aboriginal cultures and languages were quite similar to one another, as opposed to cultures and languages which lay outside the area. Within such a "culture area", languages and cultures have developed along similar lines due to shared circumstances and intergroup contact, and it is possible to make general statements which apply to all of the native groups, as opposed to groups which lie outside the area. Other such "culture areas" of North America include the Pacific Northwest and the Southwest (Kroeber 1939).
The core of the Southeast culture area (Kroeber 1939: 61-67, Swanton 1928) is the region that stretches from the Mississippi River east to the Atlantic, from the Gulf Coast to the border between Kentucky and Tennessee (or North Carolina and Virginia). The periphery of the Southeast includes territory as much as 200 miles west of the Mississippi (into Arkansas, Oklahoma, and East Texas), and as far north as the Ohio and Potomac rivers (including Kentucky, West Virginia and Virginia).
In archaeological terms (Willey 1966: 246 ff), the Southeast is part of the Eastern Woodlands area, which includes most of North America east of the Great Plains. This area is (or was) generally wooded, with mixed oak-pine woods predominating in the Southeast. Soils are reasonably good, and rivers and streams abound. The climate is temperate, even subtropical in its southern extremes (e.g., south Florida).
The Eastern Woodlands experienced some four major cultural traditions before European contact: the Big-Game Hunting, Archaic, Woodland, and Mississippian cultural traditions. Big-Game Hunting prevailed at the time of the earliest known human remains, and involved a dependence on Pleistocene mega-fauna (mammoths, etc.), as its name implies. In the early post-Pleistocene (after about 8000 BC), with the extinction of the mega-fauna, economies shifted to greater reliance on small game and increased utilization of wild plant foods (the Archaic tradition). About 1000 BC, the appearance of pottery and ceramic figurines, mortuary mounds and other earthworks, and especially plant cultivation (including maize) marks the transition to the Woodland cultural tradition. Finally, around 500 AD, maize agriculture intensified, large permanent towns were established, and the construction of organized complexes of large mounds around plazas, along with new vessel forms and decorations, marks the onset of the Mississippian culture. This tradition originated along the central and lower Mississippi River Valley (hence its name), and spread out from there over the next 1000 years, so that by 1400 AD centers of Mississippian culture were found throughout the Eastern Woodlands.
Not all of these cultural traditions were manifested in exactly the same way across the entire region, and at a given time neighboring societies might be practicing different traditions. One population might already have adopted Mississippian culture, but its neighbors had not. Since the cultural traditions do not define strict chronological periods, archaeologists prefer to use a distinct set of terms for the chronology of the region (Willey 1966):
Paleo-Indian (before 8000 BC), Big-Game Hunting;
Archaic Period (8000-1000 BC):
Early (8000-5000 BC), transition to Archaic culture;
Middle (5000-2000 BC), only Archaic culture;
Late (2000-1000 BC), only Archaic culture.
Burial Mound Period (1000 BC-AD 700):
Burial Mound I (1000-300 BC), transition to Woodland culture;
Burial Mound II (300 BC-AD 700), Woodland culture except in marginal areas.
Temple Mound Period (AD 700-AD 1700):
Temple Mound I (AD 700-1200), transition to Mississippian culture;
Temple Mound II (AD 1200-1700), Mississippian except in marginal areas.
The last of these archaeological stages, Temple Mound II, includes the period of early European contact, which begins in the Southeast in the first half of the sixteenth century with the expeditions of Ponce de León (1513), Narváez (1528), and Hernando de Soto (1539-1542). By 1700, the native societies of Florida and the Gulf Coast had been transformed by contact with the Spanish and the French, and English colonization had disrupted much of the rest of the Southeast. Some of the Europeans who visited Indian societies during this contact period left detailed accounts of Indian cultures (e.g., Le Page du Pratz published an eye-witness account of a Natchez funeral in 1758; du Pratz 1956). However, the rapid spread of Old World diseases even ahead of the visitors had altered many societies well before they were observed by Europeans, and even the earliest reports apparently do not do justice to the nature of aboriginal society.
Millennia of shared cultural development had resulted in fairly uniform culture across the Southeast by 1700 (except that there was a distinction between the culture of Mississippian towns and isolated rural populations that still followed Woodlands ways). There was no corresponding linguistic convergence. The known populations of the Southeast spoke languages of at least six distinct language families, as different from each other in their structures as English and Chinese. The core of the Southeast was occupied by speakers of the Muskogean languages, but other languages were spoken around the periphery and along major trade routes. The language families reported are the following (Crawford 1975: 5-6; locations given here are grossly simplified):
Algonquian:
Pamlico (northern Virginia)
Powhattan (Tidewater Virginia)
Shawnee (Kentucky and Tennessee)
Caddoan:
Caddo (Oklahoma, Arkansas, and East Texas)
Iroquoian:
Cherokee (western North Carolina)
Nottoway (southeastern Virginia)
Tuscarora (North Carolina)
Muskogean:
Alabama (central Alabama)
Apalachee (Tallahassee area)
Chickasaw (northern Mississippi, western Tennessee)
Choctaw (central Mississippi)
Creek (central Alabama and Georgia)
Hitchiti (central Georgia)
Koasati (northern Alabama)
Mikasuki (southern Georgia)
Seminole (central Georgia)
Siouan:
Biloxi (Gulf Coast Mississippi)
Catawba (South Carolina)
Ofo (western Mississippi)
Quapaw (eastern Arkansas)
Tutelo (western Virginia)
Woccon (tidewater North Carolina)
Isolates and unaffiliated languages:
Atakapa (Texas-Louisiana coasts)
Chitimacha (Mississippi delta, Louisiana)
Natchez (western Tennessee)
Tunica (northwestern Mississippi)
Yuchi (Georgia-North Carolina border)
Languages spoken in adjacent areas could be very different from one another, to the point of mutual unintelligibility, and it is surely the case that many dozens of languages died out before they were reported. To compensate for this great diversity in languages, there were several widely used trade languages, spoken as a second (or third) language by many people. The most famous of these is Mobilian (or Mobilian Jargon), a trade language based on Choctaw-Chickasaw, used up the Mississippi River and across the Gulf Coast as the language of commerce and travel. In the inland Southeast, Creek was the language preferred for the same purposes, and speakers of other Muskogean languages were likely to be bilingual in Creek. Around the Chesapeake Bay, other trade languages probably existed; Jersey and Delaware jargons developed to deal with the incoming Europeans, and something similar may have been used before contact.
Despite their gross differences, the languages of the Southeast share many characteristics which lead linguists to treat the area as a "linguistic area," analogous to a "culture area" (Campbell 1997: 341-344), and similar in nature to other linguistic areas, such as Mesoamerica, or the Indian subcontinent. Some of the features that define this area are phonological, having to do with the pronunciation of the languages. Some are grammatical (verb conjugations, etc.) and some are lexical (similar vocabularies and patterns of word formation). In any case, the defining features of the area are common to most of the languages within the area, and rare elsewhere in North America.
In phonology, bilabial and labiodental fricatives ([ɸ] and [f]), and the lateral spirant or "voiceless l" ([ɬ]), are characteristic Southeastern markers of speech. In grammar, "classificatory" verbs abound; a verb like to lie down, for instance, would have many distinct forms, one used for long, stick-like objects, another for round objects, another for flat sheet-like objects, and so on. Nouns are divided between those that are inalienably possessed (like body parts) and those that are not, and the inflection of these nouns for possession has parallels in verb conjugations that distinguish between degrees of "control" by the subject over the action. Some of these features are reported from other North American Indian languages, but the predominance of their presence and the specific ways they are manifested in the languages is typically Southeastern. Linguists have been able to pinpoint the areas of origin of some of these features, and treat their widespread occurrence across the area as the result of diffusion, the borrowing of language patterns, a process similar to the development of shared culture that is seen in the archaeological and ethnographic evidence.
In summary, the Southeast is an area of rather similar geography that has been occupied for a long time by societies that have developed along the same lines, in direct or indirect contact with one another. These societies speak a large number of languages that were originally much less like each other than they are now. In both language and culture, then, it is proper to treat the Southeast as a distinct area, within which societies all share a large number of traits that collectively distinguish them from the societies of other areas.
Click to download the report in PDF format:
The Native Languages of the Southeastern United States (1.03 MB)
by Nicholas A. Hopkins
Before Sputnik - The First Man-made Object in Space
Technology Articles 9/2/11
By: Chris Capps
When we look at the incredible history of human space travel, the first man-made object propelled into space has a very surprising origin, and the story doesn't start with the Soviet Sputnik program. In fact, the first object ever launched into space was launched entirely by accident. And not only may it still be out there, it may have traveled farther than any other known man-made object heading out of our solar system, and it bears with it evidence of mankind's most terrifying technological achievement.
An urban legend has grown up around the incident. The unofficial version of the story holds that it was a manhole cover accidentally left in place during a nuclear test. In reality it was a metal plate, and it had been left on purpose. But that's where the urban legend ends and the truly strange story begins. And the effects of this experiment may well remain long after humanity has left Earth behind, for better or worse.
It all began with a massive nuclear explosion during an underground test. The incredible force of the blast immediately vaporized a column of water beneath the plate, sending the plate upward at a speed estimated at more than four times the Earth's escape velocity. At that speed the plate would not merely reach space; it would get there faster than anything before it. And since the void of space offers virtually no resistance, the plate could be expected to keep going long afterward. Most people still imagine that the farthest man-made objects ever sent into space are the Voyager probes, but this simple metal plate is thought to have not only exceeded Voyager's distance but to be capable of outrunning any spacecraft or probe fired into space for several centuries. And it was fired into the void entirely by accident.
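For scale, the escape velocity the article leans on can be estimated from the standard formula v = sqrt(2GM/R). This is only a back-of-the-envelope sketch with standard constants; it ignores atmospheric drag, which for the real plate would have been enormous:

```python
import math

# Rough sanity check of the article's claim: Earth's escape velocity is
# about 11.2 km/s, so "four times escape velocity" is roughly 45 km/s.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of Earth, kg
R_EARTH = 6.371e6   # mean radius of Earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # m/s
print(f"escape velocity ~ {v_escape / 1000:.1f} km/s")
print(f"four times that ~ {4 * v_escape / 1000:.1f} km/s")
```

Anything moving at tens of kilometres per second, if it survived the atmosphere, would indeed leave every chemical rocket of the era far behind.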
Operation Plumbbob, the test series during which the blast occurred, was conducted in 1957 - the same year as the Soviet Union's Sputnik launch. But while Sputnik was launched using the conventional rockets that would later become the staple of the space program, Plumbbob fired the metal plate into space in a way more akin to a cannon being fired.
If the almost 2,000 lb metal plate survived the Earth's atmosphere as well as it survived the force of the blast itself, then it is not only serving as an unintentional message in a bottle to the stars, but it may also carry evidence of mankind's nuclear ambitions in the form of stowaway ionizing radiation. If the straight line the object travels in is ever traced back to its source, a passing alien race may find the beings that mastered the destructive force of the atom. But will they find that the same technology that sent the object into space in the first place eventually destroyed us as well? The choice is ours.
- Pest Type: Insect
- Crops Affected: Corn
- Scientific Name: Anuraphis maidiradicis
- Pest Order: Homoptera
Adults are typically wingless, blue-green aphids with black heads and black or reddish-brown eyes. During the egg-laying period, the female has a gray body with a pink abdomen and a white, powdery coating. The various adult forms range from 1.5-2 mm long. Eggs are dark green, oval, elongate, and less than 1 mm long. Nymphs are pale green with red eyes, resemble the adults in shape, and measure 0.3-2 mm long.
The corn root aphids pierce roots with their needle-like mouthparts and extract sap. As a result of this feeding, the foliage soon develops a characteristic yellowish to reddish tinge. Heavily infested seedlings become stunted, rarely growing taller than 25 cm. In addition, infested fields are likely to harbor many anthills. Damage is most likely in dry years.
The life cycle for these aphids consists of egg, nymph, and adult. Throughout their life cycle, corn root aphids are highly dependent upon ants, especially cornfield ants. In most areas, the aphids overwinter as eggs deep within the ant nest. In March or April, ants carry newly hatched nymphs to the roots of corn or weeds, particularly dock and smartweed. If corn seedlings are available, aphids are transferred to them. Later the ants feed on the aphids' honeydew secretions. First generation aphid nymphs feed on roots for 2-3 weeks before developing into wingless female adults. Bypassing the egg-laying stage, these mature aphids soon give birth to 40-50 live nymphs. After several generations, winged female aphids appear and fly to nearby fields, especially corn or cotton. After landing on ant hills, they are carried to the roots by ants. Eggs are laid for overwintering and are carried by ants deep into their nests. In no-till corn, 10-22 generations of corn root aphids are common per year.
Corn root aphid infestations can be prevented by a variety of cultural practices. Control of weedy hosts in the spring eliminates breeding and feeding sites for some of the first aphid generation. Cultural practices which stimulate rapid corn root growth greatly reduce early stunting by aphids. Deep tillage at least every other year weakens ant colonies and thereby decreases the chances that overwintering aphid eggs will survive. Crop rotation also prevents the buildup of large ant and aphid populations in any one field. If ant hills are present prior to planting, an insecticide application at planting may be advisable. If corn root aphids are found, consult local advisers for local thresholds and control recommendations.
Science for a Sustainable Future
Creating knowledge and understanding through science equips us to find solutions to today’s acute economic, social and environmental challenges and to achieve sustainable development and greener societies. As no one country can achieve sustainable development alone, international scientific cooperation contributes not only to scientific knowledge but also to building peace.
UNESCO works to assist countries to invest in science, technology and innovation (STI), to develop national science policies, to reform their science systems and to build capacity to monitor and evaluate performance through STI indicators and statistics taking into account the broad range of country-specific contexts.
Science policies are not enough. Science and engineering education at all levels and research capacity need to be built to allow countries to develop their own solutions to their specific problems and to play their part in the international scientific and technological arena.
Linking science to society, public understanding of science and the participation of citizens in science are essential to creating societies where people have the necessary knowledge to make professional, personal and political choices, and to participate in the stimulating world of discovery. Indigenous knowledge systems, developed through long and close interaction with nature, complement knowledge systems based on modern science.
Science and technology empower societies and citizens but also involve ethical choices. UNESCO works with its member States to foster informed decisions about the use of science and technology, in particular in the field of bioethics.
Water is fundamental for life, and ensuring water security for communities worldwide is essential to peace and sustainable development. The scientific understanding of the water cycle, of the distribution and characteristics of surface water and groundwater, and of urban water all contributes to the wise management of freshwater for a healthy environment and to responding to human needs.
Scientific knowledge of the Earth’s history and mineral resources, knowledge of ecosystems and biodiversity, and the interaction of humans with ecosystems are important to help us understand how to manage our planet for a peaceful and sustainable future.
The condition of stored grain is determined (Lacey, 1988) by a complex interaction between the grain, the macro- and micro-environment and a variety of organisms (including microorganisms, insects, mites, rodents and birds) which may attack it.
Grain provides an abundant source of nutrients, and the natural consequence of the type of stable ecosystem described above will normally be spoilage (biodeterioration) of the grain, caused by the organisms.
The extent of contamination by moulds is largely determined by the temperature of the grain and the availability of water and oxygen. Moulds can grow over a wide range of temperatures, from below freezing to temperatures in excess of 50°C. In general, for a given substrate, the rate of mould growth will decrease with decreasing temperature and water availability. Moulds utilise intergranular water vapour, the concentration of which is determined by the state of the equilibrium between free water within the grain (the grain moisture content) and water in the vapour phase immediately surrounding the grain particle. The intergranular water concentration is described either in terms of the equilibrium relative humidity (ERH, %) or water activity (aw). The latter describes the ratio of the vapour pressure of water in the grain to that of pure water at the same temperature and pressure, while the ERH is equivalent to the water activity expressed as a percentage. For a given moisture content, different grains afford a variety of water activities and, consequently, support differing rates and types of mould growth. Typical water activities necessary for mould growth range from 0.70 to 0.90.
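The water activity/ERH relationship described above can be sketched in a few lines of code. The snippet below is an illustrative example only; the 0.70 growth threshold is the lower bound quoted in the text, and the function names are my own, not from the source.

```python
# Illustrative sketch of the water-activity (aw) / equilibrium relative
# humidity (ERH) relationship. The 0.70 threshold is the lower bound of
# the 0.70-0.90 range the text quotes for mould growth.

def erh_from_aw(aw: float) -> float:
    """ERH (%) is simply the water activity expressed as a percentage."""
    return aw * 100.0

def mould_growth_possible(aw: float) -> bool:
    """True once aw reaches the typical lower bound for mould growth."""
    return aw >= 0.70

# A grain lot at aw = 0.75 corresponds to an ERH of 75% and is at risk.
print(erh_from_aw(0.75), mould_growth_possible(0.75))  # 75.0 True
```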
The interaction between grain temperature and moisture content also affects the extent of mould colonisation. The passage of water from the grain into the vapour phase is encouraged by an increase in temperature. Consequently, for a given moisture content, the water activity, and the propensity for mould growth, will increase with temperature. Maize, for example, can be relatively safely stored for one year at a moisture level of 15 per cent and a temperature of 15°C. However, the same maize stored at 30°C will be substantially damaged by moulds within three months.
Insects and mites (arthropods) can, of course, make a significant contribution towards the biodeterioration of grain, through the physical damage and nutrient losses caused by their activity. They are also important, however, because of their complex interaction with moulds and, consequently, their influence on mould colonisation.
In general, grain is not infested by insects below a temperature of 17°C, whereas mite infestations can occur between 3 and 30°C and above 12 per cent moisture content. The metabolic activity of insects and mites causes an increase in both the moisture content and temperature of the infested grain. Arthropods also act as carriers of mould spores, and their faecal material can be utilised as a food source by moulds. Furthermore, moulds can provide food for insects and mites but, in some cases, may also act as pathogens.
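The temperature and moisture thresholds just given lend themselves to a simple classifier. The following is a hypothetical sketch using only the figures quoted above; real infestation risk depends on many more factors.

```python
# Hypothetical risk classifier built from the thresholds in the text:
# insects generally do not infest grain below 17C; mites can infest
# between 3 and 30C when moisture content exceeds 12 per cent.

def infestation_risks(temp_c: float, moisture_pct: float) -> list:
    """Return which arthropod infestations the conditions permit."""
    risks = []
    if temp_c >= 17:
        risks.append("insects")
    if 3 <= temp_c <= 30 and moisture_pct > 12:
        risks.append("mites")
    return risks

print(infestation_risks(20, 14))  # ['insects', 'mites']
print(infestation_risks(10, 11))  # []
```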
Another important factor that can affect mould growth is the proportion of broken kernels in a consignment of grain. Broken kernels, caused by general handling and/or insect damage, are predisposed to mould invasion of the exposed endosperm. It has been estimated, for example, that increasing the proportion of broken grains by five per cent will reduce the storage-life of that consignment by approximately one order of magnitude; that is from, say, 150 to 15 days.
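The order-of-magnitude rule in the worked example above (150 days falling to 15 days for a five per cent increase in broken grains) implies a simple exponential relationship. The sketch below extrapolates from that single example and should be read as a rule of thumb, not a validated model.

```python
# Rule-of-thumb sketch: each 5 percentage-point increase in broken
# kernels reduces storage life by roughly one order of magnitude,
# per the worked example in the text (150 days -> 15 days).

def storage_life_days(base_days: float, extra_broken_pct: float) -> float:
    """Estimated storage life after increasing the broken-kernel share."""
    return base_days * 10.0 ** (-extra_broken_pct / 5.0)

print(storage_life_days(150, 5))  # 15.0, reproducing the text's example
```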
Mould growth is also regulated by the proportions of oxygen, nitrogen and carbon dioxide in the intergranular atmosphere. Many moulds will grow at very low oxygen concentrations; a halving of linear growth, for example, will only be achieved if the oxygen content is reduced to less than 0.14 per cent. Interactions between the gases and the prevailing water activity also influence mould growth.
Moulds and mycotoxins
The interactions described above, within granular ecosystems, will support the growth of a succession of micro-organisms as the nutrient availability and microenvironment changes with time. In the field, grains are predominantly contaminated by those moulds requiring high water activities (at least 0.88 aw) for growth, whereas stored grains will support moulds which grow at lower moisture levels. The rate of mould growth is also determined by the ability of the micro-organism to compete with other species. Some species, including those of Aspergillus, Penicillium and Fusarium, can occur both in the field and in storage.
Secondary metabolites are those compounds, produced by living organisms, which are not essential for growth. Some secondary metabolites produced by moulds are highly toxic to animals, humans and plants. These so-called ‘mycotoxins’ have been extensively studied since 1961, when a group of highly toxic Aspergillus flavus toxins – the aflatoxins – were isolated from a consignment of groundnut meal which had been imported into the UK (Coker, 1979).
Any activity which disturbs the stability of an ecosystem will increase the production of secondary metabolites, including mycotoxins. Such activities include the widespread use of fertilizers and pesticides, high yielding plant varieties and the cultivation of a limited number of plant species with restricted genetic variation. The normal practices of harvesting, drying and storage also, of course, significantly disturb the ecosystems of grains established before harvest.
The major mycotoxin-producing moulds include (Miller, 1991) certain Aspergillus, Fusarium and Penicillium species. Toxigenic (mycotoxin-producing) Aspergillus moulds can occur both before and after harvest, whereas Fusarium and Penicillium moulds occur predominantly before and after harvest respectively. In general, Aspergillus is associated with the tropics and Penicillium with temperate climates, whereas Fusarium moulds occur worldwide. However, because of the complexity and variety of ecosystems supporting mould growth in grains, the nature and extent of the worldwide occurrence of moulds and mycotoxins cannot, as yet, be confidently defined. About 300 mycotoxins have been reported, produced by a wide variety of moulds. A few of the major moulds and mycotoxins are listed in Table 2.1 and discussed in the following sections of this Chapter.
Table 2.1. The major moulds and mycotoxins.
Aflatoxins B1, B2, G1, G2
Aflatoxins B1, B2
The significance of mycotoxins
Mycotoxins have been implicated in a range of human and/or animal diseases and occur in a variety of grains. The ingestion of mycotoxins can produce both acute (short-term) and chronic (medium/long-term) toxicities ranging from death to chronic interference with the function of the central nervous, cardiovascular and pulmonary systems, and of the alimentary tract. Some mycotoxins are carcinogenic, mutagenic, teratogenic and immunosuppressive. Aflatoxin B1, for example, is one of the most potent hepatocarcinogens known.
The mycotoxins have attracted worldwide attention, over the past 30 years, firstly because of their perceived impact on human health, secondly because of the economic losses accruing from condemned foods/feeds and decreased animal productivity and, thirdly, because of the serious impact of mycotoxin contamination on internationally traded commodities. It is estimated, for example, that the cost of managing the mycotoxin problem on the North American continent is approximately $5 billion.
The aflatoxin-producing moulds Aspergillus flavus and A. parasiticus occur widely, on inadequately dried food and feed grains, in sub-tropical and tropical climates throughout the world. Pre-harvest mould growth, and aflatoxin production, is encouraged by insect damage, mechanical damage, drought stress and excessive rainfall. The aflatoxins may occur, both before and after harvest, on virtually any food or feed which supports fungal growth, including cereals, oilseeds and edible nuts. Maize, groundnuts, cottonseed, oil-palm kernels and copra are particularly associated with the occurrence of the aflatoxins. The very substantial international trade in these commodities serves to amplify the worldwide nature of the aflatoxin problem.
The ingestion of aflatoxin B1-contaminated animal feed, by dairy cattle, can result in the presence of aflatoxin M1 (Figure 2.1e) – a metabolite of aflatoxin B1 – in milk. This is an issue of considerable importance to public health, given the frequent consumption of milk and dairy products by infants.
Aflatoxin B1 has been confirmed as a highly potent human carcinogen, whereas the carcinogenicity of the aflatoxins G1 (Figure 2.1c) and M1 has been confirmed only in experimental animals.
The acute toxicity of the aflatoxins has been demonstrated in both animals and man. The outbreak of ‘Turkey X’ disease in the UK, in the early 1960s, was associated with the death of thousands of turkeys, ducklings and other domestic animals which had received a diet containing aflatoxin-contaminated groundnut meal. Many human fatalities occurred (Anon, 1993(a)) in India, in 1974, when unseasonal rains and a scarcity of food prompted the consumption of heavily aflatoxin-contaminated maize. Acute aflatoxicosis, also caused by the consumption of contaminated maize, caused fatalities in Kenya in 1982.
The chronic effects, caused by the consumption of low dietary levels (parts per billion) of the aflatoxins, on the health and productivity of domestic animals are well established. Reduced weight gain has been reported (Anon, 1989), for example, in cattle, pigs and poultry; reduced milk yield in cows; and reduced feed conversion in pigs and poultry. Low levels of aflatoxin have been associated with an increased susceptibility to disease in poultry, pigs and cattle. Vaccine failures have also been reported. If similar immunosuppressive effects are manifested in humans, it is possible that the aflatoxins (and other mycotoxins) could be significantly enhancing the incidence of human disease in developing countries.
The trichothecenes comprise a large group of mycotoxins, produced by a variety of Fusarium moulds. The current discussion will be limited to the two trichothecenes – T-2 toxin and deoxynivalenol – which occur naturally, in significant quantities, in cereal grains.
(i) T-2 toxin
F. sporotrichioides, the major producer of T-2 toxin, occurs mainly in temperate to cold areas and is associated with cereals which have been allowed to overwinter in the field (Anon, 1993(b)). T-2 toxin has been implicated in two outbreaks of acute human mycotoxicoses. The first occurred in Siberia (in the former USSR), during the Second World War, producing a disease known as ‘alimentary toxic aleukia’ (ATA). Thousands of people, who had been forced to eat grain which had overwintered in the field, were affected and entire villages were eliminated. The symptoms of ATA included fever, vomiting, acute inflammation of the alimentary tract, anaemia, circulatory failure and convulsions. Trichothecene poisoning also occurred in Kashmir, India, in 1987 and was attributed to the consumption of bread made from mouldy flour. The major symptom was abdominal pain together with inflammation of the throat, diarrhoea, bloody stools and vomiting. T-2 toxin was isolated from the flour together with other trichothecenes, namely deoxynivalenol, nivalenol and deoxynivalenol monoacetate (Figures 2.2b, 2.2c and 2.2d respectively).
T-2 toxin has been implicated in the occurrence of haemorrhagic toxicoses (mouldy maize toxicoses) in farm animals. Oral lesions, severe oedema of the body cavity, neurotoxic effects and, finally, death have been reported in poultry after the ingestion of feed contaminated with T-2.
The most significant effect of T-2 toxin, and other trichothecenes, may be the immunosuppressive activity, which has been clearly demonstrated in experimental animals. The effect of T-2 toxin on the immune system is probably linked to the inhibitory effect of this toxin on the biosynthesis of macromolecules.
There is limited evidence that T-2 toxin may be carcinogenic in animals.
(ii) Deoxynivalenol (Figure 2.2b)
F. graminearum occurs worldwide and is the most important producer of deoxynivalenol (DON) (Anon, 1993(c)). The outbreaks of emetic (and feed refusal) syndromes in farm animals, produced by the presence of DON in their diets, have resulted in the trivial name, vomitoxin, being attributed to this mycotoxin.
DON is probably the most widely distributed Fusarium mycotoxin occurring in a variety of cereals, particularly maize and wheat. As stated above, DON has been implicated in a human mycotoxicosis, in India, in combination with T-2 toxin and other trichothecenes. Other outbreaks of acute human mycotoxicoses, caused by the ingestion of DON and involving large numbers of people, have occurred in rural Japan and China. The Chinese outbreak, in 1984-85, resulted from the ingestion of mouldy maize and wheat. The onset of symptoms occurred within five to thirty minutes and included nausea, vomiting, abdominal pain, diarrhoea, dizziness and headache. Another F. graminearum toxin, zearalenone (see below), was also isolated from the mouldy foodstuff.
The immunosuppressive effect of naturally occurring concentrations of DON has been reported. There is inadequate evidence in humans and experimental animals, however, for the carcinogenicity of DON. DON is not transferred into milk, meat or eggs.
F. graminearum is also the most important producer of zearalenone, a widely-occurring mycotoxin which is responsible for many outbreaks of oestrogenic syndromes amongst farm animals (Marasas, 1991).
The occurrence of zearalenone in maize has been responsible for outbreaks of hyperestrogenism in animals, particularly pigs, characterised by vulvar and mammary swelling, uterine hypertrophy and infertility.
As described above, zearalenone was isolated from mouldy cereals involved in an outbreak of acute human mycotoxicosis in China.
There is limited evidence in experimental animals, and inadequate evidence in humans, for the carcinogenicity of zearalenone. It is not transmitted from feed to milk to any significant extent.
The fumonisins are a group of mycotoxins which have been characterised comparatively recently (Anon, 1993(d)). They are produced by F. moniliforme which occurs worldwide and is one of the most prevalent fungi associated with maize.
To date, only the fumonisins FB1 and FB2 appear to be toxicologically significant. The occurrence of FB1 in cereals, primarily maize, has been associated with serious outbreaks of leukoencephalomalacia (LEM) in horses and pulmonary oedema in pigs. LEM is characterised by liquefactive necrotic lesions of the white matter of the cerebral hemispheres and has been reported in many countries, including the USA, Argentina, Brazil, Egypt, South Africa and China. FB1 is also toxic to the central nervous system, liver, pancreas, kidney and lung in a number of animal species. FB2 is hepatotoxic in rats.
The incidence of F. moniliforme in domestically-produced maize has been correlated with human oesophageal cancer rates in the Transkei, southern Africa and in China. The levels of fumonisins in domestically-produced maize have been reported as similar to those levels which produced LEM and hepatotoxicity in animals.
Currently, there is inadequate evidence for the confirmation of the carcinogenicity of the fumonisins in humans. There is limited evidence, in animals, for the carcinogenicity of FB1 but inadequate evidence for the carcinogenicity of FB2. Data are not available for the transmission of these toxins into milk, meat and eggs.
Ochratoxin A is produced (Pitt and Leistner, 1991) by only one species of Penicillium, P. verrucosum, probably the major producer of this mycotoxin in cooler regions. Amongst the aspergilli, Aspergillus ochraceus is the main source of ochratoxin A.
Ochratoxin A has been mainly reported in wheat and barley growing areas in temperate zones of the northern hemisphere. It does, however, occur in other commodities including maize, rice, peas, beans and cowpeas; developing country origins of ochratoxin A include Brazil, Chile, Egypt, Senegal, Tunisia, India and Indonesia.
A correlation between human exposure to ochratoxin A and endemic nephropathy (a fatal, chronic renal disease occurring in limited areas of Bulgaria, the former Yugoslavia and Romania) has been suggested. A causative link, however, has yet to be confirmed.
Ochratoxin A produces renal toxicity, nephropathy and immunosuppression in several animal species.
Although there is currently inadequate evidence in humans for the carcinogenicity of ochratoxin A, there is sufficient evidence in experimental animals. Ochratoxin A has been found in significant quantities in pig meat, as a result of its transfer from feeding stuffs.
The interaction of mycotoxins
The complex ecology of mould growth and mycotoxin production can produce mixtures of mycotoxins in food and feed grains, particularly in cereals. The co-occurrence of mycotoxins can arise through a single mould producing more than one toxin and simultaneous contamination by two or more moulds, from the same or different species.
The co-occurrence of the Fusarium graminearum toxins deoxynivalenol and zearalenone with the F. moniliforme toxins fumonisin B1 and B2, for example, has been reported (Miller, 1991) in southern Africa. Other naturally occurring combinations of Fusarium mycotoxins include T-2/diacetoxyscirpenol (DAS), deoxynivalenol/DAS and DAS/fusarenone (Figure 2.6b). Naturally occurring combinations of mycotoxins produced by more than one genus include aflatoxins/trichothecenes (Argentina), aflatoxins/zearalenone (Brazil, Indonesia), aflatoxins/ochratoxin A and aflatoxins/cyclopiazonic acid (Figure 2.6c)/zearalenone (Indonesia), and aflatoxins/fumonisins (USA). Given the worldwide distribution of the Fusarium moulds, the presence of combinations of Fusarium mycotoxins and aflatoxins in foods and feeds of developing country origin should be expected.
The co-occurrence of mycotoxins can affect both the level of mycotoxin production and the toxicology of the contaminated grain. The presence of trichothecenes may increase the production of aflatoxin in stored grain, for example, whereas some naturally occurring combinations of Fusarium toxins are synergistic in laboratory animals. To date, little is known about this particularly important area of mycotoxicology. The significance of mycotoxins in human disease will become more clearly defined through the continued identification of biomarkers, present in blood and/or urine, which reflect the levels of recent dietary exposure to mycotoxins. Aflatoxin, covalently bound to albumin in peripheral blood, and the urinary aflatoxin B1-guanine adduct have both been used, for example, to monitor aflatoxin ingestion.
Studies using the aflatoxin-albumin adduct have demonstrated the significantly higher exposure that occurs in Gambia, Kenya and the Guangxi region of China, compared with Thailand and Europe. In Europe, the levels of biomarker were below the detection limit.
The control of mycotoxins
Since the occurrence of mycotoxins is a consequence of biodeterioration, it follows that the mycotoxin problem is best addressed by controlling those agents – temperature, moisture and pests – which encourage spoilage.
The pre-harvest control of the agents of biodeterioration is somewhat compromised by Man’s inability to control the climate! Both insufficient and excessive rainfall during critical phases of crop development can, for example, lead to mould contamination and mycotoxin production. The very substantial economic losses attributed to mycotoxins, on the North American continent, clearly illustrate the difficulties associated with the prevention of contamination, even in wealthy, developed nations.
Considerable effort has been expended on the development of crop strains which are resistant to mould growth and/or mycotoxin production. Breeding programmes have focused, for example, on the development of Aspergillus/aflatoxin resistant varieties of maize and groundnuts, with limited success. It has been suggested that wheat has three types of resistance to Fusarium graminearum; resistance to the initial infection, resistance to the spread of the infection and resistance to mycotoxin (deoxynivalenol) production. Attempts to exploit the resistance to mycotoxin production (through either the inhibition of synthesis or chemical degradation) may hold the most potential because of the limited number of genes which control this process.
The post-harvest handling of grains does, however, present many more opportunities for controlling mycotoxin production. Although many small farmers will not have access to artificial drying equipment, the importance of the utilisation of effective drying and storage regimes cannot be overemphasised, and is covered extensively in later chapters. Drying to moisture levels which will ensure safe storage in tropical climates is especially important when grains are shipped from temperate to tropical climates.
However, despite the best efforts of the agricultural community, mycotoxins will continue to be present in a wide range of foods and feeds. Consequently, strategies are required for the removal of mycotoxins from grains. Currently, two approaches are utilised; namely, the identification and segregation of contaminated material and, secondly, the destruction (detoxification) of the mycotoxin(s).
The Segregation of Contaminated Grains
In the first instance, the identification and segregation of contaminated consignments is pursued through the implementation of quality control procedures by exporters, importers, processors and regulators. The consignment is accepted or rejected on the basis of the analysis of representative samples of the food or feed. Acceptable levels of mycotoxin contamination are specified by individual customers, commercial agreements and regulators. Currently, over 50 countries regulate against the aflatoxins; 5 parts per billion (µg/kg) is the most common maximum acceptable level. Aflatoxin M1 in dairy products is regulated in at least 14 countries, the tolerances for infant diets being 0.05-0.5 ppb in milk. Regulations exist for other mycotoxins including, for example, zearalenone (1 mg/kg in grains; the former USSR), T-2 toxin (0.1 mg/kg in grains; the former USSR) and ochratoxin A (150 ppb in food, 100-1000 ppb in feed; numerous countries). Guidelines, advisory levels and ‘official tolerance levels’ for deoxynivalenol also exist in some countries. The guideline in Canada, for example, refers to 2 mg/kg in uncleaned soft wheat, 1 mg/kg in infant foods and 1.2 mg/kg in uncleaned staple foods calculated on the basis of flour or bran. In the USA, 4 mg/kg is advised for wheat and wheat products used as animal feeds.
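An accept/reject decision of the kind described above can be expressed as a simple screening check. The sketch below is illustrative only: the limits dictionary uses a few of the figures quoted in the text (in ppb, i.e. µg/kg) and is far from a complete regulatory table, and the function names are assumptions.

```python
# Hedged sketch: screening analysis results against a few of the
# maximum acceptable levels quoted in the text (ppb = ug/kg).
# This is an illustrative subset, not a complete regulatory table.

LIMITS_PPB = {
    "aflatoxin_total": 5.0,      # most common maximum for aflatoxins
    "zearalenone": 1000.0,       # 1 mg/kg in grains (former USSR)
    "t2_toxin": 100.0,           # 0.1 mg/kg in grains (former USSR)
    "ochratoxin_a_food": 150.0,  # ppb in food (numerous countries)
}

def accept_consignment(results_ppb: dict) -> bool:
    """Accept only if every analysed toxin is at or below its limit."""
    return all(results_ppb[t] <= LIMITS_PPB[t]
               for t in results_ppb if t in LIMITS_PPB)

sample = {"aflatoxin_total": 3.2, "ochratoxin_a_food": 40.0}
print(accept_consignment(sample))  # True
```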
The mycotoxin content of grains can be further reduced during processing. Automatic colour sorting, often in combination with manual sorting, is widely used to segregate kernels of abnormal appearance (which are considered more likely to contain aflatoxin) during the processing of edible grade groundnuts. Mycotoxins can also be concentrated in various fractions produced during the milling process. Zearalenone and deoxynivalenol, for example, are reportedly concentrated in the bran fraction during the milling of cereals. It can be argued, however, that all fractions will contain mycotoxins if the original grain is heavily contaminated. Ochratoxin A appears to be reasonably stable to most food processes. In general, the stability of mycotoxins during processing will depend upon a number of factors including grain type, level of contamination, moisture content, temperature and other processing agents.
A further segregation process involves the removal of aflatoxin, from animal feeds, after ingestion. Here, mycotoxin binding agents – hydrated sodium calcium aluminosilicate, zeolite, bentonite, kaolin, spent canola oil bleaching clays – included in the diet formulation, reportedly remove aflatoxin, by adsorption from the gut.
The Detoxification of Mycotoxins
Ammonia, as both an anhydrous vapour and an aqueous solution, is the detoxification reagent which has attracted (Park et al, 1988) the widest interest and which has been exploited commercially, by the feed industry, for the destruction of aflatoxin. Commercial ammonia detoxification (ammoniation) facilities exist in the USA, Senegal, France and the UK, primarily for the treatment of groundnut cake and meal. In the USA, cottonseed products are treated in Arizona and California whilst maize is ammoniated in Georgia, Alabama and North Carolina. Commercial ammoniation involves the treatment of the feed, with ammonia, at elevated temperatures and pressures over a period of approximately 30 minutes. On-farm procedures, as practised with cottonseed in Arizona, involve spraying with aqueous ammonia followed by storage at ambient temperature, for approximately two weeks, in large silage bags.
The nature of the reaction products of the ammoniation of aflatoxin is still poorly understood. However, many studies have been performed, on both isolated ammoniation reaction products and treated feedingstuffs, in an attempt to define the toxicological implications of ammoniation. Very extensive feeding trials have been performed with a variety of animals including trout, rats, poultry, pigs and beef and dairy cattle. The effect of diets containing ammoniated feed has been determined by monitoring animal growth and organ weights together with haematological, histopathological and biochemical parameters. The results of these studies, combined with the practical experience of commercial detoxification processes, strongly indicate that the ammonia detoxification of aflatoxin is a safe process. However, the formal approval of the ammoniation process by the USA Food and Drug Administration is still awaited.
Commercial processes have not yet been developed for the detoxification of other mycotoxins.
Sampling and analysis
The control of the mycotoxin problem comprises (a) the identification of the nature and extent of the problem (by the implementation of surveillance studies), (b) the introduction of improved handling procedures, which address the identified problems, and (c) the regular monitoring of foods and feeds as part of a quality control programme.
The operation of both surveillance studies and quality control programmes requires efficient sampling and analysis methods.
Since the distribution of aflatoxins (and, presumably, other mycotoxins) in grains is highly skewed, it is important that great care is taken to collect a representative sample (Coker and Jones, 1988). There is still considerable debate as to the appropriate size of such samples. In general, the sample size should increase with increasing particle size; samples of whole groundnuts, maize and rice, for example, should be of the order of 20, 10 and 5 kg respectively. Samples of oilseed cake and meal should be approximately 10 kg in weight. For whole grains, each sample should be composed of about 100 incremental samples, collected systematically from throughout the batch, whereas samples of cake and meal require approximately 50 increments. It is important to remember that the collection of samples from the surface of a large, mature stack of grains will only reflect the quality of the outer layers. The mycotoxin content of the grain in the interior of the stack can only be monitored during the break-down of the stack. Needless to say, an incorrectly collected sample will invalidate the final analysis result.
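The rule-of-thumb figures above can be captured in a small lookup table. The following sketch simply encodes the sample masses and increment counts quoted in the text; it is an illustration, not an official sampling standard:

```python
# Illustrative lookup of the sampling guidance quoted above.
# Values come straight from the text; they are rules of thumb,
# not a formal sampling protocol.

SAMPLE_GUIDE = {
    # commodity: (sample mass in kg, number of incremental samples)
    "whole groundnuts": (20, 100),
    "maize":            (10, 100),
    "rice":             (5, 100),
    "oilseed cake":     (10, 50),
    "oilseed meal":     (10, 50),
}

def sampling_plan(commodity: str) -> str:
    """Return a one-line sampling recommendation for a commodity."""
    mass_kg, increments = SAMPLE_GUIDE[commodity]
    return (f"Collect ~{mass_kg} kg as {increments} increments "
            f"taken systematically throughout the batch.")

print(sampling_plan("maize"))
```

A table like this makes it easy to keep a quality control programme consistent: the recommendation for any listed commodity is generated the same way every time.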
The sampling of grain shipments, normally involving tens of thousands of tonnes of material, poses a particularly difficult sampling problem. Representative samples should be collected from carefully defined 500 tonne batches, using the methods described above. Potential sampling points include weighing towers, conveyor belts, and trucks and barges receiving the discharged material. The sampling of fast moving grain is a hazardous operation; automatic, on-line sampling equipment should be used wherever possible.
The reduction of the sample, for analysis, should also be performed so as to ensure the representative nature of the laboratory sample. It is imperative that the complete sample is comminuted prior to subdivision. Ideally, the comminution and subdivision of whole grains should be performed simultaneously, using a subsampling mill. Alternatively, the comminuted sample should be subdivided using a mechanical riffle. Manual coning and quartering procedures should only be used as a last resort.
Equipment available for the collection of representative samples is discussed in detail in Chapter 3.
High performance liquid chromatography (HPLC) has been used for the analysis of a wide range of mycotoxins including the aflatoxins, ochratoxin A, zearalenone, deoxynivalenol (DON) and the fumonisins. To date, high performance thin layer chromatography (HPTLC) has been applied mainly to the aflatoxins, whereas gas liquid chromatography (GLC) has been utilised for the quantification of DON, T-2 toxin and zearalenone. Enzyme-linked immunosorbent assays (ELISA) have also been applied to many mycotoxins including the aflatoxins, ochratoxin A, deoxynivalenol, T-2 toxin and zearalenone. Despite the utilisation of sophisticated, expensive HPLC, HPTLC, GLC and ELISA procedures, agreement between laboratories is invariably poor when identical samples are analysed (Coker, 1984).
Quality control programmes require simple, rapid, efficient analysis methods which can be handled by relatively unskilled operators (Coker, 1991). Recently developed rapid methods include those that utilise immunochemistry technology or selective adsorption agents. A rapid ELISA method for estimating aflatoxin in groundnuts, cottonseed, maize, rice and mixed feeds has been subjected to a collaborative study and recommended for First Action Approval by the Association of Official Analytical Chemists (AOAC). Solid phase ELISA kits have been developed for the aflatoxins, ochratoxin A, zearalenone and T-2 toxin in a variety of commodities. An ‘immunodot’ cup test, where the antibody is immobilised on a disk in the centre of a small plastic cup, has been approved by the AOAC as an Official First Action screen for aflatoxin in groundnuts, maize and cottonseed. Card tests have also been developed where the antibody is immobilised within a small indentation on a card similar in size to a credit card. Such tests have been developed for the aflatoxins, ochratoxin A, T-2 toxin and zearalenone in maize. The reported analysis (extraction, filtration and estimation) time for solid phase ELISA kits is 5-10 minutes. ELISA kits, however, are relatively expensive and suffer reduced shelf-lives at elevated temperatures.
Minicolumns (small glass columns) containing selective adsorption agents have been developed for aflatoxin/zearalenone (single test) and deoxynivalenol.
There is an urgent need for simple, robust, low-cost analysis methods, for the major mycotoxins, which can be routinely used in developing country laboratories.
The mycotoxins described in this chapter, as symptoms of biodeterioration, are acutely toxic, carcinogenic, immunosuppressive and oestrogenic; and have been the cause of serious human and/or animal diseases. The potential immunosuppressive role of mycotoxins in the aetiology of human disease is an especially important issue which requires further careful study. Every effort must be made to minimise the occurrence of mycotoxins in food and feed grains.
Undoubtedly, the implementation of improved handling and quality control procedures will have a significant effect on the incidence of mycotoxins in important foods and feeds throughout the world.
Edited by D.L. Proctor, FAO Consultant
FAO AGRICULTURAL SERVICES BULLETIN No. 109
GASCA – GROUP FOR ASSISTANCE ON SYSTEMS RELATING TO GRAIN AFTER HARVEST
Puffy debris disks around three nearby stars could harbor Pluto-sized planets-to-be, a new computer model suggests.
The "planet embryos" are predicted to orbit three young, nearby stars, located within about 60 light years or less of our solar system. AU Microscopii (AU Mic) and Beta Pictoris (Beta Pic) are both estimated to be about 12 million years old, while a third star, Fomalhaut, is aged at 200 million years old.
If confirmed, the objects would represent the first evidence of a never-before-observed stage of early planet formation. Another team recently spotted "space lint" around a nearby star that pointed to an even earlier phase of planet building, when baseball-sized clumps of interstellar dust grains collide.
The new finding will be detailed in an upcoming issue of the Monthly Notices of the Royal Astronomical Society.
Using NASA's Hubble Space Telescope, the researchers measured the vertical thickness of so-called circumstellar debris disks around the stars, and then used a computer model to calculate the size of planets growing within them.
The thickness of a debris disk depends on the size of objects orbiting inside it. The ring of dust thins as the star system ages, but if enough dust has clumped together to form an embryonic planet, it knocks the other dust grains into eccentric orbits. Over time, this can puff up what was a razor-thin disk.
The new model the researchers created predicts how large the bodies in a disk must be to puff it up to a certain thickness. The results suggest that each of the three stars studied is harboring a Pluto-sized embryonic planet.
"Even though [the disks] are pretty thin, they turn out to be thick enough that we think there's something in there puffing them up," said study team member Alice Quillen of the University of Rochester in New York.
At least one of the stars is thought to contain at least one other planet in addition to the circling Pluto-sized planet. The circumstellar disk of Fomalhaut contains a void that scientists think is being cleared out by a Neptune-sized world. The researchers think the embryonic planets predicted by their model are too small to clear gaps like this in the disk.
"If you think of water flowing over pebbles, if the pebbles are very small at the bottom of the water, it doesn't make a good ripple," Quillen told SPACE.com.
All of the embryonic planets predicted to exist in the three systems are located far away from their parent stars. AU Mic's budding planet is estimated to lie about 30 AU from its star, or about the same distance that Pluto is from our sun. One AU is equal to the distance between Earth and the sun. The embryonic planets of Beta Pic and Fomalhaut are thought to lie even farther, at 100 and 133 AU, respectively.
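To get a feel for these separations, the quoted distances can be converted to kilometres. The distances below are those given in the article; the kilometres-per-AU value is the standard IAU definition:

```python
# Convert the quoted embryo-star separations from astronomical units (AU)
# to kilometres. AU_KM is the IAU-defined length of one AU.
AU_KM = 149_597_870.7  # kilometres per astronomical unit

embryo_distances_au = {"AU Mic": 30, "Beta Pic": 100, "Fomalhaut": 133}

for star, au in embryo_distances_au.items():
    print(f"{star}: {au} AU = {au * AU_KM / 1e9:.1f} billion km")
```

At 30 AU, AU Mic's candidate embryo orbits roughly 4.5 billion km from its star, and Fomalhaut's lies more than four times farther out.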
It is the large distances separating the planet embryos from their stars that have drawn the most criticism from colleagues, Quillen said. Many find it hard to believe that any planet, even a diminutive Pluto-sized one, could form at such a great distance.
According to the standard theory of how our solar system formed, Pluto formed much closer to the sun but was then knocked out to its current orbit due to instability in the inner solar system. However, there are objects in our solar system that are located even farther from our sun and are difficult to explain by this theory. Sedna, for example, is about three-fourths the size of Pluto and is located about three times farther from the sun.
Mordecai-Marc Mac Low, an astrophysicist at the American Museum of Natural History in New York City who was not involved in the study, said the new model should be viewed as a plausibility argument for the presence of Pluto-sized objects rather than proof of their existence.
“The work presented here shows that Pluto-sized objects stirring disks are consistent with the observed disk thicknesses and other properties,” Mac Low said.
James Graham, an astronomer at the University of California, Berkeley, who was also not involved in the study, expressed a similar sentiment. “This calculation is making a bold extrapolation,” Graham said in an email interview. “It’s a bit like describing an elephant given a single cell from that animal. With enough knowledge, this is possible—if you know enough about microbiology and genetics and could read the DNA in the cell and in principle envision the entire creature.”
Quillen is now looking for more young stars to investigate with her model, but the criteria to be a candidate are strict. The systems have to be young enough to still have their circumstellar disks, but old enough to be forming embryonic planets. They must also appear edge-on as seen from Earth and be near enough that Hubble can accurately discern the thickness of their disks.
At the moment, the three stars Quillen has already observed appear to be the only candidates that meet all the standards.
Tree Infestations With Webs
Web-producing pest infestations are an unsightly problem for trees. Webs in trees are caused by caterpillars or mites that spin silken structures on the underside of tree leaves, as well as in and around tree branches. While this problem is generally more of an aesthetic concern, it also poses a health risk depending on the type of infestation. Pests that produce webs have the ability to defoliate and weaken trees, thereby increasing their vulnerability to other pest and disease invasions.
Spider Mite Infestation
Spider mites are eight-legged arachnids that spin fine, silk-like webs. Commonly affecting fruit trees, these pests infest trees in colonies, with hundreds of mites per colony. They are difficult to see with the naked eye and are usually identified by the webs they spin on the underside of leaves.
In addition to leaving a web calling card, spider mites also remove vital nutrients from the leaves through feeding, which leads to discoloration and leaf loss. Heavy infestations lead to defoliation and eventually impact fruit production. Spider mites are more likely in warm, dry, dusty conditions. In Mediterranean climates, spider mites are able to reproduce throughout the year.
Fall Webworm Infestations
The fall webworm lays its eggs on the leaves of deciduous trees. As many as 1,500 larvae hatch during the summer and begin feeding on leaves, spinning and encasing themselves in web-like structures called "tents." As the fall webworm moves and continues to feed, the tents increase in size, causing an unsightly problem. Some tents cover entire branches.
The mature fall webworm caterpillar is approximately 1 inch long with fine, gray-orange hair covering a yellow or greenish striped body. The most common trees affected include birch, maple, mulberry, willow, walnut, crabapple and chokecherry. Fall webworms pose more of an aesthetic problem than a health risk for landscape trees.
Eastern Tent Caterpillars
Also affecting deciduous trees, the Eastern tent caterpillar prefers fruit trees, trees in the rose family, as well as ornamental trees as a host. Hatching by the hundreds in the early spring, it spins web-like tents in the forking sections of branches and twigs. The caterpillars inhabit these tents at night, and on rainy or cloudy days, emerging on sunny days to feed on tree leaves.
Leaf loss and defoliation are imminent as this infestation takes over the tree. Trees infested with Eastern tent caterpillars are more likely to suffer invasion from other pests and diseases due to loss of vigor. The Eastern tent caterpillar is approximately 2 1/2 inches long with black and white stripes, light blue spots and fine hairs covering its body.
Treatment and Control
It is important to treat pest infestations as soon as you notice web-like structures on your tree. Delaying treatment allows time for the infestation to worsen, or the pest to mature, making it harder to control. Carefully remove the tents from leaves and branches. Use an insecticidal oil or soap to smother the remaining caterpillars or mites on the tree.
Remember that chemical insecticides, while effective, weaken the tree, making it more vulnerable to other pests and diseases. Additionally, chemical insecticides often deter helpful predators that control mite and caterpillar populations.
China and India, the two most populous countries in the world, are making our planet greener through land management, a recent study by NASA reveals. With their aggressive afforestation and agricultural expansion, both countries lead the world in ‘greening’, a term used by climate scientists to describe Earth’s increasing vegetation cover in recent decades.
The two countries have created one-third of the world’s new forests, croplands, and other forms of vegetation in the last two decades, the study published in Nature Sustainability revealed. China, which has 6.3% of the globe’s landmass, alone accounts for 25% of the global net increase in new vegetation.
The data collected by the researchers using the Moderate Resolution Imaging Spectroradiometer (MODIS), a NASA satellite imaging sensor, shows that 42% of the greening in China comes from afforestation while 32% comes from newly cultivated farmlands. In India, 82% of the greening comes from new croplands and 4.4% from new forests.
The research also revealed that besides land management strategies adopted by China and India, indirect factors like climate change, CO2 fertilization, nitrogen deposition, and recovery from natural disturbances also lead to the greening. However, they could be a less prominent driver of global greening as compared to human-driven land use.
A group of 15 scientists from various universities of China, France, Germany, the US, and India collected satellite data from MODIS, which mapped Earth’s surface from 2000 to 2017 onboard NASA’s Terra and Aqua satellites. Their analysis revealed the exponential growth of vegetation in China and India.
China’s massive-scale tree plantations in the low-productive regions of the country led to a 16% global net greening. In just a single decade, 2000-2010, the country increased its total forest area by 19%, covering 434,000 sq km of land.
China achieved this remarkable feat by implementing a series of programmes to conserve forests, mitigate soil erosion, air pollution, and climate change. Since the 1990s, China has invested more than $100bn in afforestation programmes and, according to its government, planted more than 35bn trees across 12 Chinese provinces. China’s forestry expenditure per hectare is over three times higher than the global average and has long exceeded that of the US and Europe.
The rapid agricultural growth in China with the help of hybrid cultivars, multiple cropping, irrigation, fertiliser use, pest control, better quality seeds, farm mechanisation, credit availability, and crop insurance programmes also paved the way for a greener nation.
The study concludes that China’s man-made vegetation in the last two decades is equal to that of the greening of Russia, the US and Canada combined.
“China engineered ambitious programmes,” the author says, “to conserve and expand forests with the goal of mitigating land degradation, air pollution, and climate change.”
The research revealed that India accounts for 6.8% of the global net increase in vegetation cover, with the new croplands contributing the most. This is roughly equal to that in the US or Canada – which both had three times more vegetated areas in 2000.
India’s green revolution, which brought massive changes in agricultural production, is reflected in a harvested area that has grown rapidly throughout the country.
Human Land Use
The data analysis has revealed that the role of human land use in increasing vegetation on a global scale is much more important than previously understood.
Downplaying the contribution of human land use, climate scientists had previously argued that greening was a result of increased CO2 concentration in the atmosphere. But a key finding of the research paper debunks the argument and states “human land use” has been “a dominant driver” of global greening since 2000. The greening on Earth in recent decades has had more to do with direct human interference than the indirect effect of climate change or the CO2 fertilisation effect.
Six out of seven “greening clusters” found by the research team “overlap” with regions known to have highly intensive agriculture and human land use. But in regions like the Amazon, where human land use is notably low, the rate of greening is much lower.
There have always been children who were able to grasp complex subjects at a young age, and they are now labeled as gifted. For some parents and students, this label can cause issues with other children in the school. Less academically gifted children see them as nerds or geeks and tease them, and it has been a common problem throughout many school districts. Educators realize the downside, but they want to be able to find and encourage these students to learn as much as possible.
Every child learns at a different level in each subject, and gifted children are as normal as any other students in this regard. They might excel at mathematics or language, but their academic achievements in spelling, history or even geography might be lacking. Most of them will place near the top of their class in all subjects, and they will be many levels ahead of their age group in the subjects where they are considered gifted.
Labeling students is a complex process, and those who are far ahead of their peers find it a challenge. If they are left in all their regular classes, they tend to daydream or lose focus quickly. Education is not a challenge for them, and they can be seen by other students as failing a subject until it is time for a test. That is when they show their ability to the teacher, but discipline can become an issue if they are not challenged to reach their full potential.
Separating children within a grade into classes that will help them stretch their mental muscles has become common in many districts, and it helps those who need more challenges. For those who lag behind, there is extra assistance without being compared to those who might have mastered a subject in the first week of school.
The intersection of aesthetics and education offers space to understand how the study of perception, sensuous experience, beauty, and art provide the potential for learning and human emancipation. These domains have been persistently understood as necessary to cultivate democratic societies by shaping citizens’ moral, ethical, and political sensibilities. Aesthetics is often considered a dangerous and paradoxical concept for educators because it offers the means for both political transformation as well as political manipulation through disruptive, engrossing, all-consuming aesthetic experiences. In short, aesthetic experiences are powerful experiences that make one think, interpret, and feel beyond the certainty of facts and the mundane parts of existence. Aesthetics offers humans the means to heighten our awareness of self and other. Thus, the study of aesthetics in education suggests there is a latent potential that exists in learning beyond simply acquiring objective information to logically discern reality. Defining aesthetics, a complicated task given the nature of aesthetics across disciplines, is achieved by taking the reader through three perennial debates within aesthetics that have educational import: the trouble with human passions, the reign of beauty, and aesthetic thought beyond beauty. In addition, the influence of aesthetics and imagination on experience and education as articulated most notably by Maxine Greene and John Dewey offers the obvious entry point for educators seeking to understand aesthetics. Looking beyond the philosophical literature on aesthetics and education, new directions in aesthetics and education as seen in the growing literature traced through the study of cognition, behavior, biology, and neuroscience offer educators potentially new sites of aesthetic inquiry.
However, the overwhelming trajectory of the study of aesthetics and education allows educators to move beyond the hyper-scientific study of education and alternatively consider how felt experiences—aesthetic experiences—often brought about when fully engaged with others and one’s environment, are sites of powerful learning opportunities with moral, ethical, and civic consequences.
Jessica A. Heybach
Middle- and high-school English classrooms have incorporated literature in their curriculums for decades. Literature has been used for many purposes: to provide exemplary models for student writing, to serve as texts for honing interpretive skills, to expand vocabulary, to provide cultural insight, and to contribute to students’ cultural engagement and appreciation. Many of the literary texts used in classrooms in the past continue to be used, including Julius Caesar, The Adventures of Huckleberry Finn, Lord of the Flies, To Kill a Mockingbird, and The Great Gatsby. These books continue to be used in part because there are many resources available that help teachers implement them in their curriculum but also because a lot of school districts do not have the funding to continually update the texts used in English classes. Today, however, there is another body of literature that teachers can draw from to meet curricular goals: young adult literature.
Isolation has led to the appearance of life forms that are unique in the world, as a result of which the Serra de Tramuntana mountain range – an island within an island – stands out for its significant endemic flora and singular plants, essential for the development of diverse plant communities. In fact, the mountain range holds 65 of the 97 endemic species described on the Balearic archipelago, and 65 of the 68 endemic plants in Mallorca. To name one example, nine species of orchids and more than ten species of ferns live in the Serra alone, and nowhere else in the world. The holm-oak grove, that ancient indigenous forest, has its main strongholds here, and the mountains are the only refuge of trees typical of cold climates, like the yew.
Broadly speaking, the vegetation of the Serra de Tramuntana is divided into four plant communities:
- Balearic holm-oak woodland. This is the climatic forest community that would occupy most of the territory if there were no human intervention. In the Serra, the location of this type of woodland has been reduced and contains two sub-groups – the mountain holm-oak woodland and that of the lowlands and coastal areas.
- Wild olive scrub (garrigue). A plant formation typical of warm areas which predominates in lower altitudes. It appears as a consequence of maximum drought conditions that prevent the holm-oak woodland from developing. This garrigue led to the expansion of the agricultural olive tree.
- Calcicole shrubland. The two most representative shrubs of this community are rosemary and heather. It is found in both coastal and mountain areas. Aleppo pine cover is also visible here. In the Balearic Islands, pine forests are an entity in their own right and comprise the most extensive tree formation, thanks to their swift growth and opportunism.
- Communities in the highest Balearic vegetation belt. These plant communities develop particularly on terrain where the strength of the wind or the absence of soil prevents the growth of other communities. They are concentrated above all in the highest section of the mountains, near summits and crags. They consist of a very low formation of thorny bushes with rounded forms (cushion-type plants) with a discontinuous incidence and reduced surface cover.
The Botanical Garden of Sóller reproduces the flora typical of the mountains, among others, and a visit is therefore an excellent way of discovering their peculiarities, characteristics and endemic plants.
The trees of the Serra de Tramuntana include specimens that are extraordinary for their size or age, or for their cultural value, and as such they are protected and form part of the Catálogo de Árboles Singulares de les Illes Balears (Catalogue of Singular Trees of the Balearic Islands). Some are several hundred years old, located in extremely beautiful enclaves, like the Pi de sa Pedrissa pine tree (Deià) and the Cedre de Massanella cedar.
According to the World Health Organization, cataracts are responsible for 51% of cases of blindness worldwide - although this blindness is preventable with treatment. In fact, research shows that in industrialized countries about 50% of individuals over the age of 70 have had a cataract in at least one eye. This is partially because cataracts are a natural part of the aging process of the eye, so as people in general live longer, the incidence of cataracts continues to increase.
What are Cataracts?
Cataracts occur when the natural lens in the eye begins to cloud, causing blurred vision that progressively gets worse. In addition to age, cataracts can be caused or accelerated by a number of factors including physical trauma or injury to the eye, poor nutrition, smoking, diabetes, certain medications (such as corticosteroids), long-term exposure to radiation and certain eye conditions such as uveitis. Cataracts can also be congenital (present at birth).
The eye’s lens is responsible for the passage of light into the eye and focusing that light onto the retina. It is responsible for the eye’s ability to focus and see clearly. That’s why when the lens is not working effectively, the eye loses its clear focus and objects appear blurred. In addition to increasingly blurred vision, symptoms of cataracts include:
“Washed Out” Vision or Double Vision:
People and objects appear hazy, blurred or “washed out” with less definition, depth and colour. Many describe this as being similar to looking out of a dirty window. This makes many activities of daily living a challenge including reading, watching television, driving or doing basic chores.
Increased Glare Sensitivity:
This can happen both from outdoor sunlight or light reflected off of shiny objects indoors. Glare sensitivity causes problems with driving, particularly at night and generally seeing our surroundings clearly and comfortably.
Changes in Colour Perception:
Colours may not appear as vibrant as they once did, often taking on a brown undertone. Colour distinction may become difficult as well.
Compromised Contrast and Depth Perception:
These eye skills are greatly affected by the damage to the lens.
Increased Need for Light:
Individuals with cataracts often find that they require more light than they used to in order to see clearly and perform basic activities.
Early stage cataracts may be able to be treated with glasses or lifestyle changes, such as using brighter lights, but if they are hindering the ability to function in daily life, it might mean it is time for cataract surgery.
Cataract surgery is one of the most common surgeries performed today. It involves removing the natural lens and replacing it with an artificial lens, called an implant or an intraocular lens. Typically, standard implants correct the patient’s distance vision, but reading glasses are still needed. However, as technology has become more sophisticated, multifocal implants are now available that can reduce or eliminate the need for glasses altogether. The procedure is usually performed on an outpatient basis (you will go home the same day), and 95% of patients experience improved vision almost immediately.
While doctors still don’t know exactly how much each risk factor leads to cataracts there are a few ways you can keep your eyes healthy and reduce your risks:
- Refrain from smoking and high alcohol consumption
- Exercise and eat well, including lots of fruits and vegetables that contain antioxidants
- Protect your eyes from UV radiation like from sunlight
- Control diabetes and hypertension
Most importantly, see your eye doctor regularly for a comprehensive eye exam. If you are over 40 or at risk, make sure to schedule a yearly eye exam.
World Red Cross Day is an international day that is dedicated to alleviating human suffering, upholding human dignity, protecting life, and preventing emergencies and natural disasters such as flood, epidemics, and earthquakes. The fundamental principles recognized during World Red Cross Day are impartiality, humanity, independence, neutrality, voluntary, universality, and unity. All components of Red Cross organizations uphold and respect these principles.
Every year, May 8 is celebrated as World Red Cross Day to honor Henry Dunant, the founder of the International Red Cross and Red Crescent Movement, who was born on this day in 1828. National Societies affiliated to the ICRC celebrate World Red Cross Day in their countries to raise awareness of the need to protect life. The event highlights international services such as the reunion of separated family members through the Red Cross. Attendees of this event learn about the founder of the Red Cross, Henry Dunant, and are motivated and encouraged to take part in life-saving activities.
During World Red Cross Day local heroes that have made an invaluable impact on life protection are recognized in various categories such as law enforcement, military, fire and rescue, community champion, and humanitarian awards. Various organizations are recognized based on the effort in life protection. Local communities are inspired to take part in voluntary activities to protect human beings and animals. Various Red Cross movements visit people affected by natural disasters or armed conflicts.
This celebration is also marked by visiting detainees to show them that, regardless of the reasons for their incarceration, they are still treated with dignity in line with international norms. The main message delivered during World Red Cross Day is to treat all people with hospitality, dignity and humanity.
World Red Cross Day is also celebrated through blood donation. People are encouraged to donate blood to save the lives of people in need of blood. This is another way in which people show humanity. Red Cross officials assemble in various local towns or schools and encourage locals to voluntarily donate blood.
In addition, World Red Cross Day is celebrated through fundraising and donations. Local residents, societies, non-governmental organizations, companies, and many other groups are encouraged to contribute to designated accounts to raise money for those whose lives are at risk. Every person is encouraged to contribute according to their capacity. The money is used to pay the medical bills of needy patients and to provide food to people affected by natural disasters such as floods or earthquakes. Religious organizations take part in World Red Cross Day by visiting the sick and making contributions.
The sundial is the earliest known timekeeping device. The first known sundial dates to over five thousand years ago and consisted of only a vertical stick. The shadow-casting part of a sundial is known as the gnomon; in traditional sundials, the gnomon tends to be a triangle whose hypotenuse is parallel to the Earth’s axis.
The earliest mention of the candle clock dates back to the 6th century, but its invention is usually attributed to Alfred the Great, who lived a few centuries later. His candle clock comprised six candles, each twelve inches high. One inch of candle would burn in approximately 20 minutes, so that all six candles would burn down in 24 hours.
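The arithmetic behind the candle clock can be checked directly; here is a small sketch using only the figures given above:

```python
# Worked arithmetic for Alfred the Great's candle clock:
# six candles, twelve inches each, one inch per ~20 minutes.
candles = 6
inches_per_candle = 12
minutes_per_inch = 20

total_minutes = candles * inches_per_candle * minutes_per_inch
total_hours = total_minutes / 60

print(total_minutes)  # 1440
print(total_hours)    # 24.0
```

So marking the candles at one-inch intervals turns them into a 24-hour clock with roughly 20-minute resolution.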
A clepsydra measures time through the gradual flow of liquid. The oldest specimens found were Egyptian, dating back to the 14th century BC, and used water. However, since water freezes at 0°C, later versions used mercury instead, which freezes at a much lower temperature of −38°C. Galileo used a mercury clepsydra as late as the 16th century in his experiments with falling bodies.
Mechanical clocks have been used since the 8th century. They tended to be driven by water, and later by weights. Soon after 1600, Galileo discovered that the motion of a pendulum could be used to regulate clocks, but it wasn’t until 1656 that Christiaan Huygens built the first pendulum clock. Pendulum clocks improved timekeeping accuracy from around 15 minutes per day to 15 seconds per day.
The first pocket watch, invented around the 15th century, had an accuracy of only several hours per day. In the 17th century, however, Robert Hooke and Christiaan Huygens developed the balance spring. The balance spring, which may be thought of as a watch’s equivalent of a clock’s pendulum, reduced the inaccuracy from several hours to just ten minutes per day.
The first atomic clock was designed by Louis Essen and built in 1955. These clocks keep time using the properties of a caesium atom and are accurate to one second every few hundred million years. The invention of the atomic clock resulted in a new time standard, one that we still use today. According to this time standard, one second is equal to 9,192,631,770 cycles of a light wave emitted by a caesium atom.
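Under this standard, measuring time reduces to counting cycles. A minimal sketch of the conversion (the helper function is illustrative, not part of any real clock's software):

```python
# The SI second is defined as 9,192,631,770 cycles of the radiation
# from a hyperfine transition of the caesium-133 atom.
CYCLES_PER_SECOND = 9_192_631_770

def elapsed_seconds(cycles_counted: int) -> float:
    """Convert a raw cycle count into elapsed seconds."""
    return cycles_counted / CYCLES_PER_SECOND

# One minute's worth of cycles:
print(elapsed_seconds(9_192_631_770 * 60))  # 60.0
```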
The first quartz clock was built in 1927. These clocks use a quartz crystal, made from silicon dioxide, to keep time. An electric current from a battery makes the crystal oscillate 32,768 times per second, and these oscillations regulate the gears that make the clock tick. Quartz clocks are accurate to around one second per two days.
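The figure 32,768 is no accident: it equals 2 to the power of 15, so a chain of fifteen divide-by-two stages reduces the crystal’s oscillation to exactly one pulse per second. A sketch of that halving:

```python
# 32,768 Hz is 2**15, so halving the frequency fifteen times
# yields the one-pulse-per-second signal that drives the clock.
frequency_hz = 32_768
stages = 0
while frequency_hz > 1:
    frequency_hz //= 2
    stages += 1

print(stages)        # 15
print(frequency_hz)  # 1
```

Binary frequency dividers are cheap to build in electronics, which is one reason a power-of-two frequency was chosen.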
Scientists have succeeded in building quantum clocks, which keep time by counting the vibrations of an electrically charged aluminium atom. This atom vibrates 1.1 quadrillion times per second, making the clock accurate to one second every few billion years. The quantum clock offers the potential for a new time standard, more precise than ever before. However, its design is complicated and needs further development before it can be considered a serious contender.
Illustrations by Rosie Woodcock
Kruti Shrotri is studying for an MSc in Science Communication; Rosie Woodcock is studying for an MSc in Science Media Production