You may not have to wait for April showers to bring May flowers. A recent scientific study has found that record warm temperatures are contributing to the earliest spring flowering in recorded American history. Specifically, the study compared flowering records spanning more than 160 years and found that several species of flowers bloomed at least three weeks earlier than ever before during the record warm springs of 2010 and 2012. The pink lady's slipper orchid is one of these early-blooming flowers. Researchers found that for every degree Fahrenheit of warming, flowers bloomed approximately 2.3 days earlier. The historical data show a consistent pattern of earlier blooming as the climate warms, and researchers suggest this is evidence that the plants are adapting to climate change. For now, the plants are flowering without any major issues, and some might even benefit from the longer growing season. However, the trend worries some scientists. Earlier blooming will have implications for other organisms in the ecosystem; for instance, pollination and reproduction could be affected. Early blooming followed by a late spring frost could destroy the flower buds before the bees and birds have a chance to pollinate the plant. At this time, the full impact is still unknown and requires further research, which is why it is important to keep monitoring the effects of climate change on the ecosystem. The full research article can be read here: PLOSone.org
So, you’ve genetically engineered a malaria-resistant mosquito, now what? How many mosquitoes would you need to replace the disease-carrying wild type? What is the most effective distribution pattern? How could you stop a premature release of the engineered mosquitoes? Releasing genetically engineered organisms into an environment without knowing the answers to these questions could cause irreversible damage to the ecosystem. But how do you answer these questions without field experiments?

Applied mathematicians and physicists from Harvard and Princeton Universities used mathematical modeling to guide the design and release of gene drives that can both effectively replace wild mosquitoes and be safely controlled. The research was recently published in the Proceedings of the National Academy of Sciences.

In the normal course of evolution, any specific trait has only a modest chance of being inherited by offspring. But with the development of the CRISPR-Cas9 gene editing system, researchers can now design systems that increase the likelihood of inheritance of a desired trait to nearly 100 percent, even if that trait confers a selective disadvantage. These so-called gene drives could replace wild-type genes within a few generations. Such powerful systems raise serious safety concerns, such as what happens if a genetically engineered mosquito accidentally escapes from a lab.

“An accidental or premature release of a gene drive construct to the natural environment could damage an ecosystem irreversibly,” said Hidenori Tanaka, first author of the paper and graduate student in the Harvard John A. Paulson School of Engineering and Applied Sciences and the Physics Department. To protect against such releases, Tanaka, along with co-authors David Nelson, the Arthur K. Solomon Professor of Biophysics and Professor of Physics and Applied Physics, and Howard Stone of Princeton, proposed designing drives with a narrow range of selective disadvantages that would allow the genes to spread, but only after a critical threshold had been reached.

The researchers used nonlinear reaction-diffusion equations to model how the genes would move through space. These models provided a framework to develop socially responsible gene drives that balance the genetically engineered traits with embedded weaknesses that would protect against accidental release and uncontrollable spreading. “We can, in effect, construct switches that initiate and terminate the gene drive wave,” said Tanaka. “In one, carefully chosen regime, the spatial spreading of the wave starts or progresses only when the parameters of the inoculation exceed critical values that we can calculate.”

To reach that critical mass, the researchers found that genes needed to be released intensely in a specific region, like a genetic bomb, rather than spread thinly throughout larger regions. The genes spread only when the nucleus of the genetic explosion exceeds a critical size and intensity. The researchers also found that by making gene drives susceptible to a compound that is harmless to wild-type genes, the spread of gene drives can be stopped by barriers such as pesticides.

“This research illustrates how physicists and applied mathematicians can build on results of biological experimentation and theory to contribute to the growing field of spatial population genetics,” said Nelson. Next, the researchers hope to understand the impact of genetic mutations and organism-number fluctuations on gene drives.
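The press release does not give the model's exact form, but the threshold behavior it describes already shows up in the simplest bistable reaction-diffusion equation. Below is a minimal, illustrative C sketch (assumed equation and parameters, not the paper's): the drive allele frequency u obeys du/dt = D d²u/dx² + s·u(1−u)(u−a), where the threshold a stands in for the engineered fitness cost. A wide, intense release launches a traveling wave; a narrow one collapses back to zero.

```c
/* Illustrative 1-D bistable reaction-diffusion sketch (assumed model, not the
 * paper's exact equations): du/dt = D*d2u/dx2 + s*u*(1-u)*(u-a).
 * u is the gene-drive allele frequency; a is a release threshold that
 * stands in for the engineered fitness cost. */
#include <stdio.h>
#include <math.h>

#define NX 400

int main(void) {
    double u[NX] = {0.0}, unew[NX] = {0.0};
    const double D = 1.0, s = 1.0, a = 0.3;   /* illustrative parameters        */
    const double dx = 0.5, dt = 0.05;         /* explicit scheme: dt < dx*dx/2D */
    const double release_width = 20.0;        /* try 1.0: the release collapses */

    /* initial "genetic bomb": drive frequency 1 inside a central block */
    for (int i = 0; i < NX; i++)
        u[i] = (fabs((i - NX / 2) * dx) < release_width / 2.0) ? 1.0 : 0.0;

    for (int step = 0; step < 4000; step++) {
        for (int i = 1; i < NX - 1; i++) {
            double lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx);
            unew[i] = u[i] + dt * (D * lap + s * u[i] * (1.0 - u[i]) * (u[i] - a));
        }
        unew[0] = unew[1];                    /* no-flux boundaries */
        unew[NX - 1] = unew[NX - 2];
        for (int i = 0; i < NX; i++) u[i] = unew[i];
    }

    double occupied = 0.0;                    /* how far the wave has spread */
    for (int i = 0; i < NX; i++) occupied += (u[i] > 0.5) ? 1.0 : 0.0;
    printf("fraction of habitat above 50%% drive frequency: %.2f\n", occupied / NX);
    return 0;
}
```

Whether the wave launches depends on both the width and the intensity of the initial block, which is the "critical nucleus" behavior described in the quotes above.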
The way the Northern Lights, or aurora, work is a lot like a neon sign, except that in the aurorae the conducting gas is in the ionosphere instead of a glass tube, and the current travels along magnetic field lines instead of copper wires. The power source for aurorae is the solar wind. The Earth's magnetic field looks a lot like the magnetic field of a bar magnet, with field lines going into and out of the Earth's magnetic poles, where the magnetic field is strongest. The solar wind from the Sun is always pushing on the Earth's magnetic field (the magnetosphere), deforming and stretching it into a long leeward tail like the wake of a ship, which is called the magnetotail. The aurora happens when energetic particles (protons and electrons) from the Sun enter the Earth's magnetosphere and are captured in the magnetotail. When disturbances occur in the solar wind, or there is an energetic solar flare, the particles are accelerated along the field lines, becoming denser near the magnetic poles and eventually precipitating into the Earth's ionosphere. As they hit the ionosphere, the particles inevitably collide violently with gas atoms. This adds energy to the gas atoms, which in turn release light and more electrons: the ionosphere starts to glow! The different colours in an aurora depend on the distribution of different gases at different altitudes. Very high in the ionosphere (above 300 km), oxygen is the most common gas, resulting in reddish colours when excited. Other gases such as nitrogen and helium produce blue or purple colours.
There's no escape from plastic pollution for earthlings. Researchers found microplastics, or bits of plastic waste less than 5 mm long, in the air at 2.8 km above sea level at a 'clean station' in the French Pyrenees. Plastics are one of the building blocks of modern human civilisation and can be found everywhere from our smartphones to medical supplies, automobiles and more. But while plastics have enabled humans to create things that would have been unimaginable a short time ago, they also present a large waste problem when they reach the end of their life. Unlike organic products, plastics take centuries to decompose, and unlike other materials such as steel, they are not easy to recycle either. Now, a recent study has found microplastics in the unlikeliest of places.

What did the study find? Scientists from CNRS, Université Grenoble Alpes and the University of Strathclyde, Scotland tested 10,000 cubic meters of air every week between June and October of 2017. The samples of air were collected at a height of 2,877 meters above sea level at the Pic du Midi Observatory in the French Pyrenees. The observatory is known as a "clean station" because the local environment and climate are thought to have little to no effect on it. Microplastics were found in all of the samples. "Mathematical models of air mass trajectories used by the scientists indicate that the particles originated in Africa, North America, or the Atlantic Ocean, which indicates intercontinental atmospheric transport of microplastic," the researchers said in a statement. Microplastics were found to be reaching the planetary boundary layer (PBL), according to the researchers, which allows the small pieces of plastic to travel great distances. "Once it hits the troposphere, it's like a superfast highway," said the study's main author Steve Allen of Dalhousie University in Canada.

What are microplastics? Microplastics are fragments of any type of plastic less than 5 mm in length, according to the US National Oceanic and Atmospheric Administration and the European Chemicals Agency. There are two types of microplastics: primary and secondary. Primary microplastics are those manufactured at less than 5 mm (0.20 in) in length, and secondary microplastics are those reduced in size by the degradation of larger plastic pieces. These microplastics escape into the environment from common sources such as plastic nets, plastic bottles, microbeads, nurdles, microwave containers, tea bags and tire wear. It is estimated that 35 percent of all ocean plastic pollution comes from the erosion of polyester, acrylic, or nylon-based clothing, which often happens as a result of washing. "Plastic leaving the ocean into the air that high -- it shows there is no eventual sink for this plastic. It's just moving around and around in an indefinite cycle," said Allen.

How damaging are microplastics? Because of their small size, microplastics can be eaten by wildlife without killing them immediately. This allows the plastics to bioaccumulate in animal species, which is dangerous for other species higher up the food chain, including humans: each species slowly accumulates these tiny plastics in its body, with toxic after-effects. Microplastics also act as carriers for other hazardous toxins and chemicals polluting the environment.
These chemicals can then contribute to various diseases, including cancer, liver failure and kidney failure. Microplastics can also carry dangerous bacteria such as E. coli and Vibrio cholerae if they pass through contaminated water.
A coding standard is a set of rules that programmers can adopt to make their code more readable, uniform, and robust. For example, the Jet Propulsion Laboratory has a coding standard for C that includes recommendations like:

- Compile with all warnings enabled; use static source code analyzers.
- Use verifiable loop bounds for all loops meant to be terminating.
- Do not use direct or indirect recursion.
- Check the validity of values passed to functions.

JPL considers these rules to be essential to producing safe and reliable code for space missions. Many of the rules specifically target memory safety issues in C, and they are helpful. However, the idea that coding standards can make C/C++ memory safe is, unfortunately, incorrect.

Consider what the Standard C++ Foundation has to say about coding standards:

The main point of a C++ coding standard is to provide a set of rules for using C++ for a particular purpose in a particular environment. It follows that there cannot be one coding standard for all uses and all users. For a given application (or company, application area, etc.), a good coding standard is better than no coding standard. On the other hand, we have seen many examples that demonstrate that a bad coding standard is worse than no coding standard. Please choose your rules with care and with solid knowledge of your application area. Some of the worst coding standards (we won’t mention names “to protect the guilty”) were written by people without solid knowledge of C++ together with a relative ignorance of the application area (they were “experts” rather than developers) and a misguided conviction that more restrictions are necessarily better than fewer.

A word of warning: Nearly every software engineer has, at some point, been exploited by someone who used coding standards as a “power play.” Dogmatism over minutiae is the purview of the intellectually weak. Don’t be like them. These are those who can’t contribute in any meaningful way, who can’t actually improve the value of the software product, so instead of exposing their incompetence through silence, they blather with zeal about nits. They can’t add value in the substance of the software, so they argue over form. Just because “they” do that doesn’t mean coding standards are bad, however.

Another emotional reaction against coding standards is caused by coding standards set by individuals with obsolete skills. For example, someone might set today’s standards based on what programming was like N decades ago when the standards setter was writing code. Such impositions generate an attitude of mistrust for coding standards. As above, if you have been forced to endure an unfortunate experience like this, don’t let it sour you to the whole point and value of coding standards. It doesn’t take a very large organization to find there is value in having consistency, since different programmers can edit the same code without constantly reorganizing each others’ code in a tug-of-war over the “best” coding standard.

This little excerpt tells you the only two things you need to know about coding standards:

- Programmers are arrogant
- Coding standards are power struggles

The upshot is that it is impossible to get all programmers in the world to follow a consistent coding standard. It’s possible to get fairly large groups of programmers to follow a coding standard when imposed from above (works best if you are Bill Gates), but there’s no authority that can force all C/C++ programmers in the world to abide by a voluntary set of restrictions.
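As a concrete and purely illustrative sketch of what code written to rules like the four quoted above might look like (a fixed, verifiable loop bound, no recursion, and explicit validation of arguments), consider the following C fragment. It is not taken from the JPL standard itself.

```c
/* Illustrative sketch, not taken from the JPL standard: a fixed, verifiable
 * loop bound, no recursion, and explicit validation of the values passed in. */
#include <stddef.h>
#include <stdint.h>

#define MAX_SAMPLES 64U   /* compile-time loop bound */

/* Sums the first n samples into *out; returns 0 on success, -1 on bad input. */
int sum_samples(const int16_t *samples, size_t n, int32_t *out)
{
    if (samples == NULL || out == NULL || n > MAX_SAMPLES) {
        return -1;                          /* check validity of arguments */
    }
    int32_t total = 0;
    for (size_t i = 0; i < n; i++) {        /* bound provably <= MAX_SAMPLES */
        total += samples[i];
    }
    *out = total;
    return 0;
}
```

Nothing in the language forces a caller to check that return value, pass a valid pointer, or respect the bound elsewhere in the program, which is exactly the gap the next sentence points at.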
The only practical way to ensure that every program in a language is memory safe is to build that guarantee into the language itself: it cannot be optional.
In the early spring, the surfaces of tree leaves develop small spherical growths called galls. These small growths are often a light yellow, green, red, or brownish colour and normally appear on the top surface of the leaves. Depending on the variety of gall, they can be created as an egg nest or be the result of a toxin injected into the leaf. Leaf galls are identifiable as small round balls or bumps that grow on the leaves, twigs, and leaf stems of trees. They can also appear as a wide variety of abnormal growths in many shapes on the leaves, twigs, or branches. Infected branches may be discoloured or distorted and drop prematurely; in some cases, the infected branches die. In many cases, galls are created by tiny mites or other insects that bite into the underside of the leaf and then inject it with a growth-promoting substance that creates the spherical growth. The ball, or gall, encloses the mite, and the female mites lay their eggs in the gall.

Recommended Steps to Control Galls

Once a gall has formed, there is no way to eliminate the balls in the current growing season. However, to help control gall outbreaks, severely infected branches can be removed, and branches and leaves that fall to the ground should be collected and removed. During the growing season, the tree can be sprayed with Bug Buster Pyrethrin Insect Spray or Take Down Garden Spray to help reduce the mite population and to prevent the tree from being attacked by other insects drawn to weakened trees.
Healthy Communication Overview: The Importance of Communication

Your conversations can enhance the quality of your relationship with your children and the degree to which your children grow up with a sense of safety and security. At other times, the words and messages sent are harmful and destructive. However, even when you are angry or need to discipline your children, you can communicate respectfully. By consciously choosing words that do not blame or shame, you can help your children see themselves as competent and capable members of your family. Through the effective use of words, parents can create a climate of love, acceptance, hope and support that can inspire children to reach their potential and can sustain them during times of stress. These messages are communicated verbally through your words, as well as non-verbally through your body language, facial expressions, and tone of voice. So remember, how you say things is as important as, or more important than, what you say.

Parents can send these healthy messages to their children through listening, talking about their own feelings, teaching, praising, and problem solving. There is no single right technique to use in any particular situation. You can use any of these methods in healthy ways that will improve your relationship with your children, whether in one-on-one communications, family conversations, or more formal family meetings.

Language that supports children:
- is non-judgmental and provides objective information.
- is tentative and flexible to allow for mistakes, differing opinions and possibilities.
- is specific to the situation and does not include words such as “always” and “never”.
- finds the positive in a difficult trait, behavior or situation.

Language Shapes Our Attitudes

Being aware of the language they use allows parents to determine what “mirrors” they hold up to their children about how lovable and how capable their children are. These simple but very powerful tools positively influence how parents view their children and how their children think about themselves. Using effective language can promote an attitude of resilience and optimism in children. The more children feel good about themselves, the more likely they are to be motivated to learn and to incorporate necessary changes into their lives. To test how healthy your communication is, ask yourself afterwards: “Do I feel good about myself?” “Do my children feel good about themselves?” “Is our relationship preserved?” If the answer to all three questions is “yes,” then you are putting the power of your words to good use.

Overview of Healthy Communication Techniques

Parents act as teachers, mentors, and models. When parents use a respectful approach, children are better able to understand and learn from them. Be clear and specific with children when explaining new things. Break tasks down into smaller, more manageable pieces. Show them what to do. Work with them until you and they know what needs to be done. Encourage them to ask questions so that they can get clear about what you are asking them to do. Talk about the consequences of behaviors, for example, what happens when chores are not completed. Invite children to provide feedback on how things are going. The goal is to work together as a team, as a family.

One of the most effective skills you can learn to use as a parent is that of listening to your children. Listening involves really hearing what is going on for your children and even “mirroring” back those feelings to show that you understand what they are feeling.
“It looks like you are really angry about having to wash the dishes.” “You are having a hard time deciding who to invite to your party.” “You are disappointed that you did not get a better grade on your project.” In each of the above examples, the parent simply stated what she heard going on for the child by listening to the child’s words and by seeing the child’s body language.

One benefit of “listening” to your children is that you can slow down the urge to jump in and immediately “solve” your children’s problems. By listening, you can allow yourself the time to get clear on what the next step might be in handling a situation. Sometimes you may decide that a situation needs nothing more than for you to listen and acknowledge feelings. By listening, you allow children the opportunity to vent their often intense feelings, and you send a very powerful message of compassion. When you allow those feelings without judgment or criticism, children feel valued.

One of the best ways to motivate children to learn and become more responsible is by praising and affirming what they do. Ideally, you can identify the specific things that they have worked on or accomplished. Rather than just saying “Good job,” you can say, “I appreciate you including your younger sister in the game that you were playing with your friends today. That is what I call being kind.” A child will be more internally motivated if you give him the words to help him identify the feelings associated with the action. For a child who is having trouble starting and completing a job, you can help motivate him to do more by praising the process and by acknowledging even small steps taken in the right direction. With young children, praise should be immediate to be most effective: “You were such a helper putting the dishes in the sink.” With older children, praise can come either immediately or later during a quiet moment, such as at bedtime: “That was such a grown-up thing you did earlier today by donating your allowance to the hurricane victims. It must make you feel proud to be able to help those in need.”

I-Messages: What to Say When You Are Upset

Using an “I” message is a way to express your own needs, expectations, problems, feelings or concerns to your children in a respectful way that does not attack them. “I” messages also model for your children a healthy way to express strong feelings. This communication skill is often a good first response to your children when you do not like their behavior; although it does not necessarily result in them changing their behavior, it does give you time to get clear about what is upsetting to you and why.

An “I” message consists of three main parts:
- Describe the specific behavior.
- Describe how you feel.
- Describe the tangible and specific effect of the behavior on you.

“When you won’t leave Billy’s house when I say it is time to go, I get upset because I have to get home to cook dinner before I go to my meeting tonight.”

Problem solving together can help you:
- deal with some of the struggles you have with your children
- help your children to feel that they can contribute to solutions to problems rather than seeing themselves as being the problem
- strengthen the relationship you have with your children.

Before you meet with your child:
- Check your attitudes toward problems.
- Clarify the situation.
- Determine whether the situation or behavior is the result of a challenging, but normal, developmental stage.
- Decide whether the problem is yours rather than your child's.
- Be sure the behavior is unacceptable to you.
When you meet with your child:
- Select a calm time to discuss the situation with your child.
- Discuss the situation from each of your perspectives.
- Brainstorm possible solutions.
- Create a plan from the ideas generated that addresses the problem and includes such specifics as what is to happen and what the consequence is for non-compliance.
- Set a date to meet and evaluate the effectiveness of the plan. Make changes as needed.

You can model and teach your children how to approach and solve problems with confidence. In the end, they learn an essential life skill that they will be able to use in all facets of their life as they grow into adulthood.

Often when parents think of things that their children need, they focus on the material things in life that are tangible and concrete: they need clothes, they need school supplies, they need medical care. Or they think of less tangible things such as sports and socialization options and activities. But actually, one of the most significant things that a parent can give to a child is his or her time and attention. Spending time with your children is the factor that will have the greatest impact on their feelings about themselves, on their self-esteem and on the health of the connection you have with your children.

For more information about healthy communication, check out the following books. Purchasing from Amazon.com through our website supports the work we do to help parents do the best job they can to raise their children.
Batteries used in electric and hybrid cars are rather unusual: they are lithium-ion (Li-ion) batteries, and their mode of action is different from that of the classical batteries found in petrol or diesel cars. Components of lithium-ion batteries can present a chemical hazard.

In a few words: how does a Li-ion battery work?

A lithium-ion battery works through a chemically reversible exchange of lithium ions between two electrodes. The positive electrode is usually made of a lithiated transition metal oxide (cobalt dioxide, manganese dioxide and so on); the negative electrode is usually made of graphite. This reaction needs both electrodes to be immersed in a liquid electrolyte. Most of the time, the electrolyte is a solution of lithium hexafluorophosphate salt in a mixture of ethylene carbonate and propylene carbonate or tetrahydrofuran.

Leaking Li-ion battery = production of hydrofluoric acid

As the electrolyte is a liquid, it can leak from the inside of the battery and come into contact with air moisture or water. Two chemical reactions can produce hydrofluoric acid: hydrolysis of the PF6- ions of the electrolyte in the presence of water, and combustion of those PF6- ions. Hydrolysis of PF6- ions occurs only in the presence of water in a medium that is not too acidic or basic (pH between 1 and 12). However, the kinetics of this hydrolysis are not favorable: the reaction is slow, and the quantities of hydrofluoric acid released will not be very large. When in contact with skin or eyes, hydrofluoric acid can cause severe chemical injuries and is toxic. To understand the health hazard of hydrofluoric acid, see "Mechanism and specificities of hydrofluoric acid lesions".

Lithium-ion batteries and combustion: a real hazard

The lithium-ion battery also presents a risk of degradation through a violent and dangerous combustion reaction in case of misuse. This combustion can occur spontaneously as soon as the battery's internal temperature reaches 65 °C (149 °F) and is very likely to occur above 75 °C (167 °F). If the battery burns, hydrofluoric acid is produced and released by thermal decomposition of the PF6- ions of the electrolyte contained inside the battery. A French INERIS report on electric car batteries describes this risk. Moreover, the INERIS studies show that "from a metrological point of view, measuring fluoride ions produced during a fire remains a delicate operation". The concentration of released hydrofluoric acid is variable and depends on the quantity of electrolyte burnt in the combustion process and on the combustion temperature. Other toxic gases are also produced and released during the electrolyte combustion (carbon oxides from the combustion of ethylene and propylene carbonates). To prevent leaking or burning of the battery, very cautious handling of Li-ion batteries is recommended.

What to do in case of a leaking Li-ion battery?

When a leak is observed from a Li-ion battery, the leaking liquid may contain hydrofluoric acid. Absorption of the liquid residue with a suitable absorbent is necessary. The use of a neutralizing absorbent for acidic chemicals, such as the neutralizing absorbent ACICAPTAL® or the polyvalent neutralizing absorbent TRIVOREX®, is recommended, along with personal protective equipment. In the event of cutaneous or ocular exposure to liquid from a Li-ion battery, optimized decontamination is necessary. Exposure to hydrofluoric acid requires adapted decontamination and medical advice.
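For reference, the hydrolysis and thermal-decomposition routes described above are usually summarized by the following overall reactions for the LiPF6 salt (a textbook summary, not quoted from the sources listed at the end of this article):

```latex
\mathrm{LiPF_6} \;\rightleftharpoons\; \mathrm{LiF} + \mathrm{PF_5}
\qquad
\mathrm{PF_5} + \mathrm{H_2O} \;\rightarrow\; \mathrm{POF_3} + 2\,\mathrm{HF}
\qquad
\text{overall: } \mathrm{LiPF_6} + \mathrm{H_2O} \;\rightarrow\; \mathrm{LiF} + \mathrm{POF_3} + 2\,\mathrm{HF}
```

The hydrofluoric acid (HF) on the right-hand side is the species responsible for the skin and eye hazard discussed above.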
Hexafluorine® solution is an emergency washing solution specially designed to respond to the hydrofluoric acid hazard: discover Hexafluorine® solution.

What to do in case of combustion of a Li-ion battery?

Li-ion batteries in electric or hybrid cars are usually isolated and protected so that they do not release hydrofluoric vapors in case of combustion. During a car accident, if the battery ignites, contact with the released vapors should be avoided as much as possible. In the event of cutaneous or ocular exposure to liquid from a Li-ion battery, optimized decontamination is necessary. Exposure to hydrofluoric acid requires adapted decontamination and medical advice. Hexafluorine® solution is an emergency washing solution specially designed to respond to the hydrofluoric acid hazard: discover Hexafluorine® solution.

References:
- Overview of Lithium-ion Batteries, Panasonic, 2007.
- M. Ponikvar, B. Zemva, J. F. Liebman, "The analytical and descriptive inorganic chemistry of the hydrolysis of hexafluoropnictate ions PnF6-," J. Fluor. Chem., 2003, 123, 217-220.
- Wang, Q., Sun, J. and Chu, G., 2005. Lithium Ion Battery Fire and Explosion. Fire Safety Science 8: 375-382. doi:10.3801/IAFSS.FSS.8-375
- https://www.ineris.fr/centredoc/dossier-inerismag30-1347611239.pdf
- https://www.ineris.fr/centredoc/ineris-rapport-scientifique-2011-2012-bat-1353422460.pdf
A new cleaning method could remove dust on solar installations in water-limited regions, improving overall efficiency Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue—it can reduce the output of photovoltaic panels by as much as 30 percent in just one month—so regular cleaning is essential for such installations. But cleaning solar panels currently is estimated to use about 10 billion gallons of water per year—enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say. The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi. Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the dropoff of energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion. “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.” Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission. Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say. 
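The $200,000 figure quoted above is easy to reconstruct with rough, assumed inputs; the capacity factor and electricity price below are illustrative assumptions, not values given by the researchers:

```c
/* Rough reconstruction of the quoted revenue estimate.  The capacity factor
 * and wholesale price are assumed illustrative values, not the authors'. */
#include <stdio.h>

int main(void) {
    const double plant_mw        = 150.0;  /* installation size, MW            */
    const double capacity_factor = 0.25;   /* assumed average for desert solar */
    const double price_per_mwh   = 60.0;   /* assumed wholesale price, USD     */
    const double loss_fraction   = 0.01;   /* the 1 percent output loss        */

    double annual_mwh = plant_mw * capacity_factor * 24.0 * 365.0;
    double lost_usd   = annual_mwh * loss_fraction * price_per_mwh;
    printf("annual output: %.0f MWh, revenue lost to a 1%% drop: $%.0f\n",
           annual_mwh, lost_usd);          /* about 328,500 MWh and $197,000 */
    return 0;
}
```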
“The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

Other groups have tried to develop electrostatic-based solutions, but these have relied on a layer called an electrodynamic screen, which uses interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful on a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

The new system they developed only requires an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge applied to a transparent conductive layer just a few nanometers thick, deposited on the glass covering of the solar panel, then repels the particles; by calculating the right voltage to apply, the researchers were able to find a voltage range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.

Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.” Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly. “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.
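A back-of-the-envelope force balance shows why the particle charge and the humidity-dependent water layer matter so much. In this sketch, every number (particle size, density, induced charge, Hamaker constant, contact separation) is an assumed illustrative value, not one reported by the MIT team:

```c
/* Force balance for one charged dust grain: it lifts off when the
 * electrostatic force q*E exceeds gravity plus van der Waals adhesion.
 * All numbers are assumed, illustrative values. */
#include <stdio.h>

int main(void) {
    const double PI  = 3.14159265358979;
    const double r   = 10e-6;     /* particle radius: 10 micrometres          */
    const double rho = 2650.0;    /* density of silica-like dust, kg/m^3      */
    const double g   = 9.81;      /* gravity, m/s^2                           */
    const double q   = 5e-15;     /* assumed charge picked up, coulombs       */
    const double A   = 1e-19;     /* Hamaker constant, joules (typical order) */
    const double z0  = 4e-10;     /* assumed contact separation, metres       */

    double mass       = rho * 4.0 / 3.0 * PI * r * r * r;
    double f_gravity  = mass * g;
    double f_adhesion = A * r / (6.0 * z0 * z0);   /* sphere-plane van der Waals */
    double e_required = (f_gravity + f_adhesion) / q;  /* field so that qE wins  */

    printf("gravity   : %.2e N\n", f_gravity);
    printf("adhesion  : %.2e N\n", f_adhesion);
    printf("field needed to lift the grain: %.2e V/m\n", e_required);
    return 0;
}
```

With these assumed numbers, adhesion exceeds gravity by roughly four orders of magnitude, which is why how much charge the electrode can induce on each grain, and how the adsorbed water layer changes that charging, dominates the problem.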
Exercise for kids is more important today than ever, both to prevent childhood obesity and to optimize cognitive development. The latter is a complex process that depends on several factors, including nutrition, social interaction, and physical activity. This multifactorial process relies heavily on the parents' commitment, as they can make a big difference in how they raise their child and, consequently, how well the child's cognitive abilities develop. The good news is that parents don't need any special talent or financial investment to achieve good results. All they have to do is provide a loving environment that helps their child's brain cells connect with one another, a process known as synapse formation. The formation of new connections between neurons depends on the release of certain signaling molecules, such as brain-derived neurotrophic factor (BDNF). In this piece, we will cover some scientific sources that detail the importance of exercise for kids.

Why is it important for your kids to exercise? Today, kids are more sedentary than ever. Spending hours in front of TVs, computers, and smartphones leaves little actual time for exercise. Therefore, one of the effective ways to motivate them to exercise is by limiting their screen time. According to the American Academy of Pediatrics (AAP), parents should:
- Limit the screen time of their children (e.g., TV, video games, TikTok, social media)
- Only allow 1 hour of screen time for children aged 2 to 5 years old
- Educate their kids about the importance of staying active
- Engage with their kids in fun activities that stimulate the body (e.g., jogging, outdoor games)
- Turn off screens during mealtimes to prevent overattachment to these devices

How Much Exercise Is Enough? As a parent, you should encourage your kids to be active every day. The Physical Activity Guidelines recommend that school-age kids get at least 60 minutes of moderate to intense physical activity per day; examples include strength training and cardiovascular exercise. Experts also recommend including strengthening activities on at least 3 days per week. For younger kids, there are no precise recommendations; however, aiming for about 3 hours of moderate activity spread over the day may be appropriate. To keep things fun, the activities should include planned, adult-led physical activity as well as unstructured active free play.

How to raise active kids. Here are a few tips to help your kids stay active:
- Encourage them to engage in age-appropriate exercises
- Set and follow up with a regular schedule of physical activity
- Cultivate the habit of staying active from an early age
- Embrace a healthier lifestyle yourself to be a role model for the family
- Engage in group activities as a family
- Make sure to keep it fun

Keeping your kids active from a young age will teach them the importance of this habit, which will yield benefits throughout their lives. We hope that this article helped you understand and appreciate the potential benefits of exercise for kids. Here's to your health!

Click for more information about Dr. Kevin Crawford, Orthopedic Surgeon
Dr. Kevin Crawford
For a large part of the state’s history, the legislature was constitutionally and politically the dominant branch of the government. The 1777 constitution ignored the separation of powers and created a constitutional order that revolved around the legislative branch. That dominance faded as the nineteenth century wore on, however, as a series of restraints was placed on the legislative branch in response to legislative abuses. In the twentieth century, the demand for governmental reorganization and leadership in government gave the executive branch the dominant...
Philemon Wright — Ottawa River Pioneer

You wouldn’t know it, but some of the wood used in Riverwood Acoustics Hudson speakers is over 300 years old! The story of how it got there begins with the start of the logging trade on the Ottawa River. At the beginning of the 19th century, Great Britain, affected by the wars ravaging Europe and feeling the effects of Napoleon’s blockade, was unable to access its normal timber supply in the Baltic. This forced it to lean heavily on Canada to supply the vast amount of timber needed for its navy. The demand helped grow the surrounding region and spur development along the river, turning the Ottawa River into a major trade artery through which timber flowed from the wilds of Canada to foreign markets.

Philemon Wright is credited with bringing the first raft of timber down the river from the Ottawa Basin to Quebec around 1806. They christened the raft “Colombo,” and the journey took around three months to complete because the raft repeatedly broke apart and had to be navigated through rapids in certain sections of the river. The rafts themselves were made from the lumber they carried, most of it squared timber: the logs were cut square, lashed together into a larger structure, and topped with huts for the workers to live in while they made the journey to Quebec, where the lumber was dried and shipped out. Every spring saw such rafts dotting the many tributaries of the Ottawa River as lumberjacks made their way down to cash in on their product. People called them “stick-men” because when the rafts became stuck they would spring up, long sticks in their hands, and try to push the raft away from whatever was stopping it.

The entire process from start to finish was painstaking and arduous. The lumberjacks spent most of their time in isolated camps far from civilization: cutting down the trees in Canada’s hinterlands, sawing them down, making the lumber into rafts and then making the treacherous journey through rapids and waterfalls to Quebec. The journey wasn’t a pleasant cruise; sometimes they even had to disassemble the raft to get around obstacles, then reassemble it on the other side. Often the rafts would break apart, claiming the lives of the men piloting them. The profits these men were chasing were substantial: around 1850, a single raft would be worth approximately $12,000, and by the end of the century, as timber supplies diminished, prices rose to around $100,000 for a single raft. Adjusted for inflation, $12,000 in 1860 would be around $346,998, and $100,000 in 1890 would be around $2,637,439. Certainly the payout awaiting the men at the end of their journey seemed worth the peril.

Not all of the logs made it to their destination. Some fell to the bottom of the river when the rafts broke apart, or because the wood they were transporting became heavy with water and sank. There at the bottom of the riverbed they became covered in sediment, with the pressure helping to preserve them. It’s these logs that Riverwood uses for its speakers, but first they have to be recovered from the river. Have you ever heard of a logging company that doesn’t cut down trees? Well, now you have: founded in 1997, Logsend specializes in retrieving those lost bits of Canadian history. They’re the number one exporter of old-growth timber in Canada and do so in a completely sustainable fashion. Dredging up wood from the bottom of a river that in some parts is 402 feet deep is no simple task.
First, trained scuba divers plunge into the icy waters to find and mark the logs. Each log has flotation devices tied to it and is lifted off the riverbed to avoid disrupting the surrounding environment. A tugboat then comes, hooks the logs, lifts them just enough so they’re not dragging along the riverbed, and brings them to shore. There they’re picked out of the water and sent to be processed. The logs are sawn and set out to air-dry, which can take anywhere from three months to a year. The air-dried lumber is then sent to a dry kiln and fashioned into a wide array of products, such as custom hardwood flooring, moulding, and the paneling that you see on Riverwood’s own speakers.

The recovered logs are considered old-growth, meaning they had the opportunity to grow uninterrupted for hundreds of years. The growth rings of old-growth wood are tighter, resulting in an ultra-dense wood cabinet that produces an unparalleled tone. Here at Riverwood Acoustics, we’re proud to partner with a company dedicated to helping the environment through sustainable manufacturing methods. Our priority when designing the Hudson was to utilize local manufacturing to ensure quality and local jobs and to minimize our carbon footprint. By working with Logsend, we’ve taken one step closer to achieving that goal.

If you’re looking to own a piece of history, take a look at our Hudson speakers. Available in either a natural or walnut finish, the Hudson produces an unrivaled, crisp sound. Countless hours went into perfecting the design, style and acoustics so that you will enjoy them for a lifetime. Riverwood Acoustics has committed to providing sound for the Arnprior museum exhibit that will showcase some treasures from the historic logging drive. The new exhibit is going to be ready in the early summer and should be on everyone’s must-do list. Historic, Artful, Unrivaled.

Photo credit: Arnprior/McNab archives
Article by: Johnathan Larkey
The number of people, including children, living with HIV keeps growing in the Russian Federation and other countries in Eastern Europe and Central Asia, the only region where HIV prevalence remains on the rise. This folder contains ten leaflets covering the following themes: an overview of the HIV and AIDS epidemic; HIV and AIDS in South Asia; the HIV and AIDS epidemic in Pakistan; what is HIV and what is AIDS; and sources or causes of transmission of HIV and AIDS. This booklet is addressed to youth, particularly students. It contains basic information about HIV and AIDS, modes of transmission, precautionary measures against HIV infection, what young students should know about their health, adolescence issues, and life skills.
Promoting the sustainable development of marine environments requires planning, just as we have long had spatial planning for land-based activities. Now researchers from the University of Gothenburg and elsewhere are showing that marine planning must take climate change into consideration, something it does not currently do. The researchers' models show that changes to temperature and salt content may impact ecosystems and species as much as all other effects on the environment combined.

Symphony is a digital tool that has existed for the past few years. It uses GIS maps that show the distribution of important ecosystems and species along Sweden's coastlines and how environmental disturbances, such as nutrient pollution, boat traffic and fishing, affect them in different areas. The maps are meant to guide priority setting and various measures for public authorities and others that work with marine planning. One problem with the current version of Symphony is that it does not consider how the climate will change in the future. Now researchers in the ClimeMarine project have studied what happens when the expected changes in temperature and salt content are added to the tool. "It showed that the anticipated climate changes will increase the total environmental impact by at least fifty per cent, and in some areas, as much as several hundred per cent," says Per Jonsson, researcher at the University of Gothenburg and co-author of the study.

Maps reveal where climate change has the most impact

The GIS maps show how the effects of climate change vary across areas. "It's a clear sign that we may need to reduce other impacts to lower the total rate of impact in some areas. For example, in areas with valuable eelgrass meadows, we might consider rerouting a shipping lane or slowing the expansion of marinas and leisure boating," says Jonsson. The tool also makes it possible to identify areas expected to experience less climate impact, such as so-called upwelling areas like those off the island of Gotland, where deep cold water rises and cools the water at the surface. Such areas can function as climate refuges, where sensitive species can survive. "Marine reserves may be considered to protect these areas, where we 'remove' other factors that have an impact. Sweden has committed to establishing several new protected marine areas, and Symphony can help identify where they should be located."

We need more research on how ecosystems and species react

Per Jonsson notes that these types of forecasts naturally have weaknesses. The mathematical models used to calculate future temperatures and salt content are continuously being developed and improved, and we do not know what will happen with our carbon dioxide emissions in the future; this is a political issue that is difficult to assess. "We also need to better understand how sensitive different ecosystems and species are to climate change. We need experimental studies that show what happens when the temperature rises and salt content decreases." Even so, he is confident about the importance of climate change for the future of marine environments. "What we present in the study can be viewed as informed guesses based on the information we currently have. But the effects of a changed climate clearly must be incorporated into marine planning."

Wåhlström, I., et al. (2022) Projected climate change impact on a coastal sea—As significant as all current pressures combined. Global Change Biology. doi.org/10.1111/gcb.16312
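As a footnote, the bookkeeping behind cumulative-impact tools like Symphony can be sketched very simply: for each grid cell, every pressure's intensity is multiplied by the presence of each ecosystem component and by a sensitivity weight, and the products are summed. The following C sketch uses made-up numbers (they are not Symphony's data or the study's results) just to show how adding a climate pressure raises the total:

```c
/* Simplified cumulative-impact bookkeeping for one grid cell (made-up
 * numbers, not Symphony's data): total impact = sum over pressures and
 * ecosystem components of intensity * presence * sensitivity. */
#include <stdio.h>

#define N_PRESSURES  4
#define N_COMPONENTS 2

int main(void) {
    const char *pressure_names[N_PRESSURES] =
        {"nutrients", "shipping", "fishing", "climate (temp+salinity)"};
    double pressure[N_PRESSURES]  = {0.6, 0.4, 0.5, 0.7};  /* intensities, 0..1  */
    double presence[N_COMPONENTS] = {1.0, 0.5};            /* e.g. eelgrass, cod */
    double sensitivity[N_PRESSURES][N_COMPONENTS] = {      /* made-up weights    */
        {0.5, 0.3},
        {0.2, 0.1},
        {0.1, 0.6},
        {0.6, 0.5},
    };

    double total_without_climate = 0.0, total_with_climate = 0.0;
    for (int p = 0; p < N_PRESSURES; p++) {
        double contribution = 0.0;
        for (int e = 0; e < N_COMPONENTS; e++)
            contribution += pressure[p] * presence[e] * sensitivity[p][e];
        printf("%-25s %.3f\n", pressure_names[p], contribution);
        total_with_climate += contribution;
        if (p < N_PRESSURES - 1)
            total_without_climate += contribution;
    }
    printf("total without climate: %.3f\n", total_without_climate);
    printf("total with climate:    %.3f  (+%.0f%%)\n", total_with_climate,
           100.0 * (total_with_climate / total_without_climate - 1.0));
    return 0;
}
```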
Blue is a very prominent color. But why do only very few plants have blue, and far fewer animals? Let's try to explain it in simple terms. First, let's understand the spectrum of natural light. The blue region of the light spectrum has a short wavelength and high energy, so those are the waves plants mainly absorb from natural light for photosynthesis. (Not gonna go into detail about photosynthesis here.) That's why you would rarely see plants with blue leaves. Put simply, the colors you see on anything under natural light are the colors reflected by that surface, with the rest of the colors absorbed. Hence you don't see blue being reflected by the leaves.

Then how are there certain plants that do have blue flowers? Most plants create the color blue by mixing up naturally occurring pigments (by varying the acidity). That's how plants like hydrangea and morning glory create blue-colored flowers.

Now, what about animals? Animals may appear blue for many reasons, including to attract the opposite sex during courtship or, more importantly, to warn about their toxicity (aposematism), like the blue poison dart frog. But would you believe that none of these animals, including your favorite emoji butterfly (the blue morpho), is really blue? The color is an optical illusion. Usually, the pigments on animals come from the food they eat; for example, flamingos get their pink color from the tiny crustaceans they eat. But due to the lack of naturally blue food, animals do not create blue pigmentation. Going back to blue morpho butterflies (or blue tigers, which are commonly found in Sri Lanka), they make the blue color with scales on their wings shaped in a way that only the wavelength of blue light is reflected. If their scales were shaped differently, the blue would vanish.

Then what about the birds? Similar to butterflies, birds have tiny microscopic structures on their feathers; as light goes through a series of reflections in them, only the blue wavelength escapes and the other wavelengths are canceled out. Think of how a prism splits natural light into different wavelengths to separate the colors, and of how a noise-cancelling headphone works. If you look at a blue-colored bird carefully from different angles, you will notice that the depth of the blue changes based on the angle you are looking from.

So does blue pigmentation ever occur naturally? Yes! As far as my knowledge goes, THE ONLY animal with true blue pigmentation is the obrina olivewing butterfly (Nessaea obrinus). If blue is so hard to make naturally, then why is blue so popular? I suppose the answer is in the question itself. Until recent synthetic color-making technologies, blue was the rarest color, so anything associated with blue was considered privileged. Think of royal blue and of the most expensive old paintings done in blue (for example, Picasso's Blue Period).

So to answer our previous poll question on Instagram, were those feathers of birds truly blue (pigmented)? The answer is NO. It's only an optical illusion.
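For readers who want the physics in one line: structural blues like these come from interference in stacks of thin, regularly spaced layers. In the idealized textbook picture (a simplification, not a measurement of morpho scales or feather barbs), a stack of layers with refractive index n and spacing d reflects strongly at wavelengths satisfying

```latex
2\,n\,d\,\cos\theta = m\,\lambda, \qquad m = 1, 2, 3, \ldots
```

The cos θ term is why the hue shifts as you tilt a feather or a wing: changing the viewing angle changes which wavelength interferes constructively, which is exactly the angle dependence described above.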
These organisms can cause a whole host of issues, one of which is bacterial infection of the blood. The severity of the infection is determined by a number of factors, including the infecting organism, the nature of the injury where the bacteria entered, and how vigorous the host is, among others. In "Todar's Online Textbook of Bacteriology," Dr. Kenneth Todar says that staph bacteria cause one of the more common types of infection because they live fairly easily on human skin. In most cases, when the skin is broken by a small scrape or cut, the wound heals itself without much worry about staphylococcal bacteria. If not treated properly, cuts that go deep into the skin may develop an infection known as cellulitis. One of the more common sources of staph infections is cuts made during surgery or other medically necessary procedures. Because it has been present in hospitals for so long, an antibiotic-resistant form of staph infection known as MRSA has developed. The Centers for Disease Control and Prevention warn that this strain of the bacterium has shown resistance to many common antibiotics, including amoxicillin and penicillin, which makes treating these infections much more difficult.

The Centers for Disease Control and Prevention report that Sporothrix schenckii, the fungus that causes the infection sporotrichosis, is most commonly found in plant materials such as hay, mulch, or rose bushes. Individuals performing yard work often stick themselves with sharp thorns or sticks, which can lead to exposure to the fungus. According to the CDC, the incubation period for the infection can be anywhere from one to twelve weeks. It begins as a discolored lump at the infection site (perhaps red or purple). More of these irregularities appear as the infection spreads and reaches the lymph nodes, and the lesions can burst, ooze, and become painful. In a person with a weak immune system, the infection can spread to other places within the body, including the organs.

Vibrio vulnificus is a salt-water-dwelling bacterium found in the southeastern United States. It is a life-threatening bacterium that can infect people who have been swimming in contaminated waters. Dr. Michael Bross, in a 2007 article in "American Family Physician," states that Vibrio vulnificus infection develops quickly as cellulitis with additional flu-like symptoms. If left untreated, the infection can spread to the bloodstream. Unfortunately, about 1 in 6 people who become infected by Vibrio vulnificus die from complications of the infection.

- Todar's Online Textbook of Bacteriology: Staphylococcus aureus and Staphylococcal Disease
- Centers for Disease Control and Prevention: Overview of Healthcare-associated MRSA
- Centers for Disease Control and Prevention: Sporotrichosis
- Merck Manual for Healthcare Professionals: Sporotrichosis
- "American Family Physician"; Vibrio Vulnificus Infection: Diagnosis and Treatment; Michael H. Bross, M.D., et al.; August 2007
Theses, research papers, dissertations, and any other type of academic writing need to be cited using different citation styles. Students and researchers commonly use the APA, MLA, and Chicago citation styles for citing sources. Citing sources is an important part of a research paper: with proper citation, you protect yourself from plagiarism and show the relevance of your work. However, some students get confused about the differences between citation styles. Therefore, in this blog, you will get a complete style guide and learn how to cite a research paper in different formats.

A citation style helps students and researchers format their papers in a specific way, so it is an essential part of the paper whenever you cite another researcher's work. Citation styles also help readers locate particular sources, and each is published in an official handbook that contains examples, explanations, and instructions. The main purposes of citations are to:
- give credit to the original author of an idea or finding
- help readers locate the sources you used
- protect your own work from plagiarism

The citation style you select will depend on the discipline in which you are writing. In most cases, however, the professor assigns a citation style, so consult them first and then start writing the research paper. When you cite a source, you should mention a few main things:
- the author's name
- the title of the work
- the date of publication
- the publisher or journal
- the page numbers or URL, where applicable

Keep these things in mind when you cite sources. The information you add remains the same in all citation styles; only the order in which it is presented differs.

When you write a research paper, you must cite the sources you use in your paper. The main citation styles used in academic writing are APA, MLA, Chicago, ASA, IEEE, and Harvard. The basic information is the same in all of them; only the format for presenting it changes. Let us discuss these referencing styles in detail.

APA Format

APA is the most common research paper format, and it is generally used in psychology and social science papers. However, it is not limited to those disciplines. It is not a complex citation style, and students normally use it to format their research papers. Some general APA formatting rules to follow: double-space the text, use 1-inch margins, use a readable font such as 12-point Times New Roman, and number every page in the header. An APA-style research paper also typically includes a title page, an abstract, the main body, and a reference list. Here are the APA formats for some common kinds of sources:

| Source | APA format |
| --- | --- |
| Website | Author. (Year, Month Date of Publication). Article title. Retrieved from URL |
| Book | Author. (Year of Publication). Title of work. Publisher City, State: Publisher |
| Magazine | Author. (Year, Month of Publication). Article title. Magazine Title, Volume(Issue), pp.-pp. |
| Newspaper | Author. (Year, Month Date of Publication). Article title. Newspaper Title, pp. xx-xx |

In-text citations direct readers to the reference entries at the end of the paper. They are used whenever you describe another author's work or ideas. An APA in-text citation has two major elements: the author's last name and the year of publication; for direct quotations, you also add the page number. For example: (Alexa, 1993, p. 45). APA in-text citations come in two forms, parenthetical and narrative. In a parenthetical citation you write (Alexa, 2010), while in a narrative citation you write Alexa (2010). Take a look at this sample to get a better idea of an APA-format research paper: APA Format Sample Paper

MLA Format

MLA is another common research paper format, and it is mainly used in the humanities and liberal arts subjects.
This style is mainly used by scholars, researchers, and students to format their writing assignments. When you write a paper in MLA format, use the title "Works Cited" for your reference list. Here are some formatting rules that you should know when drafting a research paper in MLA format; keep these rules in mind and format the MLA-style research paper effectively. The following are the MLA formats for different kinds of sources: |Website||Last Name, First Name. "Page Title." Website Title. Sponsoring Institution/Publisher. Publication Date: Page Numbers. Medium.| |Book||Last Name, First Name. Book Title. Publisher City: Publisher Name, Year Published. Medium.| |Magazine||Last Name, First Name. "Article Title." Magazine Name Publication Date: Page Numbers. Medium.| |News Paper||Last Name, First Name. "Article Title." Newspaper Name Publication Date: Page Numbers. Medium.| |Blog Posts||Last Name, First Name. "Title of the blog post." Blog Title, Publisher, date posted, URL.| |Tweet||@twitterhandle. "Description." Twitter, date, time. URL.| |Last name, First name. "Description." Website Title, date, URL.| Check this sample and write a research paper in MLA format without any difficulty: MLA Format Sample Paper. Several researchers also use the Chicago style, and it is normally used in the humanities and in book bibliographies. In this citation style, writers cite sources in endnotes and footnotes, which allows them to give proper credit for the references used in the content. The following are the Chicago-style formatting rules that every student or researcher should follow. For your help, we also gathered Chicago-style citations for sources of different kinds: |Website||Last Name, First Name. "Page Title." Website Title. Web Address (retrieved Date Accessed).| |Article||Last Name, First Name. "Article Title." Journal Name Volume Number (Year Published): Page Numbers.| |Magazine||Last Name, First Name. "Article Title." Magazine Title, Month Date, Year of publication.| |News Paper||Last Name, First Name. "Article Title." Newspaper Name, Publication Date.| |Youtube Video||Last name, First name. "Title of Video." YouTube video, length. Date published. URL.| |Tweet||Last name, First name. Twitter Post. Month Day, Year, Time. Tweet URL.| |Instagram Post||Author(s) of the post (@handle). "Text of the Instagram post." Instagram, Date of posting. URL.| |Blog Posts||First name Last Name, "Title of Blog Post," Blog Title (blog), Month Date, Year of Post, URL.| |Book||Last Name, First Name. Title of Book. Publisher City: Publisher Name, Year Published.| Also, look at this sample for help with your research paper: Chicago Style Sample Paper. The American Sociological Association created this citation style, and scholars use it for writing sociology research papers. The main elements of an ASA paper are the title page, abstract, main body, and references. Here are some formatting tips to help you write a research paper in ASA format. Below are the ASA citation style guidelines for citing different kinds of sources: |Website||Author's Last, First Name. Date of Publishing. Title. Publisher. Retrieved Month Day, Year (link).| |Magazine Article||Author's Last, First Name. Year of Pub. "Title." Magazine Name, Month Year, pp. Inclusive page numbers.| |Book||Author's Last, First Name. Year of Publication. Title. Country of publisher: Publisher.| |Newspaper Article||Author Last name, First name. Year of publication. "Title of Article." Title of Newspaper (italicized), Month date, pp. Page Numbers.|
Here is an ASA sample paper for your ease: ASA Sample Paper. The IEEE citation style is used in electronics, computer science, engineering, and IT. This citation style is the official style of the Institute of Electrical and Electronics Engineers. Also, an IEEE citation has three main parts: the author's name, the title of the work, and the publication information. Like all other citation styles, the format, page numbers, and other information differ depending on the type of source cited. The following table shows how to cite different kinds of sources in your research paper. |Handbooks||Name of Manual/Handbook, x ed., Abbrev. Name of Co., City of Co., Abbrev. State, year, pp. xxx-xxx.| |Conference Paper||J. K. Author, "Title of paper," presented at the Abbreviated Name of Conf., City of Conf., Abbrev. State, Country, Month and day(s), year, Paper number.| |Lectures||J. K. Author. (Year). Title of lecture [Type of Medium]. Available: URL| |Periodicals||J. K. Author, "Name of paper," Abbrev. Title of Periodical, vol. x, no. x, pp. xxx-xxx, Abbrev. Month, year.| |Reports||J. K. Author, "Title of report," Abbrev. Name of Co., City of Co., Abbrev. State, Country, Rep. xxx, year.| |Thesis and Dissertation||J. K. Author, "Title of thesis," M.S. thesis, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.| |Website||First Name Initial(s) Last Name. "Page Title." Website Title. Web Address (retrieved Date Accessed).| |Books||J. K. Author, "Title of chapter in the book," in Title of His Published Book, xth ed. City of Publisher, (only U.S. State), Country: Abbrev. of Publisher, year, ch. x, sec. x, pp. xxx–xxx.| Here is an example to help with your IEEE citation-style research paper: IEEE Citation Example. The Harvard citation style is commonly used in economics, and it is based on an author-date system; in Harvard style, the body of your paper contains in-text citations. Here are some tips that you should follow when writing a research paper in Harvard style. However, some institutes have their own rules for Harvard style, so check them first and then start writing. The table below shows different kinds of sources and their Harvard-style format for your research paper. |Book||Author A and Author B (year) Book Title. Place: Publisher name.| |Journal Articles||Author Surname, Initial. (Year) Title of article. Title of journal, Vol. no. (Part no./Issue/Month), Pages, use p. or pp.| |Thesis or Dissertation||Author Surname, Initial. (Year). Title. Designation (Level, e.g., MSc, PhD.), Institution.| |Report||Author Surname, Initial(s) or Corporate Author., (Year of publication). Title of report. Paper number [if applicable]. Place of Publication: Publisher.| Here is an example to help you understand this citation style even better: Harvard Style Paper. Whichever style you use, follow its handbook carefully when citing sources in your research paper. You now have a complete guide to the different citation styles, but if you still need a professional writer's help with your research paper, simply consult CollegeEssay.org. Our skilled writers have extensive experience writing college essays, theses, journals, articles, research papers, and other academic assignments. So contact us now and get a professional writer's help with your college essay assignments. As a Digital Content Strategist, Nova Allison has eight years of experience in writing both technical and scientific content.
With a focus on developing online content plans that engage audiences, Nova strives to write pieces that are not only informative but captivating as well.
Member of an American Indian people who lived in North Carolina until the early 18th century. Their language belongs to the Iroquoian family. Hemp was gathered for rope and medicine, and later traded with the Europeans for rum, which they sold to neighbouring peoples. They rose against the colonists in 1711 but were defeated in 1713, and the majority moved north. In 1722 they were the last to join the Iroquois confederacy. After splitting allegiance during the American Revolution, they separated to reservations in New York and Ontario, Canada, where most now live. The Southern Band are descendants of those who remained in North Carolina and Virginia. The Tuscarora were originally divided into three distinct groups: the hemp gatherers, the people of the pines, and the people of the water. Society was matrilineal (descent being traced through the maternal line), and consisted of clans that belonged to one of two moieties, or divisions. Each of the clans was headed by a clan mother. They lived in towns of round bark-covered lodges, subsisting on maize supplemented by hunting. When first contacted by Europeans in 1708, the Tuscarora were living across most of North Carolina, and had 15 towns and an estimated population of 6,000, including 1,200 warriors. European colonists kidnapped many of their people and sold them as slaves, and also stole their lands, leading them to raid the colonists in 1711 under the leadership of Chief Hancock. The settlers retaliated and, in 1713, the Tuscarora were defeated with the aid of reinforcements from South Carolina. Some 1,000 Tuscarora were either killed or captured and sold into slavery. Most of the survivors migrated north in 1714, and eventually settled on Oneida territory in New York State. In 1722 the 1,600 remaining Tuscarora joined the Iroquois confederacy, although they were not allowed voting rights. During the American Revolution the Tuscarora split their allegiance between the colonists and the British. Those who supported the British were granted lands on Grand River reservation in Ontario along with the Mohawk, while those who supported the Americans were settled on reservations in New York State, although some were removed to Indian Territory (Oklahoma) in 1846. The Southern Band Tuscarora remained in North Carolina and were placed on the Indian Woods Reservation in 1717. However, settlement began to encroach on these lands, and the Tuscarora were coerced into illegal land deals. Efforts to preserve reservation lands by law in 1748 and 1777 proved ineffective, and in 1803 a group of chiefs sold the rest of the reservation lands and migrated to New York State. The remaining Tuscarora and their chiefs dispersed, many joining other Indian groups or moving to the mill towns. They began to regroup in the late 20th century, and now have a membership of some 250 (2001). Land was purchased within the original boundaries of Indian Woods through donations. They have reorganized into seven clans, and plan to establish a cultural centre and museum.
Barbara McClintock (1902-1992) was a scientist and cytogeneticist who discovered the "jumping gene," using maize as a model organism. She received a B.S. in 1923 from Cornell's College of Agriculture, an M.S. in botany in 1925, and finished her Ph.D. in the new field of cytogenetics in 1927. To continue her research on corn chromosomes, she remained at Cornell, where she met Harriet Creighton in 1929. McClintock and Creighton worked together to prove that chromosomal crossover occurs in corn and that it increases genetic variation in the species. She studied the effects of X-rays on corn chromosomes and their ability to cause mutations, which led to her discovery of translocations, inversions, deletions, and ring chromosomes in corn. She developed the first genetic map for maize, linking chromosomes to physical traits, and showed that genetic information can be suppressed between generations. She taught at the University of Missouri for five years but felt that she would never be promoted there, so she left to join Marcus Rhoades and Milislav Demerec at Cold Spring Harbor Laboratory, where she discovered the process of transposition (jumping genes) in corn chromosomes, the switching on or off of physical traits during reproduction. She stayed at Cold Spring Harbor until her death in 1992. She was the only woman to receive an unshared Nobel Prize in Physiology or Medicine, awarded in 1983 for her discovery of genetic transposition.
A powerful computing tool allows scientists to compare two of the most hazardous events in recent U.S. history. The summer of 1988 saw much of the United States bake in a heat wave that resulted in the worst season of drought since the 1930s. Five years later in 1993, drenching rains in the spring and summer caused historic flooding in Midwest states. A complex combination of weather and climate factors gave rise to these opposing catastrophes. To better understand these events, scientists used an integrated record of Earth's weather and climate over the past three decades that combines computer modeling with a broad collection of satellite and meteorological observations. This dataset, called the Modern Era Retrospective-Analysis for Research and Applications (MERRA), allows scientists to pinpoint the unique conditions behind the 1988 drought and 1993 floods. The pattern of air circulation over the middle of the country in 1988 blocked the typical movement of moisture from the Gulf of Mexico northward. Conversely, 1993 saw an extreme intensification of winds pushing rain clouds north from the Gulf of Mexico. In the visualizations below, look for the differences in this recreation of wind patterns over the United States from May 1 to July 31 in 1988 and 1993.
Every day we face situations that call for our spatial abilities. Locating yourself in the street, analysing diagrams and plans, playing video games or picturing how things might look from another angle all involve our spatial reasoning abilities. At its most basic, spatial reasoning involves the ability to create, understand and differentiate spatial patterns, along with drawing conclusions and solving problems based on these visualizations. REASONING - SPATIAL measures the spatial reasoning abilities of an individual and offers a reliable estimation of their ability to combine concepts based on projective skills. These abilities are of most importance in the areas of STEM (science, technology, engineering, mathematics), but they can also be useful in other professions, as spatial intelligence goes beyond cognitive skills, providing a more comprehensive view of ideas and concepts. The REASONING - SPATIAL assessment is recommended for recruitment to positions in STEM domains in particular, but also for jobs that require mental manipulation of 2D and 3D objects. People with high spatial abilities tend to be more comfortable interpreting diagrams and graphs, or generating abstract and schematic mental images. Selection for Universities / Business Schools Nowadays, student potential is evaluated mainly on the basis of verbal and numerical abilities, and spatial skills are often ignored. As a result, students with excellent spatial skills who do not perform well on verbal or numerical tests are at a disadvantage. Thus, STEM courses such as those in the engineering and physical sciences are missing talented students during admissions. Recent studies have shown that spatial ability can play an important role in the development of creative thinking and innovation. By detecting and cultivating these skills, universities and business schools may offer better opportunities to improve student innovation potential. Target groups: STEM (science, technology, engineering, mathematics) and non-STEM professions such as marketing or design. Questionnaire: 8 questions. Time: 16 minutes maximum (timed). Languages: English and French. Evaluates a candidate's general intelligence (IQ). Uses original and varied questions. Provides detailed solutions to the questions (on demand). Spatial intelligence includes different dimensions. REASONING - SPATIAL measures an essential dimension of spatial intelligence: mental rotation. This ability requires very good mental visualization skills, since it involves mentally representing a complex object and repositioning it in time and space. Mental rotation is useful in everyday life and in many professional fields, for example in interior architecture, sculpture or pottery, and even in haute couture. General description of reasoning abilities. Comparison between STEM and non-STEM populations. Test solutions (this feature is activated on demand).
All three species of vampire bat live in Central and South America: the common vampire bat (Desmodus rotundus), the hairy-legged vampire bat (Diphylla ecaudata), and the white-winged vampire bat (Diaemus youngi). It's the nose of the common vampire bat (Desmodus rotundus) that concerns us here. These bats belong to the family Phyllostomidae, one of three families of leaf-nosed bats (Rhinolophidae and Megadermatidae being the other two families). One of the exceptional skills mediated by this nose makes use of the same receptor that makes our mouths burn when we eat chili peppers. Vampire bats can detect the hot blood in your veins from far away! It's the noseleaves of the vampire bat that are so amazing, but maybe we should include the rest of the bat head as well. The ears, teeth, mouth, and eyes all work with the nose to give this bat some jet fighter skills. Leaf-nosed bats come in some very odd varieties. The picture below and to the right will give you some idea of the shapes and sizes possible. The question is – what's the reason for these bizarre growths and why isn't one odd shape enough? The answer will best be found if we know what their function is, because in biology – form follows function (except for proteins, see this post). Two basic needs of the bat are to find food and find its way. Whether it's a fruit bat, an insectivorous bat, or a vampire bat, a bat must be able to negotiate obstacles within its environment and find a source of nutrition. To accomplish these tasks, especially given that most bats are nocturnal, they use echolocation. They send out a high-pitched sound, and it bounces off objects and returns to their ears. This is very much like the radar used in airplanes. But this isn't all they use. Bats can see just about as well as humans; the phrase "blind as a bat" might as well be "blind as a Bob." According to a 2010 study, the leaves aren't used to gather the returning sound, but to focus the outgoing sound so that the "pictures" formed by the returning echoes will be most accurate. Different shapes help to increase the difference in the reflectivity of objects in the area of focus as opposed to those in the periphery. This allows the various species to home in on what they need to discern and dismiss those things that are uninteresting. Different backgrounds and different needs require different noseleaf shapes. This answers the question about the wild shapes of noses, but it brings up another question. If vampire bats find their food by echolocation, sight, and smell, then why do they have heat sensors? To answer this new question, consider the sizes of the vampire bat and its intended prey. The bat weighs about 2.5 oz (71 g), but it needs blood for food (mammalian blood for common vampire bats, bird blood for hairy-legged and white-winged species). In fact, vampire bats are the only mammals that completely depend on hematophagy (blood meals). Because of this, they often feed on animals that are over 1000x their size. The bats need to locate a place on the sleeping animals where blood vessels are near the surface. This is where the heat sensing comes into play. Vessels close to the surface will give off the most heat to the environment, and vampire bats can "see" these vessels from up to 20 cm away! The vessels in question need to be in spots covered with less hair, so the bat almost always goes for the lower leg or snout. They will land on the ground, and walk or run up to the prey from behind the animal to make the bite.
Vampire bats' wings are much stronger than those of most other bats, so they have an easier time moving along the ground, supporting some of their weight on their wings. Instinct tells the vampire that a good feeding once will probably mean a good feeding again – if it can find the same animal. So how do they find the same animal several nights in a row? They hear them. A 2006 study showed that vampire bats do tend to feed on the same individual (be it human or cow) for several nights in a row. They can distinguish their previous victim by the sounds of its breathing! Every animal has a unique breathing pattern and sound profile, and the vampire bat can distinguish between individuals to find the one that matched a previous good meal. Imagine if we could find our favorite meal again by listening for the clinking of the right pans! Returning to a good feeding spot each night, the vampire bat searches for a surface vessel to drink about a teaspoon or two of blood (4-5 ml). This isn't enough to harm the animals, and is what allows them to go back several nights in a row. How do vampire bats locate that ankle vessel they need to feed on? Back we go to that amazing nose. The heat sensors of bats are called pit organs, just like in the pit vipers we talked about last week. There are three to four of these organs in the noseleaves of the bat, and a couple across the upper lip as well. As opposed to the pit vipers, vampire bats have adapted a heat sensor, not a cold sensor, to use as their infrared detector for blood vessels. TRPV1, the same receptor that is used for the capsaicin burn and heat regulation in mammals, is present in very high numbers in the neurons of the pit organs. But this is no ordinary TRPV1. Mammals can't detect heat from 20 cm away with a regular TRPV1 – this is a modified TRPV1. A 2011 study found that this version of the protein is missing the last three amino acids on the carboxy terminus (the end produced last). This small change increases the sensitivity of the receptor from 43˚C all the way down to 30˚C, so that small differences in heat can be noted from almost a foot away. One more amazing fact - the bats have regular TRPV1 too. The two versions of the protein come from the same gene, and the normal one is used throughout the bat's body for all the things we use TRPV1 for: heat regulation, reproduction, cancer inhibition, etc. Only in the neurons of the pit organs is the mRNA altered after it is transcribed from the gene (alternatively spliced) to make the slightly shorter, more sensitive protein. In the clotting cascade, draculin and desmolaris act early to prevent a clot from forming, while desmoteplase acts later to dissolve clots that have already formed. The bats' mouths have specialized salivary glands that make anticoagulants so no clot is formed. One anticoagulant, which someone with a sense of humor named draculin, acts to prevent blood clot formation. We have mentioned a second anticoagulant before, called desmoteplase. One of our Halloween posts talked about how it may be good for people who have had strokes: it dissolves any clots that may form. A 2014 clinical trial is showing that desmoteplase is better than the tissue plasminogen activator clot-busters now being used (rtPA), since it has a half-life of four hours (as opposed to 5 minutes for rtPA) and its breakdown products aren't as toxic to nerves and the blood-brain barrier as rtPA's are. A newer anticoagulant is called desmolaris.
A 2013 study showed that it works on yet another part of the clotting system to prevent clot formation. And this isn't all of them. A 2014 protein survey suggests that there may be dozens more anticoagulant proteins in vampire bat saliva. Which flying machine is more complex and cool? Let's add up the vampire bat's technologies and compare them to an F16. The bat can fly and turn better. The bat has sonar (its biological radar) and infrared heat detection. It has high-powered listening devices that can discriminate between two individuals. Finally, it has biological weapons that allow it to do its work without alarming the target. All that in a "machine" that can fit into the palm of your hand. Defense aeronautical engineers must feel so embarrassed. Next week, let's take it just a bit further. Female mosquitoes aren't just looking for you, they're tasting and feeling for you. They use CO2 gradients as well as my prodigious heat to find me on a warm picnicking evening.
Last updated: December 1st, 2020 Vitamin C is commonly known for its role in supporting the immune system, particularly for preventing infections such as the common cold. But did you know that the health effects of vitamin C range from collagen formation to prevention of ocular diseases? Continue reading to learn about vitamin C, including its functions in the body, health benefits, recommended intake levels, and the best sources of vitamin C. What is vitamin C? Vitamin C, also known as L-ascorbic acid, is a water-soluble vitamin and essential micronutrient. As humans, we cannot produce vitamin C in the body, so it must be obtained from dietary sources, primarily fruits and vegetables. Vitamin C is absorbed in the small intestine and high concentrations are stored in the brain, adrenal glands, pituitary gland, eyes, leukocytes (a type of white blood cell), (1) and skin. (12) What does vitamin C do? In the body, vitamin C supports immune health, acts as an antioxidant, and assists in collagen synthesis. (11) Vitamin C functions as a cofactor for enzymes involved in regulating gene transcription and synthesizing certain neurotransmitters and hormones. (3) In the immune system, vitamin C supports the function of the epithelial barrier, the cells that provide a physical barrier against pathogens. Vitamin C may enhance the function of various white blood cells, resulting in the destruction of microbes. (4) As an antioxidant, vitamin C can neutralize free radicals that may be caused by certain environmental factors, such as exposure to environmental pollutants and ultraviolet (UV) radiation. (12) Vitamin C may further enhance antioxidant function in the body by regenerating (recycling and repairing) other antioxidants such as vitamin E. (2)(11) Vitamin C also enhances the absorption of non-heme iron, a form of iron that is less readily absorbed in the gastrointestinal tract. (8) Health benefits of vitamin C Research has examined the effects of vitamin C on various health conditions and processes in the body, including cardiovascular disease, cancer, ocular diseases, immune function, cognitive health, blood sugar management, and skin health. A systematic review concluded that vitamin C deficiency is associated with a higher risk of cardiovascular disease (CVD) mortality. In certain groups, including individuals with low vitamin C levels, vitamin C supplementation may improve endothelial function and blood pressure levels, two indicators of cardiovascular health. However, the authors noted that there is insufficient evidence to support vitamin C supplementation for CVD risk reduction in all individuals. (10) Researchers have sought to understand the protective effects of vitamin C against cancer. Vitamin C supplementation may improve well-being, reduce pain, and increase survival in cancer patients. Specifically, high-dose intravenous (IV) administration of vitamin C was associated with increased survival in advanced cancer patients when compared to controls. (6) Along with other antioxidant vitamins and minerals, vitamin C may slow the progression of advanced age-related macular degeneration and the loss of visual acuity (sharpness) in individuals showing signs of the condition. Vitamin C may also have beneficial effects on the development of cataracts and diabetic retinopathy, however, further studies investigating these effects are needed. 
(6) Vitamin C is known to support cellular functions of the innate and adaptive immune systems, and a deficiency in the vitamin may increase the risk of infections. Vitamin C supplementation is commonly used to prevent and treat upper respiratory infections. (4) Research suggests that with ongoing supplementation of vitamin C, the duration of the common cold may be shortened in both adults and children, and the severity of symptoms may be significantly reduced. (7) To learn about immune-supportive foods, visit the Fullscript blog. Studies have consistently found lower vitamin C levels in individuals who are cognitively impaired when compared to healthy individuals. (14) In the nervous system, vitamin C may be used to improve neurotransmission, a process involved in learning, memory, and movement. The use of vitamin C supplementation has been examined in neurodegenerative conditions, and animal studies suggest it may reduce the incidence of Alzheimer’s disease. (6) Blood sugar management Vitamin C supplementation may improve blood glucose (blood sugar) levels in individuals with diabetes. A meta-analysis of trials examining the use of vitamin C supplementation in individuals with type 2 diabetes found that vitamin C was associated with reduced levels of fasting blood glucose. Interestingly, the findings suggested that other antioxidants (with or without the use of vitamin C) had insignificant effects on blood sugar markers. (13) Vitamin C plays a variety of roles in supporting healthy skin, including promoting collagen formation and neutralizing damaging free radicals, particularly when used in combination with vitamin E. Vitamin C derivatives, such as magnesium ascorbyl phosphate, may decrease the synthesis of melanin (skin pigment), which may be helpful in addressing age spots and melasma, a condition characterized by dark, discolored skin patches. (12) Topical vitamin C serums and moisturizers are widely available and may be effective in reducing some of the visible signs of aging, including dark spots and fine lines. (12) If you have sensitive skin or other skin conditions, speak to a healthcare practitioner before adding a vitamin C serum to your skincare routine. Learn about the top supplements for skin health on the Fullscript blog. Recommended daily intake of vitamin C Individual requirements for vitamin C vary based on factors including age, gender, health status, and environmental exposures. The following table summarizes the daily recommended dietary allowances (RDAs) and adequate intake (AI) of vitamin C. RDAs are the average daily intake that will meet the nutrient requirement of most healthy individuals (over 97%). As an RDA has not been established for infants, the AI is listed for children below one year. (11) Signs and symptoms of a vitamin C deficiency Vitamin C deficiency may result in a number of health symptoms and complications, such as impaired immune health and increased vulnerability to infections. (4) A lack of vitamin C compromises the formation of collagen, which in turn impairs the integrity of collagen-containing structures in blood vessels, bones, mucous membranes, and skin. (1) Within eight to 12 weeks of insufficient vitamin C intake, individuals may develop scurvy, a clinical syndrome of vitamin C deficiency. Scurvy is characterized by several symptoms including swollen gums, poor wound healing, hemorrhage (internal bleeding from ruptured blood vessels), and hyperkeratosis (skin thickening). 
(9) Certain factors may increase the risk of vitamin C deficiency, including: - Alcoholism or anorexia (1) - Being elderly (1) - Certain health conditions (e.g., malabsorption, certain forms of cancer, individuals with end-stage renal disease on chronic hemodialysis) (11) - Certain medications (e.g., aspirin, corticosteroids, indomethacin, oral contraceptives, tetracyclines) (1) - Infants fed boiled or evaporated milk (11) - Liver transplant (1) - Smoking and second-hand smoke (11) - Unvaried or restricted diets (e.g., due to food fads, food allergies) (11) Additionally, individual needs for vitamin C may increase due to factors such as air pollution, infections, and conditions characterized by inflammation and oxidative stress (e.g., type 2 diabetes, (4) arthritis, asthma). (1) Sources of vitamin C Regular dietary intake of vitamin C is required to maintain health and prevent deficiency of the nutrient. Whole food sources of vitamin C provide additional nutrients and phytochemicals, such as bioflavonoids, which may increase the nutrient’s bioavailability (proportion of the vitamin that is circulated for use). (3) When an individual’s need for vitamin C is increased or intake through dietary sources is insufficient, vitamin C supplementation may be considered. Foods high in vitamin C Vitamin C is abundant in several fruit and vegetables, including: - Bell peppers - Brussels sprouts - Citrus fruits (e.g., orange, grapefruit) - Tomatoes and tomato juice (11) If you’re trying to get more of this vitamin in your diet, it’s important to note that heat may destroy vitamin C. In order to preserve vitamin C, consider consuming your produce raw or steamed, as opposed to broiled, grilled, or roasted. (11) Vitamin C supplements Scientific literature has proposed that the RDAs for vitamin C may not meet bodily needs and that optimal health may require vitamin C supplementation. Vitamin C can be supplemented orally or intravenously when higher doses are required. (5) Research suggests that synthetic vitamin C supplements may have comparable bioavailability to vitamin C found in food. A review examined the effects of synthetic vitamin C, food-derived vitamin C, and the combination of vitamin C and bioflavonoids in human trials. The study findings suggest that the intake of vitamin C in tablets, capsules, and liquid solution was comparable to the food sources used in the trials (e.g., kiwi, orange juice, broccoli, raspberries). Additionally, the study included one relevant human trial involving supplementation of vitamin C with bioflavonoids which showed the combination supplement had a comparable bioavailability to vitamin C on its own. (3) While no serious adverse effects of vitamin C intake have been consistently observed, research suggests that the tolerable upper limit for vitamin C intake is approximately 2 g per day. With intravenous use, adverse effects may include flushing, headaches, nausea, and dizziness. (1) The most commonly reported adverse effect from oral vitamin C supplementation is gastrointestinal distress, such as diarrhea and abdominal cramps, which may occur at higher doses due to its osmotic effect in the large intestine. (5)(11) The bottom line It’s important to include vitamin C-rich foods or supplements in your diet to obtain this essential nutrient. The health benefits of vitamin C have been observed for cardiovascular disease, cancer, ocular diseases, immune function, cognitive health, blood sugar levels, and skin health. 
If you’re a patient, speak to your integrative healthcare practitioner prior to making any changes in your health or supplement plan. - Abdullah, A., Jamil, R. T., & Attia, F. N. (2019). Vitamin C (ascorbic acid). In StatPearls. Treasure Island, FL: StatPearls Publishing. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK499877/ - Buettner, G. R. (1993). The pecking order of free radicals and antioxidants: Lipid peroxidation, α-tocopherol, and ascorbate. Archives of Biochemistry and Biophysics, 300(2), 535–543. - Carr, A. C., & Vissers, M. C. (2013). Synthetic or food-derived vitamin C–are they equally bioavailable? Nutrients, 5(11), 4284–4304. - Carr, A. C., & Maggini, S. (2017). Vitamin C and immune function. Nutrients, 9(11), 1211. - Deruelle, F., & Baron, B. (2008). Vitamin C: Is supplementation necessary for optimal health? The Journal of Alternative and Complementary Medicine, 14(10), 1291–1298. - Grosso, G., Bei, R., Mistretta, A., & Marventano, S. (2013). Effects of vitamin C on health: A review of evidence. Frontiers in Bioscience, 18(3), 1017–1029. - Hemilä, H. (2017). Vitamin C and infections. Nutrients, 9(4), 339. - Lynch, S. R., & Cook, J. D. (1980). Interaction of vitamin C and iron. Annals of the New York Academy of Sciences, 355(1 Micronutrient), 32–44. - Maxfield, L., & Crane, J. S. (2019). Vitamin C deficiency (scurvy). In StatPearls. Treasure Island, FL: StatPearls Publishing. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK493187/ - Moser, M. A., & Chun, O. K. (2016). Vitamin C and heart health: A review based on findings from epidemiologic studies. International Journal of Molecular Sciences, 17(8), 1328. - National Institutes of Health Office of Dietary Supplements. (2019, July 9). Vitamin C fact sheet for health professionals. Retrieved from https://ods.od.nih.gov/factsheets/VitaminC-HealthProfessional/ - Pullar, J. M., Carr, A. C., & Vissers, M. (2017). The roles of vitamin C in skin health. Nutrients, 9(8), 866. - Tabatabaei-Malazy, O., Nikfar, S., Larijani, B., & Abdollahi, M. (2015). Influence of ascorbic acid supplementation on type 2 diabetes mellitus in observational and randomized controlled trials; A systematic review with meta-analysis. Journal of Pharmacy & Pharmaceutical Sciences, 17(4), 554. - Travica, N., Ried, K., Sali, A., Scholey, A., Hudson, I., & Pipingas, A. (2017). Vitamin C status and cognitive function: A systematic review. Nutrients, 9(9), 960.
Good terms that describe the opposite of integrity include dishonesty, wickedness and immorality. Integrity is best described as conduct that conforms to an accepted standard of right and wrong, absolute devotion to telling the truth and faithfulness to high moral standards. Integrity usually refers to a quality of a person's character, and any person said to be acting with integrity is usually being honest and noble or is perhaps making a personal sacrifice for the greater good. Words describing the opposite of integrity are therefore used to refer to base behavior, such as lying, deceiving others for personal gain, stealing or behaving in an immoral manner that should be a cause of personal shame. Ethics influence human behavior by helping people make informed decisions and affecting the way they relate to other people. Ethics also determine how seriously individuals take their roles. Moral behavior is extremely subjective, but it is generally represented by an individual's knowledge of social and cultural norms and the capacity to perform good works through selfless actions. Some moral behaviors may include honesty, giving to charity and avoiding negative situations. In its application to law and sentencing, moral reconciliation is a form of restorative justice in which wrongdoers attempt to repair the relationship between themselves and their victims and, by doing so, achieve a moral transformation. In contrast to punishment which is solely retributive or compensatory, moral reconciliation attempts to restructure the wrongdoer-and-victim relationship to one based on a workable degree of moral equality. Moral reconciliation, forgiveness and wrongdoer rehabilitation are also viewed as ideals to strive for, which may only be achieved in varying degrees depending on the parties involved. Most psychologists and researchers agree that ethics can be taught, as did Socrates some 2,500 years ago, because ethics requires knowing what a person should do, and that knowledge can be shared. When it comes to moral development in human beings, the Harvard psychologist Lawrence Kohlberg conducted research showing that a person can still grow from a moral and ethical standpoint later in life.
The overall aim of the curriculum is to enable pupils to articulate their ideas and to express these in written form, with a focus on spelling and handwriting. At Parklands, we teach our children to become effective writers through the following skills. At Parklands we want to raise the profile of vocabulary and develop it across the school. Each class has a set of key story books that are rich in vocabulary. Each half term a book is chosen and time is spent teaching a bank of words from the story. Activities are then carried out that week relating to the new vocabulary to develop children's understanding of the words. In KS1, new vocabulary learnt is kept in a class vocab book for children to refer to and look at. Ways you can support your child at home: sharing and reading stories and talking about the meaning of words with your child is crucial; and just because your child might be able to decode and read a word correctly doesn't mean they understand what that word means, so never presume and always discuss vocabulary you come across in stories. Please see below for our most up to date policies.
If you've accidentally taken a sip of sea water or had to gargle with salt water, you've probably realized that freshwater and saltwater have some pretty important differences. These differences exist not only when these waters are liquid, but also when they freeze. In this experiment, we will look at one major difference between frozen freshwater and frozen saltwater. As you read in the Frozen Life companion story, when sea ice forms, freshwater freezes and leaves behind a concentrated salt solution called brine. This brine is found in pockets throughout the ice. Brine pockets allow organisms that get trapped in the ice to avoid freezing and survive until the next spring. The pockets are small and isolated in winter, but in spring, as the ice begins to warm, the brine pockets get bigger and combine with other pockets to form channels which allow the organisms to move throughout the ice. You can explore the differences in channels between seasons in our channel maze. In this experiment we are going to compare the difference between regular freshwater ice (the kind you would put in your drink) and sea ice. To do this we will create fresh and saltwater ice, then put a couple of drops of dye on each type of ice and compare what happens. What do you think is going to happen? Do you think the dye will act the same in both ice types? This should ideally be done over two days. The first four steps in the Procedure should be done on Day 1, and the remaining steps should be completed on Day 2. Using your observations, decide whether your predictions were correct (what happened, and did the dye act the same in both ice types?). Make a conclusion about what you think is happening based on what you saw. Then, when you're done, you can move on to the next step. The way the food coloring disperses in the ice lets you see the difference between sea and freshwater ice. Take a look at these two ice core sections. The one on the left is ice made from fresh water, and the one on the right is ice made of salt water. What are some of the differences you notice in this image? Sea ice (on the right) forms brine channels which food coloring is able to penetrate. This turns the inside of the ice the same color as the dye. The freshwater ice (on the left) is solid all the way through and has no channels. This results in food coloring pooling on top or running down the side of the ice. Kyle Kinzler is a graduate student at Arizona State University and is working on his master's thesis focusing on Arctic sea ice algae with Dr. Susanne Neuer. To learn more about Dr. Neuer's research, visit What Lies Beneath. Kyle Kinzler. (2014, July 15). When Water Gets Icy. ASU - Ask A Biologist. Retrieved February 18, 2019 from https://askabiologist.asu.edu/experiments/when-water-gets-icy
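A quick back-of-the-envelope calculation helps explain why the brine in those pockets stays liquid while the surrounding freshwater ice freezes solid. The sketch below is illustrative only and is not part of the original experiment: it assumes an ideal solution, approximates sea salt as pure NaCl with a van 't Hoff factor of about 2, and uses water's cryoscopic constant of 1.86 °C·kg/mol, so the numbers are rough estimates rather than measured values.

```python
# Rough estimate of freezing-point depression for salty water.
# Assumptions: ideal solution, salt approximated as pure NaCl (i ~ 2),
# Kf for water = 1.86 degC*kg/mol. Real brine is chemically more complex.
KF_WATER = 1.86          # cryoscopic constant, degC * kg / mol
MOLAR_MASS_NACL = 58.44  # g / mol
VANT_HOFF_NACL = 2       # NaCl dissociates into roughly 2 ions


def freezing_point(salinity_g_per_kg: float) -> float:
    """Approximate freezing point (degC) of water containing the given
    grams of NaCl per kilogram of water."""
    molality = salinity_g_per_kg / MOLAR_MASS_NACL        # mol salt / kg water
    depression = KF_WATER * VANT_HOFF_NACL * molality     # degC below 0
    return -depression


if __name__ == "__main__":
    for salinity in (0, 35, 100, 200):   # 35 g/kg is roughly typical seawater
        print(f"{salinity:>3} g/kg NaCl -> freezes near {freezing_point(salinity):6.1f} degC")
```

The takeaway is that as freshwater freezes out of sea ice, the leftover brine becomes saltier and its freezing point keeps dropping, which is why liquid pockets persist inside sea ice and why dye can seep through it but not through freshwater ice.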
What is ZigBee - ZigBee is a specification for a suite of high-level communication protocols using small, low-power digital radios, based on an IEEE 802 standard for personal area networks (PAN). - ZigBee is a cost- and energy-efficient wireless network standard targeted at radio-frequency (RF) applications. It employs a mesh network topology, allowing it to provide high reliability and a reasonable range. - ZigBee is the global wireless language connecting dramatically different devices to work together and enhance everyday life. ZigBee Use Areas - Home & Building: air-conditioning and temperature control systems, automatic control of lighting and curtains, gas metering and control, remote control of appliances, etc.; - Energy: industrial control and agricultural control, various kinds of monitoring, automatic sensor control; - Health: patient monitoring, fitness monitoring, medical sensors, and emergency pagers; - Telecom: PCs and peripherals. Wireless Serial Port: ZigBee can be thought of as a wireless serial port. At least two ZigBee modules are needed for them to communicate with each other. ZigBee modules are connected to a computer through a serial port. Although most adapters in use have a USB interface, the computer still treats them as a serial port. Send data to a specific address: Each ZigBee module has an address, and it talks to a specific destination address. The destination address can be the broadcast address, in which case all modules in the same network can receive the data. Grouped by PAN ID: A ZigBee network can be as simple as just two modules. It is also possible for the network to contain a coordinator plus many routers and end devices. The most important thing is that they must share the same PAN ID. AT and API: AT mode is transparent; raw data is sent and received as-is. In API mode, data is wrapped in packets with an API header followed by a checksum. AT mode is easy to use, while API mode gives more control over the transfer.
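To make the AT/API distinction above more concrete, here is a minimal sketch in Python. It assumes a Digi XBee-style module (the hardware family where the AT/API terminology is most common), the pyserial package for the serial link, and the XBee "Transmit Request" (type 0x10) frame layout; the port name, destination address, and payload are hypothetical values for illustration, and other ZigBee modules may use a different framing.

```python
# Minimal sketch of the two ZigBee module modes described above.
# Assumptions (not from the original text): a Digi XBee-style module on
# /dev/ttyUSB0 at 9600 baud, and the XBee Transmit Request (0x10) API frame
# layout. Adjust port, baud rate, and addresses for real hardware.
import serial  # pyserial


def at_mode_send(port: str, payload: bytes) -> None:
    """AT (transparent) mode: raw bytes go out exactly as written."""
    with serial.Serial(port, 9600, timeout=1) as ser:
        ser.write(payload)


def api_frame(frame_data: bytes) -> bytes:
    """Wrap frame data in an XBee-style API frame:
    0x7E | length (2 bytes) | frame data | checksum."""
    length = len(frame_data).to_bytes(2, "big")
    checksum = (0xFF - (sum(frame_data) & 0xFF)).to_bytes(1, "big")
    return b"\x7e" + length + frame_data + checksum


def transmit_request(dest64: bytes, payload: bytes, frame_id: int = 1) -> bytes:
    """Build a Transmit Request (type 0x10) addressed to a 64-bit destination.
    0xFFFE means the 16-bit network address is unknown; an all-0xFF 64-bit
    address would broadcast to every module sharing the PAN ID."""
    frame_data = (
        bytes([0x10, frame_id])
        + dest64                # 64-bit destination address
        + b"\xff\xfe"           # 16-bit destination address (unknown)
        + b"\x00"               # broadcast radius (0 = maximum hops)
        + b"\x00"               # options
        + payload
    )
    return api_frame(frame_data)


if __name__ == "__main__":
    # Hypothetical destination address and payload, for illustration only.
    frame = transmit_request(bytes.fromhex("0013A20012345678"), b"hello")
    print(frame.hex())
    # at_mode_send("/dev/ttyUSB0", b"hello")  # uncomment with real hardware
```

In AT (transparent) mode the module forwards raw bytes to its configured destination, so the code only needs to write the payload; in API mode the frame itself carries the 64-bit destination address, which is what gives API mode its finer control over addressing and delivery.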
The substrate is the basis of the waters where algae can grow and develop properly. The distribution and density of marine algae in a body of water depend on the type of substrate, the season, and the species composition. According to Mubarak and Wahyuni (1981), the types of substrate that can be covered by marine algae are sand, mud and rubble. The best type of substrate for the growth of marine algae is a mixture of sand, rocks and rubble. On soft substrates such as sandy and muddy bottoms, many kinds of marine algae such as Halimeda sp., Caulerpa sp. and Gracilaria sp. will be found, while bottoms with hard substrates such as live coral, rocks and rubble will host many types of marine algae such as Sargassum sp., Turbinaria sp., Ulva sp. and Enteromorpha sp. Nontji (1993) stated that the fewest marine algae are found in waters with a sand or mud bottom, due to the limited number of hard objects sturdy enough to serve as places of attachment. The chemical composition of the substrate does not affect the life of marine algae; the substrate serves only as a place of attachment at the bottom of the waters. The marine alga Eucheuma sp. grows best on rocky bottoms. Dissolved oxygen is very important because it is needed by aquatic organisms. Dissolved oxygen is generally found in the surface layer, where oxygen gas from the adjacent air dissolves (diffuses) into the water. Phytoplankton also help increase the amount of dissolved oxygen in the surface layer during the daytime. This addition is caused by the release of oxygen gas as a result of photosynthesis. The solubility of oxygen in the sea is very important in influencing the chemical equilibrium of sea water and also the life of its organisms. Oxygen is needed by aquatic animals and plants, including bacteria, for respiration. The DO quality standard for seaweed is more than 5 mg/l (Sulistijo and Atmadja, 1996); this means that if the dissolved oxygen in the waters is at least 5 mg/l, the metabolism of seaweed can run optimally. Buesa (1977) in Iksan (2005) states that daily oxygen changes may occur in the sea and can significantly affect benthic algal production. Fortunately, there is usually enough oxygen for the metabolism of algae (Chapman, 1962 in Iksan, 2005).
Founder Jan Sedláček got interested in the history of hygiene by reflecting on its massive influence on society: "Lack of hygiene and therefore infectious diseases were the biggest reason for mortality in the past, not wars. Especially the inadequate dealing with excrements caused millions of deaths." The aim of the museum is to present the artistic and utilitarian qualities of the objects themselves and the field of human waste disposal, and to present to the public aspects of this neglected part of human culture. The exhibition illustrates the development of the toilet up to the present day and showcases hundreds of historical pisspots, which differed according to the 'class of population' they were used by. There are special pisspots for men, women and children, and the visitor can also find various designs and materials – from glass and porcelain to stone, metal or plate. The exhibits are ordered chronologically and sometimes contain references to special historical events – for example, the exhibition shows a pisspot that was used on the famous Titanic. Visitors can also find a "washiki" – a historical Japanese toilet made out of porcelain. The "washiki" is used by squatting over it – a position that is in fact generally agreed to be much better for our health and makes splashing-down far easier. Why do we stick to the common, less healthy design of our toilets without considering alternatives and getting inspired by other cultures? That is also a question the museum wants to raise. Next to extraordinary design, Sedláček is especially interested in the curious stories accompanying the toilets and pisspots. "Interesting are the so-called 'Bourdalou' which are named after the French preacher Louis Bourdaloue. It is said that in the France of the late 17th century his preachment was so exciting that the ladies couldn't hold it anymore, but also didn't want to miss any of his words. A servant then passed them one of these oval pots, in which they could empty their bladder, covered by their long and wide skirts." The first water closet was constructed by Sir John Harrington in 1596 for the Queen of England. But it would still take quite a few centuries until toilets with a flush became a natural accoutrement in houses - for a long time, Europeans fancied toilet-tables in which the closet was hidden. Jan Sedláček's dream is to constantly extend his collection. The next big achievement for him would be to exhibit a spaceship toilet – another proof of how the design of the toilet constantly adjusts to human desires and visions.
Published: June 5, 2013 by Akio Matsumura Contaminated water is posing a new problem at the Fukushima site. Tepco must continue to cool the irradiated fuel rods, but has not devised a permanent and sustainable disposal process for the highly radioactive contaminated water that results. While they have a process that can remove much of the radiation from the water, some elements like tritium – a carcinogen – cannot be removed and are concentrating at levels much higher than is legal. Tepco wants to release the water into the Pacific Ocean in order to dilute the tritium to legal levels, but fishermen skeptical of the power company oppose the move. Meanwhile, Tepco is storing the contaminated water in tanks. Unsurprisingly, those tanks are leaking (NYT). They admit they will eventually run out of space for the storage tanks. Management of the contaminated cooling water has come to be the most demanding and dangerous issue that Tepco has faced since 2011. According to The Japan Times (excerpted): How Water Becomes Radioactively Contaminated by Gordon Edwards, Ph.D. (1) When nuclear fuel is used in a nuclear reactor or an atomic bomb, the atoms in the fuel are "split" (or "fissioned") to produce energy. The fission process is triggered by subatomic particles called neutrons. In a nuclear reactor, when the neutrons are stopped, the fission process also stops. This is called "shutting down the reactor." (2) But during the nuclear fission process, hundreds of new varieties of radioactive atoms are created that did not exist before. These unwanted radioactive byproducts accumulate in the irradiated nuclear fuel — and they are, collectively, millions of times more radioactive than the original nuclear fuel. (3) These newly created radioactive materials are classified as fission products, activation products, and transuranic elements. Fission products — like iodine-131, cesium-137 and strontium-90 — are the broken pieces of atoms that have been split. Activation products — like hydrogen-3 ("tritium"), carbon-14 and cobalt-60 — are the result of non-radioactive atoms being transformed into radioactive atoms after absorbing one or more stray neutrons. Transuranic elements — like plutonium, neptunium, curium and americium — are created by transmutation after a massive uranium atom absorbs one or more neutrons to become an even more massive atom (hence "transuranic," meaning "beyond uranium"). (4) Because of these intensely radioactive byproducts, irradiated nuclear fuel continues to generate heat for years after the fission process has stopped. This heat ("decay heat") is caused by the ongoing atomic disintegration of the nuclear waste materials. No one knows how to slow down or shut off the radioactive disintegration of these atoms, so the decay heat is literally unstoppable. But decay heat does gradually diminish over time, becoming much less intense after about 10 years. (5) However, in the early years following a reactor shutdown, unless decay heat is continually removed as quickly as it is being produced, the temperature of the irradiated fuel can rise to dangerous levels — and radioactive gases, vapors and particles will be given off into the atmosphere at an unacceptable rate. (6) The most common way to remove decay heat from irradiated fuel is to continually pour water on it. Tepco is doing this at the rate of about 400 tons a day. That water becomes contaminated with fission products, activation products and transuranic elements.
Since these waste materials are radiotoxic and harmful to all living things, the water cannot be released to the environment as long as it is contaminated. (7) Besides the 400 tons of water used daily by Tepco to cool the melted cores of the three crippled reactors, another 400 tons of ground water is pouring into the damaged reactor buildings every day. This water is also becoming radioactively contaminated, so it too must be stored pending decontamination. (8) Tepco is using an "Advanced Liquid Processing System" (ALPS) that is able to remove 62 different varieties of radioactive materials from the contaminated water — but the process is slow, removal is seldom 100 percent effective, and some varieties of radioactive materials are not removed at all. (9) Tritium, for example, cannot be removed. Tritium is radioactive hydrogen, and when tritium atoms combine with oxygen atoms we get radioactive water molecules. No filtration system can remove the tritium from the water, because you can't filter water from water. Released into the environment, tritium enters freely into all living things. (10) Nuclear power is the ultimate example of the throwaway society. The irradiated fuel has to be sequestered from the environment of living things forever. The high-quality materials used to construct the core area of a nuclear reactor can never be recycled or reused but must be perpetually stored as radioactive waste. Malfunctioning reactors cannot be completely shut off because the decay heat continues long after shutdown. And efforts to cool a badly crippled reactor that has melted down result in enormous volumes of radioactively contaminated water that must be stored or dumped into the environment. No wonder some have called nuclear power "the unforgiving technology." Nine Medical Implications of Tritium-contaminated Water by Helen Caldicott, M.D. (1) There is no way to separate tritium from contaminated water. Tritium, a soft beta emitter, is a potent carcinogen which remains radioactive for over 100 years. It concentrates in aquatic organisms including algae, seaweed, crustaceans and fish. Because it is tasteless, odorless and invisible, it will inevitably be ingested in food, including seafood, over many decades. It combines in the DNA molecule – the gene – where it can induce mutations that later lead to cancer. It causes brain tumors, birth deformities, and cancers of many organs. The situation is dire because there is no way to contain this radioactive water permanently and it will inevitably leak into the Pacific Ocean for 50 years or longer, along with many other very dangerous isotopes, including cesium-137, which lasts for 300 years and causes very malignant muscle cancers (rhabdomyosarcomas), and strontium-90, which is also radioactive for 300 years and causes bone cancers and leukemia, amongst many other radioactive elements. (2) All cancers can be induced by radiation, and because much of the land in Fukushima and beyond is contaminated, the food – tea, beef, milk, green vegetables, rice, etc. – will remain radioactive for several hundred years. (3) "Cleanup" is a misnomer: radioactively contaminated soil, timber, leaves, and water cannot be decontaminated, only moved to another site, which then becomes contaminated in turn. (4) Incineration of radioactive waste spreads the cancer-inducing agents to other areas, including non-contaminated areas of Japan. (5) Cancers have a long incubation period – 2 to 80 years after people eat or breathe radioactively contaminated food or air.
(6) The IAEA says that decommissioning of these reactors will take 50 to 60 years, and some people predict that this mess will never be cleaned up and removed.

(7) Where will Japan put this highly radioactive melted fuel, fuel rods and the like? There is absolutely no safe place to store this deadly material (which must be isolated from the biosphere for one million years, according to the US EPA) on an island that is riven by earthquakes.

(8) As these radioactive elements continually seep into the water and the ocean and are emitted into the air, the incidence of congenital deformities, cancer and genetic defects will inevitably increase over time and into future generations.

(9) Children are 10 to 20 times more sensitive to the carcinogenic effects of radiation than adults (little girls are twice as sensitive as boys), and fetuses are thousands of times more sensitive – one X-ray to the pregnant abdomen doubles the incidence of leukemia in the child.
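To put the storage problem in rough numbers, here is a small back-of-the-envelope sketch in Python. It only combines figures quoted above (400 tons per day of cooling water plus 400 tons per day of ground water) with tritium's well-known half-life of about 12.3 years; the 1,000-ton tank capacity is an assumed illustrative value, not a figure from the article.

```python
# Rough arithmetic on Fukushima's contaminated-water problem,
# using the figures quoted in the article above.

cooling_water_t_per_day = 400   # tons/day poured on the melted cores (from article)
groundwater_t_per_day = 400     # tons/day of ground water leaking in (from article)
tank_capacity_tons = 1000       # assumed illustrative tank size, NOT from the article

daily_total = cooling_water_t_per_day + groundwater_t_per_day
yearly_total = daily_total * 365
print(f"Contaminated water accumulating: {daily_total} tons/day, ~{yearly_total:,} tons/year")
print(f"Storage tanks needed per year (at {tank_capacity_tons} t each): ~{yearly_total / tank_capacity_tons:.0f}")

# Tritium decays with a half-life of about 12.3 years, so even after a
# century a measurable fraction of it remains.
half_life_years = 12.3
for years in (12.3, 50, 100):
    remaining = 0.5 ** (years / half_life_years)
    print(f"Tritium remaining after {years:>5} years: {remaining:.2%}")
```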
Dryden Flight Research Center's Contributions to Apollo's Moon Landing Success

NASA's Flight Research Center at Edwards Air Force Base (renamed the Dryden Flight Research Center), generally thought of as an aeronautical flight-test facility in the 1960s, made a number of contributions to the NASA space program during that era as well. For example, researchers explored the concept of paraglider landings for a space vehicle and the use of wingless spacecraft that could glide to precise landings, but it was the X-15 hypersonic research program and the Lunar Landing Research Vehicle that had the most direct impact on the Apollo missions to the Moon.

The North American Aviation X-15 rocket planes--designed to explore the problems of atmospheric and space flight at supersonic and hypersonic speeds--served as flying laboratories, carrying scientific experiments above the reaches of the atmosphere. Many research results from the X-15 program at Dryden Flight Research Center contributed directly to the success of the Apollo lunar missions, now being celebrated on the 40th anniversary of the first moon landing on July 20, 1969. North American – later North American Rockwell, then Rockwell International – served as prime contractor for both the X-15 and the Apollo Command/Service Module spacecraft. Designers of the Apollo CSM drew upon experience from the X-15 program, and even used the X-15 as a test bed for new materials. Advanced titanium and nickel-steel alloys developed for the X-15 were used in the Apollo and later spacecraft designs. The discovery of localized hot spots on the X-15, for example, led to development of a bi-metallic 'floating retainer' concept to dissipate stresses in the X-15's windshield. This technology was subsequently applied to the Apollo and space shuttle orbiter windshields.

The X-15's performance allowed researchers to accurately simulate the aerodynamic heating conditions that the Apollo Saturn rocket would face, and allowed full recovery of test equipment, calibration of results, and repeated testing where necessary. In 1967, technicians applied samples of cryogenic insulation--designed for use on the Apollo Saturn V second stage--to the X-15's speed brakes to test the material's adhesive characteristics and response to high temperatures. X-15 re-entry experience and heat-transfer data were also valuable, and led to design of a computerized mathematical model for aerodynamic heating that was used in the initial Apollo design study. Lessons learned from X-15 turbulent heat-transfer studies contributed to the design of the Apollo CSM because designers found that they could build lighter-weight vehicles using less thermal protection than was previously thought possible.

Following President John F. Kennedy's 1961 challenge to land on the moon, two groups began working on a way to prepare astronauts for the critical descent and landing on the moon. The problems facing them were considerable: how to build a free-flying simulator that could negate 5/6ths of the Earth's gravity while entirely eliminating the effects of the atmosphere, since the moon has no atmosphere and only 1/6th of Earth's gravity. Ideas for this unique type of flying machine had begun circulating at Dryden Flight Research Center a year earlier. Center engineers initially didn't know that Bell Aircraft Company, later Bell Aerosystems, was also working on the task, but by the end of the year the center had awarded a study contract to Bell.
Bell was the only firm in the United States that had significant experience developing vertical takeoff aircraft using jet lift for takeoff and landing. After winning a contract from the center to design and build the machines in 1963, Bell delivered two Lunar Landing Research Vehicles, or LLRVs--often called 'flying bedsteads' due to their ungainly appearance--to the Flight Research Center in 1964 for flight testing and development.

The LLRV had a jet engine hung vertically in the middle of the frame, fixed inside two gimbals, allowing the vehicle itself to rotate as much as 40 degrees in any direction while the jet remained vertically aligned. A series of hydrogen peroxide thrusters, eight around the frame's center and four at each corner, provided lunar simulation thrust that the pilot controlled. Three analog computers took data on side forces and vehicle weight and produced just enough jet thrust so that, in lunar simulation, the LLRV descended as though in lunar gravity. When the computers sensed a gust of wind, they automatically fired thrusters to cancel its effect. There were no mechanical links between the pilot and the engine or thrusters: everything was sent to the computers, which, in turn, commanded the desired thrust.

During flight tests, a pilot directed the LLRV to climb about 300 feet, initiated lunar simulation mode, and then had less than eight minutes to complete a safe descent. Research flying over the next two-and-a-half years yielded a configuration suitable for astronaut training, and Bell subsequently built three similar craft--Lunar Landing Training Vehicles--that were sent to the Manned Spacecraft Center in Houston, now the Johnson Space Center. One of the LLRVs at the Flight Research Center was also sent to Houston for the training. Apollo 11 commander Neil Armstrong recalled later that his landing on the moon on July 20, 1969, felt like a familiar job because of the LLTV's authenticity.

As a side note, today's aircraft with fly-by-wire digital electronic control systems trace their lineage to the LLRV and its analog computers, and to the engineers who worked on that project. They cut their teeth on computer-controlled flight systems with the LLRV, giving them the confidence to modify an F-8 jet fighter into the first aircraft with pure digital fly-by-wire electronic controls. Partially restored by a movie company in the late 1990s, one of the two original Lunar Landing Research Vehicles remains on sheltered display today at NASA Dryden.

Peter Merlin and Christian Gelzer, Historians, NASA Dryden Flight Research Center
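As a rough illustration of the thrust bookkeeping described above (the gimballed jet supporting five-sixths of the vehicle's weight so that the unsupported one-sixth behaves like lunar gravity), here is a small sketch. The vehicle mass used is a made-up round number for illustration, not the LLRV's actual mass.

```python
# Illustration of the LLRV "lunar simulation" idea described above:
# the gimballed jet engine supports 5/6 of the vehicle's weight, so the
# unsupported 1/6 makes the craft accelerate downward as if on the Moon.

g_earth = 9.81          # m/s^2
g_moon = g_earth / 6.0  # the Moon's surface gravity is roughly 1/6 of Earth's
mass_kg = 1500.0        # assumed round-number vehicle mass, NOT the real LLRV figure

weight = mass_kg * g_earth
jet_thrust = (5.0 / 6.0) * weight   # carried by the vertically gimballed jet
unsupported = weight - jet_thrust   # left over for lunar-style free fall / lift rockets

print(f"Total weight on Earth:      {weight:8.0f} N")
print(f"Jet thrust (5/6 of weight): {jet_thrust:8.0f} N")
print(f"Remaining effective weight: {unsupported:8.0f} N")
print(f"Effective downward accel.:  {unsupported / mass_kg:5.2f} m/s^2 "
      f"(lunar gravity is about {g_moon:.2f} m/s^2)")
```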
Electroluminescence (EL) is the phenomenon in which electrical energy is converted to luminous energy without thermal energy generation. In our research group we explore so-called high field electroluminescent thin film materials. These materials are different in principle from standard light emitting diodes (LEDs) and diode lasers, where electrons and holes recombine to create light. In these high field EL materials, rare earth and transition metal ions are typically doped into a wide bandgap material. This phosphor layer is sandwiched between two insulators to limit the current and driven with an alternating current at high fields. The figure below shows mechanistically how these alternating current thin film electroluminescent (ACTFEL) devices work. The four main mechanisms are: 1) tunnel emission of electrons from interface states, 2) acceleration of electrons to high energies, 3) impact excitation or impact ionization of the luminescent center, and 4) de-excitation of the excited electron by radiative (photon generation) or non-radiative recombination. Also illustrated below is a standard thin film stack for the ACTFEL device (the total film stack is ~1 µm thick).

ACTFEL thin film stack
ACTFEL device mechanisms

Because these devices can be made thin (about the thickness of a glass sheet), this is a flat panel display technology. The figures below show a few different ACTFEL displays.

Figures of 2 flat panel ACTFEL displays (from Planar Systems Inc.)

Cathodoluminescence is a process whereby light is created from an energetic electron beam. This process is responsible for the light from cathode ray tubes (i.e., standard TV and computer monitors), and it is probably what powers your TV at home and the monitor on your desk (unless you have an LCD screen). The figure below illustrates how your TV works: an energetic electron beam is rastered across the screen and bombards individual pixels, which contain powder materials that give off light when the electron beam hits them.

Schematic of a standard cathode ray tube (CRT)

Because we always want things bigger and better, CRT displays have grown in size over the years. The problem is that as the display size increases, the depth also increases because of the electron optics needed to address the larger and larger screens. To save space, there is a push for flat panel display technologies to replace standard cathode ray tubes. An alternative technology which is fundamentally similar to CRTs is the so-called Field Emission Display (FED). A cross-section schematic of an FED is shown below, which illustrates that rather than having a single electron source addressing many pixels, an FED has many electron sources addressing a single pixel. Consequently, the depth needed to "steer" the electron beam is unnecessary and the display can be made "flat".

Schematic of a Field Emission Display (FED)

Field emission displays typically require lower operating voltages than a standard CRT display; however, most materials are not as efficient at lower electron beam energies. Consequently, we are exploring the synthesis of new materials for low voltage applications.
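Since the page stresses that ACTFEL devices operate at "high fields", a quick order-of-magnitude sketch of the field strength may help. The drive voltage and phosphor thickness below are assumed ballpark values chosen only for illustration, and the voltage-division assumption is a simplification, not a description of this group's devices.

```python
# Rough estimate of the electric field across an ACTFEL phosphor layer.
# All numbers below are assumed ballpark values for illustration only.

drive_voltage = 200.0          # volts applied across the stack (assumed)
phosphor_thickness_m = 0.7e-6  # ~0.7 micron phosphor layer (assumed)

# If, as a simplification, most of the applied voltage is taken to drop
# across the phosphor, the field there is roughly V / d.
field_v_per_m = drive_voltage / phosphor_thickness_m
field_mv_per_cm = field_v_per_m / 1e8   # 1 MV/cm = 1e8 V/m

print(f"Approximate field in phosphor: {field_v_per_m:.2e} V/m "
      f"(~{field_mv_per_cm:.1f} MV/cm)")
```

Fields on the order of megavolts per centimeter are what allow the injected electrons to gain enough energy to impact-excite the luminescent centers, which is why these devices are called "high field" emitters.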
Hemp helps detoxify and regenerate the soil

Falling leaves and shrubs not used in processing fall to the ground and replenish the soil with nutrients, nitrogen, and oxygen. This rich organic mulch promotes the development of fertile grassland. Some of the carbon which is "breathed" in by the plant in the form of CO2 is left in the roots and crop residues in the field. The CO2 is broken down by photosynthesis into carbon and oxygen, with the oxygen being released back into the atmosphere. With each season, more CO2 is removed from the air and added to the soil.

Hemp roots absorb and dissipate the energy of rain and runoff, which protects fertilizer and soil and keeps seeds in place. Hemp plants slow down the velocity of runoff by absorbing moisture. By creating shade, hemp plants moderate extreme variations in temperature, which conserves moisture in the soil. Hemp plants reduce the loss of topsoil in windy conditions. Hemp plants also loosen the earth for subsequent crops.

Hemp plants can even pull nuclear toxins from the soil. In fact, hemp was planted near and around the Chernobyl nuclear disaster site to pull radioactive elements from the ground. The process is called phyto-remediation, which means using plants (phyto) to clean up polluted sites. Phyto-remediation can be used to remove nuclear elements and to clean up metals, pesticides, solvents, crude oil, and other toxins from landfills. Hemp breaks down pollutants and stabilizes metal contaminants by acting as a filter. Hemp is proving to be one of the best phyto-remediative plants found.

The minimum benefit of a hemp crop is in its use as a rotation crop. Since hemp stabilizes and enriches the soil farmers grow crops on, and provides them with weed-free fields without the cost of herbicides, it has value even if no part of the plant is being harvested and used. Any industry or monetary benefit beyond this value is a bonus. Rotating hemp with soy reduces cyst nematodes, a soy-decimating soil parasite, without any chemical input. Hemp could be grown as a rotation crop and not compete with any other food crops for the most productive farmland. Marginal lands make fine soil for hemp, or hemp can be grown in between growing seasons.

Hemp and the Environment

All hemp products are completely biodegradable and recyclable, and hemp is a reusable resource in every aspect: pulp, fiber, protein, cellulose, oil, or biomass. Hemp can grow in any agronomic system, in any climate, and requires no herbicides, pesticides, fungicides, or insecticides to grow well. Hemp is its own fertilizer, its own herbicide (it is a weed), and its own pesticide. Hemp plants need only 10-13 inches of water, 1/3 of the amount which cotton requires, to grow to 8-12 feet in 3-4 months.

Using hemp as a biomass fuel would also reduce global warming, because the hemp energy crop would pull carbon from the air and release an equal amount when burned, instead of only releasing carbon as petroleum gasoline does now. Using hemp biomass to make charcoal could eliminate the need to burn coal. Hemp biomass burns with virtually no sulfur emissions or ash, which minimizes the acid rain caused by the burning of coal.

Deforestation is a big problem. Keeping trees alive and standing is necessary to our oxygen supply and our well-being. Trees provide the infrastructure which keeps microbes, insects, plants, fungi, etc. alive. The older and bigger the tree, the better for the environment it is. The more trees there are, the more oxygen is in the air, which helps reduce global warming.
Hemp growing could completely eradicate the necessity to use wood at all, because anything made from wood can be made from hemp, especially paper. Paper demand is supposed to double in the next 25 years, and we simply cannot meet this demand without clear-cutting all of our forests. Using hemp for paper could reduce deforestation by half. An acre of hemp equals at least 4 acres of trees annually. Hemp paper can be recycled 7 to 8 times, compared with only 3 times for wood pulp paper. Hemp paper also does not need to be bleached with poisonous dioxins, which poison waterways.

Carpets made from nylon, polyester, and polypropylene contaminate ground water. Hemp carpet is biodegradable and safe for the ground water when it is discarded. In 1993, carpet made up 1% of solid waste and 2% of waste by volume. Our garbage facilities are overfilling with plastics. Hemp can make plastics which are biodegradable.

Petrochemical lubricants, paints, sealants, etc., poison the ground when they are discarded. Hemp can replace all of these petroleum-based products with non-toxic, biodegradable, organic oil-based products. Hemp can also be used to create green cleaning products. Many business owners and cleaning services have switched to green cleaning practices to ensure safety in the workplace and to help protect the environment.

"Why use the forests which were centuries in the making and the mines which required ages to lay down, if we can get the equivalent of forest and mineral products in the annual growth of the fields?" --Henry Ford
This is a great winter activity to work on beginning sounds. Children choose a snow globe and identify the letter on it. They will find 3 snowflakes that have that letter, as well as a tree and a snowman that have a picture on them with the same beginning sound. There are also worksheets for extra practice:

- 26 complete sets for making a snow globe (A-Z)
- 5 worksheets for capital and lowercase matching
- 5 worksheets for beginning sounds

If you own my Winter Preschool Pack, this is included in it.
What is the September Equinox?

There are two equinoxes every year – in March and September – when the Sun shines directly on the equator and the length of day and night is nearly equal. The September equinox (or Southward equinox) is the moment when the Sun appears to cross the celestial equator – the imaginary line in the sky above the Earth's equator – heading southward. Due to differences between the calendar year and the tropical year, the September equinox can occur at any time from September 21 to September 24, most often on the 22nd or 23rd.

At the equinox, the Sun rises directly in the east and sets directly in the west. However, because of refraction it will usually appear slightly above the horizon at the moment when its "true" middle is rising or setting. For viewers at the north or south poles, it moves virtually horizontally on or above the horizon, not obviously rising or setting apart from the movement in "declination" (and hence altitude) of a little under a half (0.39) degree per day. For observers in either hemisphere not at the poles, the further one goes in time away from the September equinox in the 3 months before that equinox, the more to the north the Sun has been rising and setting, and for the 3 months afterwards it rises and sets more and more to the south.

Spring in the South

Seasons are opposite on either side of the equator, so the equinox in September is known as the autumnal (fall) equinox in the Northern Hemisphere and as the vernal or spring equinox in the Southern Hemisphere.

The Axial Tilt

The Earth's axis is always tilted at an angle of about 23.5° in relation to the ecliptic, the imaginary plane created by the Earth's path around the Sun. On any other day of the year, either the southern hemisphere or the northern hemisphere tilts a little towards the Sun. But on the two equinoxes, the tilt of the Earth's axis is perpendicular to the Sun's rays, as the illustration shows.

On the equinox, night and day are nearly exactly the same length – 12 hours – all over the world. This is the reason it's called an "equinox", derived from Latin, meaning "equal night". However, even though this is widely accepted, it isn't entirely true: in reality the equinoxes don't have exactly 12 hours of daylight, mainly because atmospheric refraction and the way sunrise and sunset are defined add a few extra minutes of daylight.

Traditions and Folklore

In the Northern Hemisphere the September equinox marks the start of fall (autumn). Many cultures and religions celebrate or observe holidays and festivals around the September equinox.
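The "0.39 degree per day" figure quoted above can be checked with a simple circular-orbit approximation of the Sun's declination. This is a rough model (it ignores the ellipticity of Earth's orbit, which is why it returns roughly 0.40 rather than 0.39 degrees per day near the September equinox), so treat it as illustrative only.

```python
import math

# Approximate solar declination for a given day of the year, using a simple
# circular-orbit formula: declination = -23.44 deg * cos(2*pi*(N + 10)/365),
# where N is the day of the year (1 = Jan 1). This ignores orbital eccentricity.

def solar_declination_deg(day_of_year: int) -> float:
    return -23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)

# Around the September equinox (roughly day 265), the declination crosses zero
# heading south, changing by a little under half a degree per day.
for day in range(263, 268):
    delta = solar_declination_deg(day + 1) - solar_declination_deg(day)
    print(f"Day {day}: declination {solar_declination_deg(day):+6.2f} deg, "
          f"change {delta:+.2f} deg/day")
```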
Overview: In this lesson, students work together in order to successfully complete a series of tasks. Grades: Preschool and K-2 Length of Lesson: 30–45 minutes After completing this lesson, students will be able to: - Describe the importance of working with others to achieve a desired goal. Related Goals from the Space RacersTM Curriculum: Observation: Looking carefully is one way to learn about things around us. - Use any of the senses or a combination of multiple senses to gain information. - Take note of a variety of properties and describe as accurately as possible (e.g., number, shape, size, length, color, texture, weight, motion, temperature, other physical characteristics, etc.). - Inspect/investigate in detail in order to sort, group, classify, or sequence according to size or other characteristics. A Team Approach: Collaborating with a team and then sharing what one has learned is often useful. - Share observations and findings through pictures, graphs, charts, representations, and/or words. - Communicate effectively with members of one’s own team, as well as others outside the team. - Legos, blocks or other pieces to build with- have one block or Lego piece per student. - 6 large containers/bins (recycling bins, empty cardboard boxes, etc.) - 6 objects in each of the following colors: red, blue, yellow, green, orange, purple. See details in the “Prep” section. - Color Cards handout (optional). See details in the “Prep” section. - 4-6 blindfolds For Activity 1: - Place the Legos/blocks in a bin so that there is enough for each student to have one Lego or block. For Activity 2: - In each of the 6 large bins, put one object of each of the team colors. If desired, use the cards in the “Color Cards” handout or sheets of construction paper to replace some of the objects. Activity 2 is a relay-type race, done in 4-6 teams. If you are playing with 4 teams, place 4 objects in each container. If playing with 5 teams, put 5 objects in each. For 6 teams, put 6 objects in each. Note: Each team should be assigned one of the colors featured in the “Color Cards” handout. Therefore, if you are playing with 5 teams, only use 5 of the colors featured in the handout and if playing with 4 teams, only use 4 colors. - Bin 1: Plastic fruit and/or vegetables (red apple, cherries, strawberries, red pepper etc.; blueberries or the “blue” color card; yellow banana, lemon, corn, etc.; green pear, lime, green pepper, etc.; orange clementine, orange, etc.; and/or purple grapes, plums, eggplant, etc. - Bin 2: plastic shapes – one in each of the different colors - Bin 3: toy cars, trucks or other vehicles – one in each of the different colors - Bin 4: blocks, Legos, and/or stacking rings – one in each of the different colors - Bin 5: assorted toys, stuffed animals, or the color cards – one in each of the different colors - Bin 6: different balls – one in each of the different colors - Write the numbers 1-6, each one on a different sheet of paper. Place one number on each of the containers. (Feel free to add additional bins, if desired, making sure to put one object for each team in each bin.) - Print out one copy of the “Color Cards” to use in Activity 2. Print out one copy of the color cards per student to use in Activity 4. Activity 1: Building Blocks - Give each student one block or Lego piece. Ask each student, working alone, to build a castle. - If students ask for more pieces, explain that they need to create the castle only with the pieces that you have given out. 
Tip: Conduct this activity outside or in another classroom where there aren’t any other Legos or blocks and bring just enough for each student to have one. - Now, tell your students that they can now work together to create a castle and share their pieces. - Give students a few minutes to work together to create a castle. - Ask the students to discuss what they had to do in order to successfully build a castle. (They had to share their blocks and work together with others.) - Optional: To reinforce the importance of sharing, view the Space RacersTM Mine, Mine, Mine episode. After watching the episode, discuss the following: - What did Robin, Eagle and Hawk do in order to successfully get out of the cave? (Robin used the objects that each of them had brought—grippy tires, night vision glasses and a drill—and the advice she received from Hawk and Eagle, to successfully get out of the cave and get help.) - What did the Space Racers learn from this experience? (They got a lot more done by sharing and working together than by working alone.) Activity 2: Working Together - Divide your students into 4-6 teams. Assign each team to one of the following colors: red, blue, yellow, green, orange, purple. - Place the 6 bins, labeled 1-6 (each featuring one object for each team) in different places throughout the room. (Note: This activity is also fun to do in a gym or outdoors.) See “Prep” section for details about what to include in each container. - Have the teams line up next to each other, with each person on a team standing in front of and/or behind another member of their team. - Place a blindfold on the first person on each team. Tell the teams that team member #1 on each team needs to go to bin #1, while blindfolded, and pick out the object that has his/her team’s color and then bring it back to the team. The 2nd person on the team must escort the blindfolded team member to the bin (by allowing team member 1 to hold on to his/her arm). Once team member 1 has arrived at the bin, team member 2 and/or other team members can verbally guide the blindfolded member to pick the correct object out of the bin. - Once the blindfolded team member has picked the right object, team member 2 must guide him/her back to the team, where team member 1 must place the object by his/her team. - Team member 1 then takes off the blindfold and gives it to team member 2, who puts on the blindfold. Team member 1 goes to the back of the line and team member 3 now guides team member 2 to bin #2, following the same procedure described above. This continues until each team has successfully retrieved all of its objects. - After all the groups have completed the activity, put the objects back in the bins and play again. - After completing the activity, lead a discussion about what the teams needed to do in order to successfully complete each task. Discuss the importance of working together and communicating effectively in order to successfully guide the blindfolded team member to get to the correct bin and select the correct object. Activity 3: Dodo in Distress - View the Space RacersTM Dodo in Distress episode. - After watching the episode, discuss how the Space Racers worked together to find Dodo. (Hawk guided Robin and Eagle, from afar, to find Dodo and to successfully exit with Dodo to safety. Also, Dodo helped Robin and Eagle find him by shining his light.) Activity 4: Book of Colors (Optional) - Give each student a copy of the “Color Cards” handout. 
Have each student cut out each of the cards and staple them together to make his/her own book of colors. - Lead a discussion about today’s lesson. Ask students to talk about what they learned. - Ask them to discuss how they had to work together to complete the activities in this lesson. - Ask them to think of situations in the real world when it is important to work together as a team to achieve a common goal.
Humans take up a lot of space in this world. From the food we eat to the waste we produce, the scary fact is that we're currently living outside our means.

what is a global footprint?

Your global footprint measures how much of the planet's natural resources your lifestyle requires. This can be measured on an individual, national, or global basis. One way of measuring your footprint is by asking yourself how many planets would be required if every person in the world used the same amount of resources as you.

what does this mean?

All species naturally use the Earth's resources, and the Earth copes by regenerating itself — to a certain point. Unfortunately, we're currently using resources faster than our planet can replenish them. If we continue using natural resources we don't have, we'll eventually run out. From depleted groundwater to increased CO2 in our air, we'll all feel the effects.

how can you lower your carbon footprint?

The facts may be alarming, but something can be done! Living a greener lifestyle doesn't mean you have to give up modern comforts. If every person made just a few concessions, we could start to see a difference. For example, the average car emits 5 tons of carbon dioxide a year. By carpooling with just one other person, you can cut your emissions in half! (A quick arithmetic sketch of this appears after the office checklist below.) Other small modifications to your driving routine can also make a difference.

- Stop your engine: Sitting idle for just 10 seconds uses more gas and produces more CO2 emissions than simply turning the engine off and then on again. And newer engines warm up by being driven rather than idling — even in cold weather.
- Check your tire pressure: Properly inflated tires can improve gas mileage enough to earn a free tank of gas every year!
- Clean out your trunk: The lighter your car, the better gas mileage it'll get.
- Plan your shopping trips: Cars burn more fuel (and therefore emit more CO2) before they're warmed up, so it pays to run all your errands at one time.

And what about at home? It doesn't take much to make a difference.

- Buy locally grown produce: The less your fruit and veggies have to travel, the less CO2 that goes into the air. Plus, local produce is often fresher and better tasting — and buying locally boosts the local economy.
- Wash your laundry in cold water: Several companies now make cold water detergents. You'll save money on your energy bills to boot!
- Buy energy efficient appliances: Next time you're in the market for a new refrigerator, washing machine, or dishwasher, look for the Energy Star logo.
- Shave a few minutes off your shower: Water-saving shower heads are also an inexpensive way to conserve.
- Fill the dishwasher before running it: Using the dishwasher can actually be more efficient than washing by hand, but only if you wait until it's full.
- Use a broom: Rather than hosing down your driveway, sweep leaves and trash away.

The office is another place where going green can go a long way. Find out what your company is doing to reduce its global footprint, and offer other simple suggestions, such as:

- Ask the cleaning company to use eco-friendly products.
- Have more plants around the office.
- Convert to recycled paper.
- Print on both sides of the paper and decrease your margin sizes.
- Drink from reusable mugs and glasses rather than disposable ones.
- Use a projector rather than printing your presentations.
- Turn off your computer and monitor every night. During lunch breaks and meetings, turn off your monitor.
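To make the carpooling example above concrete, here is a tiny sketch using the article's own figure of about 5 tons of CO2 per car per year; the two-, three-, and four-person cases simply divide that figure among the riders.

```python
# Back-of-the-envelope CO2 savings from carpooling, using the article's
# figure of roughly 5 tons of CO2 per average car per year.

co2_per_car_tons_per_year = 5.0

for riders in (1, 2, 3, 4):
    per_person = co2_per_car_tons_per_year / riders
    saved = co2_per_car_tons_per_year - per_person
    print(f"{riders} rider(s): {per_person:.2f} tons CO2 per person per year "
          f"(saving {saved:.2f} tons each vs. driving alone)")
```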
Until we find a way to inhabit other planets, we need to keep the one we have sustainable — for ourselves, our children, and all the generations to come. For more green tips, check out our tips on befriending the environment.

what's your global footprint?

You can measure your own everyday CO2 use and get more solutions for reducing your carbon footprint by using EarthLab's Carbon Footprint Calculator. Once you get your score, you can develop your earth conservation plan, track your progress, and learn the skills needed to combat the climate crisis!
The Creation Explanation
The Age of the Earth: A Survey of Some Short-term Chronologies

The Age of the Oceans

The age of the oceans may be calculated from data concerning the total amount of salts present in the oceans and the rate at which the salts are accumulating in the oceans.5 These salts are transported from land into the ocean by the river systems of the world. Uranium salts are being carried into the oceans over 100 times as fast as they are being removed by salt spray and other means, in contrast to other salts such as those of sodium and aluminum, which are now entering and leaving the ocean in more or less equal amounts. Thus uranium content can be the basis for an estimate of the age of the oceans.6

The estimated total uranium dissolved in ocean water is about 4 billion tons. The amount carried into the oceans annually is about 21,200 tons. Of this, some 85 percent is taken up by various sinks or absorbed by ocean sediments and rocks. This leaves about 3,180 tons to be added each year to the uranium dissolved in the ocean waters. Assuming that there was no uranium in the ocean waters at the time of their formation and that the rates involved were roughly constant, we can estimate the maximum age of the oceans. The present uranium content divided by the annual increase is 4,000,000,000 / 3,180 ≈ 1,260,000 years. This "age" is much smaller than the 4.5 billion years embraced by evolutionary scientists.

It has been assumed for the above estimate that the uranium "clock" has been running at a uniform rate. The assumption that geologic processes have been operating at constant rates is the basic assumption of uniformitarian historical geology. If it is true, however, that waters covered the earth some 5,000 years ago in a great Flood, then the rate at which uranium was leached out of the earth and into the oceans was greater in the past than at present. This would result in a shorter time needed to reach the present uranium content. Furthermore, we assumed that our uranium "clock" was set at zero when the oceans began. It would not be unreasonable to suppose that the oceans were created with some uranium already present. That is to say, the uranium clock was not set at zero time when the oceans were formed. It would be reasonable to conclude, then, that the age of the oceans, and of the earth, if both were formed at the same time, is perhaps 10,000 years.

Other short-term chronologies for the oceans are based upon the oceanic content of various chemical elements and compounds relative to the annual inflow of these substances from all known sources. One of the most thorough studies of this kind of data revealed that of fifty-one chemical elements contained in ocean water, twenty could have accumulated to their present concentrations in one thousand years or less. An additional nine of the elements would have required no more than ten thousand years, and eight other elements no more than 100,000 years.7

Two Helium Clocks: the Atmosphere and Hot Rocks

Just as many dissolved salts are building up in the oceans via drain-off of continental rivers, in a similar manner helium-4, the most abundant isotope of helium (atoms of the same element which differ from each other in atomic weight because of different numbers of neutrons in their nuclei are called isotopes), is flowing into the atmosphere from at least three sources: 1. principally helium-4 produced by radioactive decay of uranium and thorium in the earth's crust and oceans; 2.
from cosmic helium raining on earth, mainly from the sun's corona and in meteorites; and 3. from nuclear reactions in the earth's crust caused by cosmic rays. There is also evidence that the earth's original atmosphere contained helium.

Dr. Melvin Cook first pointed out the problem which atmospheric helium raises for a multibillion-year age of the earth.8 At the present rate of flow of helium into the atmosphere, the content of helium in the atmosphere could have been built up in only a small fraction of a billion years. This difficulty has yet to be solved by secular science. Dr. Larry Vardiman's technical monograph, The Age of the Earth's Atmosphere, published in 1990, is the most recent survey of the helium problem.9

The atmosphere now contains about 4.1 billion tons of He-4. It is estimated that about 2,400 tons per year of He-4 is released from the crust into the atmosphere. The theoretically calculated rate of escape of He-4 from the atmosphere into space, averaged over an eleven-year solar cycle, is only about 70 tons per year.10 This is only about 1/34th of the rate of inflow from the crust. If we assume a zero content of He-4 in the original atmosphere, the maximum age of our atmosphere calculated from these figures is only about 1.8 million years. On the other hand, if the earth were 4.5 billion years old, the atmosphere should contain 2,500 times its measured content of helium.

Joseph Chamberlain and Donald Hunten, at the close of a detailed examination of atmospheric helium, concluded, "The problem will not go away and it is unsolved."11 They then briefly describe two possible solutions. Vardiman discusses three possible solutions considered by secular scientists for the missing helium problem. These are "polar wind" (the escape of ionized helium at the poles, where the earth's magnetic field lines could guide ions out into space); "solar wind sweeping" (in which streams of charged atoms from the sun interact with the earth's exosphere); and "hot-ion exchange" (in which high-energy ions give helium atoms a kick out into space). Vardiman's discussion shows that these concepts have not yet made the helium problem go away.12 Are we not justified in concluding that the atmospheric helium clock continues to report a young age for the earth?

Another helium clock is provided by radiogenic gas trapped in very hot rocks deep in the earth's crust. The rate of escape and diffusion upward of such trapped gas is greatly increased at high temperatures. These deep rocks are supposed to be billions of years old, yet much of the helium-4 produced in them has not escaped. This suggests that these rocks are not billions, or even millions, of years old.13

The Erosion and Sedimentation Clock

In the modern western world, the concept of a very old earth originated only about 200 years ago, principally with the work of James Hutton (1726-1797), who announced his ideas in 1785. Around 1830 Hutton's ideas of gradualism as an alternative to catastrophism, or sudden change, began to have wide acceptance through the efforts of Sir Charles Lyell, who discussed them in the first geology textbook to be published. Different strata of the earth's crust were supposed to represent different time periods. On this concept one of the best known "earth clocks" is founded. Geologists studying the rates of volcanic activity, erosion, and sedimentation have observed that these processes are now occurring at fairly uniform rates. Many geologists assume that these rates have remained the same throughout time.
This is the principle of uniformitarianism which Lyell so persuasively promoted in his very influential three-volume work, Principles of Geology. Charles Darwin was given a copy of the first volume by Captain Robert FitzRoy when they set out on their momentous five-year voyage around the world on the HMS Beagle late in 1831. As young Darwin read the book on board the ship, he realized that Lyell's uniformitarianism and great-age chronology for earth history provided the essential ingredients needed to make an evolutionary history of life plausible.

In textbooks on geology it is explained how it is possible to obtain average rates of deposition for many types of sediments. By studying exposed rocks or rock strata all over the world, and assuming that the many layers at any one point on the earth's surface were piled in order on top of one another, one can draw up a universal stratigraphic column or "geologic column." This imaginary column supposedly provides a kind of calendar for earth history. Certain adjustments must be made to the geologic column because there are often gaps of missing sediments in different parts of the earth. When the thickness of each formation is divided by the normal or present rate of accumulation of that type of sediment, the time span represented by each type of formation can be estimated. Addition of all these times gives a figure for the approximate age of any individual rock layer back to the time represented by the deepest strata of sedimentary rocks. On this basis some geologists have attempted to set up a hypothetical "sediments clock."

Many flaws exist in the "sediments clock," and not all geologists have been happy with it. In fact its principal element, uniformitarianism, has been severely shaken in recent decades. American geologist J Harlen Bretz struggled for several decades to get his biased colleagues to pay attention to striking evidence of catastrophic flood erosion in the channeled scablands of eastern Washington, carved by floodwaters from western Montana and Idaho.14 British paleontologist Derek Ager showed in his book The Nature of the Stratigraphical Record that the greater part of the geological strata was laid down during relatively brief periods of very rapid deposition.15 Interspersed with the depositions were periods of often rapid scouring away of sediments and also periods of quiescence. These facts mean that the "sediments clock" has very little value for the quantitative measurement of geologic time.

In this century radiometric methods have provided the most substantial and imposing support for the great-age chronology of the earth. Nevertheless, it is the fossils in the rocks, plus the theory of evolution, which have the final say in determining the relative ages of rock strata. The secular scientific establishment is committed to the view that the sedimentary strata were laid down over a period of many hundreds of millions of years, and that the encased fossils of plants and animals give historical evidence for the evolution of life. It is assumed that all plants and animals are descended from original single-celled ancestors. Thus there supposedly has been a fantastic, continual increase in complexity, man being at the pinnacle of this process. A historical scenario has been worked out for this process. So the complexity of fossil species and its apparent relationship to the sequence of species arranged in the scenario is considered to determine the relative ages of the fossils.
Consequently, the theory of evolution and its resultant scenario are the final determinants of the relative age of fossils and therefore of the sedimentary strata in which they are found. Some of the difficulties encountered in the uniformitarian interpretation of the fossil and sedimentary rock data were discussed in Chapter 3. These problems included missing strata, "reversed" strata (i.e., fossils found lying in the reverse of the assumed evolutionary order), living fossils (some of which had been used for indexing certain strata), evidence of catastrophic global flooding and volcanic activity, polystrate fossils that could not possibly have been buried slowly, and the fact that fossils simply are in general not formed under the conditions assumed in uniformitarian geologic theory. Thus there is much evidence that the rock strata clock is questionable when interpreted according to the uniformitarian great-age theory of geology.

A particular kind of sedimentary data suggests that the great-age chronology is not reliable. The sediments on the ocean floors set a limit to the age of the earth, according to Dr. Steven Austin, a geologist with the Institute for Creation Research near San Diego, California.16 The average depth of sediments deposited on the ocean floors is estimated to be just over one-half mile. This amounts to about 8.2 x 10^17 tons. The present rate at which sediments from the continents are being washed into the oceans or deposited from underground springs is about 2.75 x 10^10 tons per year. The current theory of plate tectonics pictures large quantities of these ocean floor sediments being subducted, or buried deep in the earth's mantle where great moving plates of the earth's crust meet, but this is estimated to be only about 2.75 x 10^9 tons per year, or just one-tenth of the annual new sediments being added. Assuming uniformitarian conditions in the past, let us calculate the age of the earth from this data. The formula is:

age = (total ocean sediments) / (rate of addition - rate of subduction) = (8.2 x 10^17 tons) / (2.75 x 10^10 - 2.75 x 10^9 tons per year) ≈ 33 million years.

Thirty-three million years is less than one percent of the 4.5 billion years commonly cited for the age of the earth. Moreover, recent data from deep sea drilling in ocean sediments indicate that sedimentation rates in the recent past were perhaps ten to one hundred times as great as at present. If to this is added the vast quantities of sediments which must have been dumped rapidly on the ocean floors during the Flood of Noah, we can see that the thickness of ocean sediments is actually far more consistent with the biblical model of creation and the flood over a time span of around 10,000 years than with the uniformitarian model. Billions of years of erosion and sedimentation should have loaded up to sixty miles of sediments onto the ocean floors, yet those sediments do not exist. Since the sediments are missing, are we not justified in concluding that the billions of years are likewise non-existent?

Evidence of Catastrophic Activity in Earth History

In addition to what has already been surveyed above and in Chapter 3, there is a huge body of evidence for catastrophic events in the history of the earth. Steven A. Austin has assembled in his book, Catastrophes in Earth History, several hundred such events which have been reported in the scientific literature.17 They relate to astronomy, cosmology, many aspects of geology, petrology, paleontology and atmospheric science.
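The three "clock" calculations quoted so far (ocean uranium, atmospheric helium, and ocean-floor sediments) are simple divisions. The sketch below only echoes the figures and assumptions stated in the text above; it takes no position on whether those assumptions hold.

```python
# Reproducing the arithmetic of three of the "clocks" quoted above.
# All input figures and assumptions are the article's own.

# Ocean uranium clock: 4 billion tons dissolved, ~3,180 tons/year net addition.
uranium_in_ocean_tons = 4.0e9
uranium_added_tons_per_year = 3180.0
print(f"Uranium clock:  {uranium_in_ocean_tons / uranium_added_tons_per_year:,.0f} years")

# Atmospheric helium clock: 4.1 billion tons of He-4, ~2,400 tons/year released
# from the crust, ~70 tons/year escaping to space.
helium_in_atmosphere_tons = 4.1e9
helium_in_per_year = 2400.0
helium_out_per_year = 70.0
helium_age = helium_in_atmosphere_tons / (helium_in_per_year - helium_out_per_year)
print(f"Helium clock:   {helium_age:,.0f} years")

# Ocean sediment clock: 8.2e17 tons on the sea floor, 2.75e10 tons/year added,
# about one-tenth of that (2.75e9 tons/year) removed by subduction.
sediment_mass_tons = 8.2e17
sediment_in_per_year = 2.75e10
sediment_out_per_year = 2.75e9
sediment_age = sediment_mass_tons / (sediment_in_per_year - sediment_out_per_year)
print(f"Sediment clock: {sediment_age:,.0f} years")
```

Running this prints roughly 1.26 million, 1.76 million, and 33 million years, matching the figures quoted in the sections above.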
Moon Dust Clocks: A Broken Clock Is Replaced by One that Works

When the Lunar Lander program was approaching the historic first landing, there was much speculation and disputation among space scientists concerning the problem of dust on the moon. Many felt that because of the great age of the earth's airless satellite, there probably was a thick layer of dust on the surface, into which any lander would likely sink and perhaps be disabled. The original landers were carefully and at great expense designed to avoid this possible danger. The actual landing revealed a dust layer only 1/8th to 3 inches in depth.18 The absence of appreciable lunar dust was taken up by creationists as evidence for a young moon. The lack of meteoritic dust accumulation in the surface layers of the earth was also interpreted as evidence for a young earth.19 The first edition of The Creation Explanation (1975) included this information and interpretation.20

However, new data from satellite measurements of the meteoritic material in the space around the earth, and also measurements of radio waves produced by meteoritic bombardment of the upper atmosphere, showed that the amount of dust infall is much less than previously estimated. Thus the depth of dust accumulated in 4.5 billion years on the moon's surface should be only a few inches, rather than many feet.21 In like manner, the infall of meteoric dust on the earth's surface was estimated to total only a few inches.

The most recent research on this subject was reported in 1993 in Science.22 The Long Duration Exposure Facility satellite exposed large aluminum surfaces to impacts by cosmic dust particles in the space around the earth. After almost six years in orbit around the earth, the satellite was retrieved and the craters on the aluminum surfaces were measured and counted. From this type of information it is estimated that the infall of cosmic dust on the earth is approximately 44,000 tons per year. An entirely independent estimate of dust infall, based on the relative contents of two isotopes of the element osmium in deep sea sediments, is about 57,000 tons per year. The average of these two figures is about 50,000 tons per year. If we assume that cosmic dust has fallen on the earth at this rate for 4.5 billion years, and take the density of the compacted dust to be 3.5 grams/cm3, this would produce a layer of dust on the earth's surface only about 4.5 inches deep. We may therefore conclude that the cosmic dust evidence formerly cited for a young earth and moon appears to have been eliminated by the progress of scientific knowledge.

There is, however, another process occurring on the moon which should have produced much moon dust. British astronomer R.A. Lyttleton of Cambridge University had proposed in 1950 that the action of ultraviolet light and X-rays upon moon rocks should continually spall off surface layers to produce dust.23 He estimated the rate of this process to be a few ten-thousandths of an inch per year. If only 0.00001 inches of dust were produced annually for 4.5 billion years, the result would be about 3,750 feet of dust! No such dust layer exists, even in the lunar seas (large low-lying areas) into which electric fields and solar wind would tend to sweep loose dust. Thus the lack of lunar dust still is suggestive of a young moon that is not billions of years old.
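Both dust figures quoted above are easy to reproduce: the roughly 50,000-ton-per-year infall spread over Earth's surface for 4.5 billion years, and Lyttleton's proposed spallation rate applied to the Moon. The sketch below repeats the article's stated inputs (dust density 3.5 g/cm³, 0.00001 inches per year) together with a standard value for Earth's surface area, treating "tons" as metric tonnes; it lands at roughly five inches of cosmic dust (the same ballpark as the article's "about 4.5 inches") and about 3,750 feet of spalled moon rock.

```python
# Reproducing the dust arithmetic quoted above. Inputs marked "article" come
# from the text; Earth's surface area is a standard figure; "tons" are taken
# as metric tonnes.

# 1) Cosmic dust falling on Earth: ~50,000 tons/year for 4.5 billion years,
#    compacted to a density of 3.5 g/cm^3.
dust_tons_per_year = 50_000.0     # article
years = 4.5e9                     # article
density_kg_m3 = 3500.0            # article (3.5 g/cm^3)
earth_surface_m2 = 5.1e14         # standard value for Earth's surface area

total_mass_kg = dust_tons_per_year * 1000.0 * years
layer_m = total_mass_kg / density_kg_m3 / earth_surface_m2
print(f"Cosmic dust layer on Earth: ~{layer_m * 100:.0f} cm (~{layer_m / 0.0254:.1f} inches)")

# 2) Lyttleton's proposed spallation of moon rock: 0.00001 inches/year
#    sustained for 4.5 billion years.
spall_inches_per_year = 0.00001   # article
moon_dust_inches = spall_inches_per_year * years
print(f"Spalled moon dust: {moon_dust_inches:,.0f} inches (~{moon_dust_inches / 12:,.0f} feet)")
```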
An Important Lesson: In Science, Losing a Debate Is a Learning Experience

The above history is instructive for Christians who wish to offer to their unbelieving friends an intelligent and effective apologetic for the faith of Christ. We must never forget that our platform is the Word of God, not the current conclusions of scientists. Science is always changing. So we should not be disturbed if on occasion new scientific findings invalidate one of our arguments based on previous science. We expect the worldlings to acknowledge the scientific evidence which points to the mind and hand of God in the world. We cannot expect them to respect us if we cling to scientific evidence from the past which has been discredited by new research findings. Remember, the kingdom of God is not going to collapse because we sometimes lose an argument. Christ's kingdom, i.e., His sovereign rule over all, will in the end prevail, even though we His servants may once in a while have to admit our mistakes. Honesty and candor will in such a circumstance be a powerful demonstration of the reality of our faith and of our total commitment to truth. As the old saying goes, "We can't win 'em all." In any event, in science losing a debate is a learning experience.

The Population Growth Clock

The "explosion" of the world population has become a significant topic during the last few years. Scientists studying growth rates are especially concerned about the necessary requirements for life such as food, water, and space. Some scientists who specialize in these kinds of studies have made calculations that might help answer the question: How long would it take for the present world population to grow from just one family?

The rate of world population growth has varied radically during history as a result of many influences, including famines, pestilence, wars, and probably a number of catastrophic events. Estimates of the total human population 2,000 years ago center at about 300 million.24 If the average length of a generation were forty years, in 5,000 years one couple could multiply to 300 million if each family had an average of just 2.3 children. This corresponds to an average annual population increase of only 0.35 percent, whereas the present world population growth rate, considered by some to be catastrophic, is about two percent annually, almost six times as great as the hypothetical rate used in the above calculation.

If, on the other hand, the human race had been on earth for one million years with a growth rate of only 1/100th of one percent annually, the resulting population would be 5 x 10^43 people, i.e., the number 5 followed by forty-three zeros. This is enough people to fill more than a thousand solar systems solidly packed. Thus the notion that the human race has been multiplying for a million years or so seems absurd, even taking into account the fact that modern medicine and technology were not available until this century.

The consideration of reasonable population growth curves seems to support the biblical chronology of thousands of years, rather than the evolutionary chronology of several million years for man's history on earth. This argument, however, is not a proof that the human race could not have existed on the earth for a million years. The above highly oversimplified population growth estimate would be greatly reduced if there were periodic catastrophes every few thousand years which wiped out a large proportion of the human race.
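Both population figures above are straightforward compound-growth arithmetic. The sketch below solves for the annual growth rate and family size needed to reach 300 million from one couple in 5,000 years (it comes out to a little under 0.4 percent per year and about 2.33 children per couple, close to the article's rounded 0.35 percent and 2.3 children), and then repeats the million-year calculation at 0.01 percent per year.

```python
# Compound-growth arithmetic behind the population discussion above.

start = 2          # one couple
target = 300e6     # estimated world population 2,000 years ago (from the article)
years = 5000

# Annual growth rate required to reach the target from one couple:
rate = (target / start) ** (1.0 / years) - 1.0
print(f"Required growth rate: {rate:.2%} per year")

# Equivalent average children per family, assuming 40-year generations:
generations = years / 40
children = 2 * (target / start) ** (1.0 / generations)
print(f"Equivalent family size: about {children:.2f} children per couple")

# The article's million-year scenario: 0.01% annual growth for 1,000,000 years.
million_year_pop = start * (1.0001) ** 1_000_000
print(f"0.01%/yr for a million years: about {million_year_pop:.1e} people")
```

The last line prints about 5.4e+43, matching the "5 x 10^43" figure in the text.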
However, intuitively it does seem unlikely that the human race could have existed on earth for a million years without long ago reaching the absolute limit set by the available earth resources.

The Age of the Mississippi River Delta

Additional striking evidence from sediments is to be found in the extensive studies of the delta of the Mississippi River during the past century. Charles Lyell, father of evolutionary geology, after a superficial examination, estimated the age of the delta to be only 60,000 years. This seems to be far short of what is required by current historical geology. More recent study shows that even Lyell's estimate was excessive. Detailed examination conducted over a period of many years by the U.S. Army Corps of Engineers, with the cooperation of civilian geologists, greatly reduced this figure. It now appears that the maximum age of the delta, that is, the time required for the Mississippi River to deposit the present accumulation of sediments making up the delta, is no more than 5,000 years.25

The Gas and Oil Seepage Clock

Trapped oil and gas deposits, and the rates at which they leak through the layers of sediments to the surface of the earth, again suggest a shorter time scale for the earth than evolutionists claim. Sometimes in oil well exploration a "gusher" results. The well goes wild and spews out oil and natural gas until measures are taken to bring the flow under control. How can we explain this? It is believed that the oil was formed when organic materials were covered and trapped suddenly beneath heavy layers of earth and rock, usually under conditions of elevated temperature and pressure.

The pressures in oil deposits result from three basic causes. Most of the pressure is due to the weight of all the layers of earth resting on top of the trapped oil and gas; the other causes of pressure are the weight of oil pressing down upon itself and any gases that may be present. After a rather detailed study of measurements of pressure in deep oil wells in various parts of the world, Dr. Melvin Cook concluded that the very high observed pressures require sudden deep burial.26 Moreover, the containing rocks are porous, so that to retain these pressures for periods greater than a few thousand years is apparently impossible under the observed permeabilities of the reservoir and trap formations. (Permeability determines the rate at which leakage may occur through the material.)

Geostatic pressure refers to the total force of the overlying layers of earth pressing down on any material beneath it. If an oil deposit is trapped beneath thousands of tons of earth, the pressure may be evenly distributed throughout the liquid. The pressure of the earth's weight will push the oil up through the opening made by a drill. The conclusion that oil and gas have remained trapped at such high pressures for millions of years does not seem to be valid. It would not be possible for the rocky sediments to maintain a sufficiently good seal for such a long time. Secular scientists have admitted that the occurrence of fluids in reservoirs within the earth at excessive pressures is a mystery. These formations containing oil, gas, or water are supposedly millions of years old, and the pressures are extremely high. A fully satisfactory explanation is lacking.27 Rather recent catastrophic deposition of sediments could be the answer. Believers in creation feel that they possess the answer to this mystery, for it fits within their basic assumption of global catastrophe.
They believe that the excessively high-pressure oil and water reservoirs are not nearly as old as most geologists think. What may have happened five to ten thousand years ago was a tremendous catastrophe which caused a great deal of overthrusting, or the movement of large blocks of earth over one another. This type of violent action could have trapped rivers and lakes as well a animal life and all sorts of vegetable matter. The strata left overlying many of the trapped fluid formations would have produced high geostatic pressures. From that time on, leakage would have caused the pressures to decay, some things very slowly but always fast enough that no excess pressures would remain at all if the formations were more than a few thousand years old. The Cooling Earth Clock In the late nineteenth century the British physicist Lord Kelvin (William Thompson), a devout believer in creation, greatly upset the evolutionary theorists by demonstrating that the cooling of the earth from a molten state to its present temperature would only require some millions of years, rather than the tens of millions of years then in vogue. Several decades later radioactive elements were discovered in the earth's crust. Since these continually produce heat beneath the surface, it was assumed that this new information completely invalidated Kelvin's conclusions. The reason was that the added heat would greatly increase the time required for the earth to cool. More recent calculations show, however, that the problem still persists for evolutionary geology. Without radioactivity the time for cooling is calculated to be 22 million years. With radioactivity the figure is 45 million years, still far too short to fit the evolutionary scenario for history of life on earth.28 The actual facts fit the creation model without difficulty, because there is no need to assume that the earth was ever in a molten state in the first place. In our consideration of many different types of "earth clocks" used to estimate the "ages" of various structures in the earth, we have seen that they do not give exact values. Furthermore, all of them involve assumptions with respect to initial conditions as well as to past rates of natural processes. Nevertheless, it is apparent that there is evidence to support the view that the earth is not as old as the secular scientific community believes. The belief that the earth and the universe are billions of years old is grounded, first, in the belief that the present state of the physical universe and of the biosphere is the result of very slow processes of change and, second, on the results of age estimates and measurements which make use of certain radioactive isotopes of naturally occurring elements in the earth's crust, in meteors, and in lunar rocks. Let us see if these radiometric methods for determining the ages of the materials composing the earth have any deficiencies. 6. Block, Salman, Geochimica et Cosmochimica Acta, Vol. 44, 1980, pp. 373-377. 7. Riley, J.B. and G. Skirow, editors, Chemical Oceanography, Vol. 1 (Academic Press, London, 1965), pp. 164-165. 8. Cook, Melvin A., "Where Is the Earth's Radiogenic Helium?" Nature, 1957, Vol. 179, p. 213; Cook, Melvin, Prehistory and Earth Models (Max Parrish, London, 1966), pp. 10-14. 9. Vardiman, Larry, The Age of the Earth's Atmosphere: A Study of the Helium Flux Through the Atmosphere (Institute for Creation Research, El Cajon, CA 92021, 1990). 10. 
10. MacDonald, G.J.F., "The Escape of Helium from the Earth's Atmosphere," in The Origin and Evolution of Atmospheres and Oceans, P.J. Brancazio and A.G.W. Cameron, editors (John Wiley and Sons, New York, 1964), pp. 127-182; see p. 127.
11. Chamberlain, Joseph W. and Donald M. Hunten, Theory of Planetary Atmospheres, 2nd Edition (Academic Press, New York, 1987), p. 372.
12. Vardiman, Larry, ibid. (ref. 9), pp. 24-25.
13. Gentry, R.V., et al., "Differential Helium Retention in Zircons: Implications for Nuclear Waste Management," Geophysical Research Letters, Vol. 9, Oct. 1982, pp. 1129-1130.
14. Bretz, J.H., "The Lake Missoula Floods and the Channeled Scabland," Journal of Geology, Vol. 77, 1969, pp. 505-543.
15. Ager, D.V., The Nature of the Stratigraphical Record, Second Edition (John Wiley, New York, 1981).
16. Nevins, S.E. (Steven Austin), Creation--Acts, Facts, Impacts (Institute for Creation Research, El Cajon, CA 92021, 1974), p. 164.
17. Austin, Steven A., Catastrophes in Earth History (Institute for Creation Research, El Cajon, CA, 1984).
18. Pasachoff, J.M., Contemporary Astronomy (W.B. Saunders Co., Philadelphia, 1977), pp. 294-295.
19. Morris, Henry M., editor, Scientific Creationism (Creation-Life Publishers, San Diego, 1974), pp. 151-153; Slusher, Harold S., Age of the Cosmos (Institute for Creation Research, San Diego, 1980), pp. 39-41.
20. Kofahl, Robert E. and Kelly L. Segraves, The Creation Explanation (Harold Shaw Publishers, Wheaton, Ill., 1975), pp. 190-191.
21. Dohnanyi, J.S., Icarus, Vol. 17, 1972, pp. 1-48.
22. Love, S.G. and D.E. Brownlee, Science, Vol. 262, 22 Oct. 1993, pp. 550-553.
23. Lyttleton, R.A., The Modern Universe (Clarendon Press, Oxford, 1968), p. 154.
24. Coale, Ansley J., Scientific American, Vol. 231, Sept. 1974, p. 43.
25. Allen, Benjamin F., Creation Research Society Quarterly, Vol. 9, March 1973, pp. 96-114.
26. Cook, Melvin A., Prehistory and Earth Models (Max Parrish, London, 1966), pp. 254-265; see also Hubbert, M.K. and W.W. Rubey, Bull. Geological Society of America, Vol. 70, 1959, pp. 115-206.
27. Dickey, P., et al., Science, Vol. 160, 10 May 1968, p. 609.
28. Ingersoll, Leonard R., Otto J. Zobel and Alfred C. Ingersoll, Heat Conduction With Engineering, Geological and Other Applications (Univ. of Wisconsin Press, Madison, 1954), pp. 99-107; Slusher, Harold S. and Thomas P. Gamwell, The Age of the Earth: A Study of the Cooling of the Earth Under the Influence of Radioactive Heat Sources (Institute for Creation Research, San Diego, 1978).
Modern cosmological research concentrates on 'quantum cosmology', which attempts to reconcile the quantum physical conditions just after the big bang with the general relativistic conditions thereafter. Standard cosmology, based on Einstein's general relativity, describes most of the history of the cosmos very well. It is only when distances, volumes and energy densities are reduced to the Planck scale that general relativity fails. The Planck scale for energy is defined by the energy of a photon with a wavelength equal to one Planck length, which is about 1.6 E-35 meter (where E-35 means 10 to the power –35). This photon wavelength gives the Planck energy as about 2 E+9 Joule, roughly equal to the electricity consumed by an average person in a developed country in two weeks.

Planck Energy Density

The Planck energy is not all that high, but confine this energy to a volume of one cubic Planck length and you have the Planck energy density of about 4.6 E+113 Joule per cubic meter. Ouch!! The point of all this groundwork is that quantum gravity shows that when the energy density of the universe is near the Planck limit, an anti-gravity term grows to a magnitude that overwhelms normal gravity and causes space to expand.

A Big Bounce

If there were a contracting universe before the big bang, this anti-gravity would have stopped the contraction and reversed it, i.e., a big bounce would have happened before the density reached the Planck limit. The figure below illustrates such a 'big bounce' caused by quantum gravity. In this illustration, time runs upwards while space-time first contracts to near the Planck limit (the red band, with color indicating energy density) and then bounces back into an expansion again. The red spheres represent elementary particles (concentrated energy) separating from each other due to the expansion of the fabric of space-time. This is merely a 'toy model' of quantum cosmology that calculates the simplest scenario in order to enhance insight. More representative quantum cosmology models, in which contraction with fully formed structures influences the possible bounce scenarios, are under study. It is not clear whether the pre-bounce and the post-bounce universes lived in the same space-time. It is more likely that the two space-times were different universes.

A White Hole?

As an example, if a very massive star goes supernova and leaves a remnant that contracts into a black hole, the same bounce should happen at the central singularity. However, we observe the effects of stable black holes without a bounce. The bounced black hole may have popped out in another universe as a white hole…
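The two Planck-scale figures quoted above can be checked with a few lines of arithmetic. The sketch below is a minimal back-of-the-envelope calculation using standard values for ħ, c and G; note that the ~2 E+9 Joule figure in the text corresponds to using the reduced Planck constant ħ rather than h.

```python
import math

# Physical constants (CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

# Planck length: l_P = sqrt(hbar * G / c^3), about 1.6e-35 m
l_planck = math.sqrt(hbar * G / c**3)

# Planck energy: E_P = hbar * c / l_P, about 2e9 J
e_planck = hbar * c / l_planck

# Planck energy density: E_P packed into one cubic Planck length, about 4.6e113 J/m^3
rho_planck = e_planck / l_planck**3

print(f"Planck length:         {l_planck:.3e} m")
print(f"Planck energy:         {e_planck:.3e} J")
print(f"Planck energy density: {rho_planck:.3e} J/m^3")
```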
Smallest part of any element that can exist independently. A carbon atom bonded to four different atoms; also called a chiral carbon. The bonds can be arranged in two different ways, producing stereoisomers that are mirror images of each other. (Figure 2-6) ... atom: the smallest part of an element that can enter into various combinations with atoms of other elements. atrium: a thin-walled receiving chamber in which blood accumulates in fishes. one particle, one piece of an element (a = not, without; tom = to cut). Australian Realm: the biogeographical realm including the continent of Australia and some of the surrounding islands (austr(ali) = southern) ... atom: The smallest indivisible particle of matter that can have an independent existence. atomic number: The number of protons in the nucleus of an atom. [Gk. atomos, indivisible] The smallest unit of matter that retains the properties of an element. atomic number ... (Science: chemistry, physics, radiobiology) a particle of matter indivisible by chemical means. It is the fundamental building block of the chemical elements. Van der Waals molecule; History of the molecule; list of compounds for a list of chemical compounds; List of molecules in interstellar space ... atom: The basic unit of matter; the smallest complete unit of the elements, consisting of protons, neutrons, and electrons. atomic mass: A mass unit determined by arbitrarily assigning the carbon-12 isotope a mass of 12 atomic mass units. An atom with only one shell requires two electrons to complete its outer shell. Atoms with more than one shell require 8 electrons to complete their outer shells. Periodic Table of the Elements ... Every atom has a characteristic total number of covalent bonds that it can form, equal to the number of unpaired electrons in the outermost shell. This bonding capacity is called the atom's valence. an amino group (hence "amino" acid); a carboxyl group (-COOH), which gives up a proton and is thus an acid (hence amino "acid"); one of 20 different "R" groups. ion: An atom that has lost or gained electrons from its outer shell and therefore has a positive or negative charge, respectively; symbolized by a superscript plus or minus sign and sometimes a number, e.g., H+, Na+, Cl-. Ion: An atom or group of atoms that carries a positive or negative electric charge as a result of having lost or gained one or more electrons. More Biology Terms ... Each carbon atom can form four bonds with other molecules. Carbon atoms form the skeleton of organic molecules ... Each type of atom (element) has its own characteristic electronegativity. If the electronegativities of the two atoms in a bond are equal or close, then the electrons are shared more or less equally between them and the bond is said to be non-polar. Each hydrogen atom is split into its constituent H+ (hydrogen ion) and electron. The electron is the part that actually gets passed down the chain from carrier to carrier. The H+, however, remains in the mitochondrial matrix. Matter: acid, atom, base, catalyst, compound, covalent bond, ion, ionic bond, element, solution ... radius of a hydrogen atom: 25 pm; radius of a helium atom: 31 pm; 10^-10 m = 1 Ångström ... The α-carbon has four different groups attached to it, arranged at the points of a tetrahedron. A unit of measurement equal to the mass of a hydrogen atom, 1.67 x 10^-24 gram (one gram divided by Avogadro's number). Death phase: The final growth phase, during which nutrients have been depleted and cell number decreases. (See Growth phase.) Denature.
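The 1.67 x 10^-24 gram figure for the mass of a hydrogen atom follows directly from Avogadro's number, since one mole of hydrogen atoms has a mass of about one gram. A minimal check of that arithmetic:

```python
# Mass of a single hydrogen atom, estimated from Avogadro's number.
AVOGADRO = 6.022e23          # atoms per mole
MOLAR_MASS_H = 1.008         # grams per mole of hydrogen atoms

mass_h_atom = MOLAR_MASS_H / AVOGADRO
print(f"Mass of one hydrogen atom: {mass_h_atom:.2e} g")   # ~1.67e-24 g
```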
5' or 3' end: The nucleoside residues which form nucleic acids are joined by phosphodiester linkages between the 3' C of one ribose moiety and the 5' C of the next. Biology is a science full of terms and concepts that range from the hard-to-imagine, such as the structure of an atom, to those we see every day, such as the structure of our own face in the mirror. How are these ideas and concepts related? At one end of each strand there is a phosphate group attached to the carbon number 5 of the deoxyribose (this indicates the 5' terminal) and at the other end of each strand is a hydroxyl group attached to the carbon number 3 of the ... Okay, so how do these pieces fit together? What do they look like in an atom? Well, we can go to the Bohr model, which is the simplest model to describe what an atom looks like. It is composed of one oxygen and two hydrogen atoms. Each hydrogen is covalently bonded to the oxygen via a shared pair of electrons. Oxygen also has two unshared pairs of electrons. The main determinants of secondary-ion formation are the sputtering rate (the rate of removal of the target atoms by the primary ions), the ionization yield (the fraction of sputtered atoms that are ionized) and the local concentration of atoms. The aldehyde or ketone group may react with a hydroxyl group on a different carbon to form a hemiacetal or hemiketal, in which case there is an oxygen bridge between the two carbon atoms, forming a heterocyclic ring. Every atom in a living being originated in a star. The iron of hemoglobin was generated at the moment when the atomic nuclei of a star fused to form heavier elements, for example iron. A weak electrostatic link between an electronegative atom (such as oxygen) and a hydrogen atom which is linked covalently to another electronegative atom; hydrogen bonding is what makes water stick to itself. Deoxyribonucleic acid (DNA) ... We're really reducing the question down to the smallest atom, and trying to understand at an anatomical level the structure of this receptor and how it's activated and how it's doing its job during this process of male sexual differentiation. ion /EYE-on/ n. An atom or small molecule with a negative or positive charge. ion channels /EYE-on/ n. Proteins, present in all cell membranes, governing the passage of specific ions between the interior and exterior of the cell. The size of the carbon atom is based on its van der Waals radius. Goodsell, David S. (2002, February). Molecular Machinery: A Tour of the Protein Data Bank. Retrieved September 10, 2008, from the Protein Data Bank. See also: Molecule, Protein, Organ, Trans, Proteins
Health and Medicine Originating Technology/NASA Contribution While the human eye can see a range of phenomena in the world, there is a larger range that it cannot see. Without the aid of technology, people are limited to seeing wavelengths of visible light, a tiny range within the electromagnetic spectrum. Hyperspectral imaging, however, allows people to get a glimpse at how objects look in the ultraviolet (UV) and infrared wavelengths—the ranges on either side of visible light on the spectrum. Hyperspectral imaging is the process of scanning and displaying an image within a section of the electromagnetic spectrum. To create an image the eye can see, the energy levels of a target are color-coded and then mapped in layers. This set of images provides specific information about the way an object transmits, reflects, or absorbs energy in various wavelengths. Using this procedure, the unique spectral characteristics of an object can be revealed by plotting its energy levels at specific wavelengths on a line graph. This creates a unique curve, or signature. This signature can reveal valuable information otherwise undetectable by the human eye, such as fingerprints or contamination of groundwater Originally, NASA used multispectral imaging for extensive mapping and remote sensing of the Earth’s surface. In 1972, NASA launched the Earth Resources Technology Satellite, later called Landsat 1. It had the world’s first Earth observation satellite sensor—a multispectral scanner—that provided information about the Earth’s surface in the visible and near-infrared regions. Like hyperspectral imaging, multispectral imaging records measurements of reflected energy. However, multispectral imaging consists of just a few measurements, while hyperspectral imaging consists of hundreds to thousands of measurements for every pixel in the scene. In 1983, NASA started developing hyperspectral systems at the Jet Propulsion Laboratory. The first system, the Airborne Imaging Spectrometer, led to the development of the powerful Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) that is still in use today. AVIRIS is connected to the outside of aircraft and is used to gather information to identify, measure, and monitor the environment and climate change. In 2001, NASA launched the first on-orbit hyperspectral imager, Hyperion, aboard the Earth Observing-1 spacecraft. Based on the hyperspectral imaging sensors used in Earth observation satellites, NASA engineers from Stennis Space Center and researchers from the Institute for Technology Development (ITD) collaborated on a new design that was smaller and incorporated a scanner that required no relative movement between the target and the sensor. ITD obtained a patent for the technology and then licensed it to a new company called Photon Industries Inc. In 2005, Lextel Intelligence Systems LLC, of Jackson, Mississippi, purchased the company and its NASA-derived technology (Spinoff 2007). Without the technical expertise to market the product, the company’s license for the scanner returned to ITD. In 2008, Themis Vision Systems LLC, of Richmond, Virginia, obtained an exclusive license for the technology. The CEO of Themis, Mark Allen Lanoue, was one of the original researchers on the staff that developed the device at ITD and saw the potential for the technology. 
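The spectral "signature" described above, hundreds of narrow-band measurements stacked up for every pixel, is usually handled in software as a three-dimensional data cube (two spatial axes plus one spectral axis). The sketch below is a minimal, hypothetical illustration in NumPy; the cube dimensions, wavelength range and the spectral-angle comparison are illustrative assumptions, not parameters or methods of any Themis or NASA instrument.

```python
import numpy as np

# Hypothetical hyperspectral cube: 200 x 200 pixels, 150 spectral bands
rows, cols, bands = 200, 200, 150
wavelengths = np.linspace(400, 2500, bands)      # nm, visible through shortwave infrared
cube = np.random.rand(rows, cols, bands)         # stand-in for measured reflectance values

# The spectral "signature" of one pixel is simply its vector of band values
x, y = 120, 85
signature = cube[y, x, :]

# One common way to compare a pixel against a known reference spectrum:
# the spectral angle between the two vectors (smaller angle = more similar shape)
reference = np.random.rand(bands)                # stand-in for a library spectrum
cos_angle = np.dot(signature, reference) / (
    np.linalg.norm(signature) * np.linalg.norm(reference))
spectral_angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))

print(f"Signature length: {signature.size} bands")
print(f"Spectral angle vs. reference: {spectral_angle:.3f} rad")
```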
In 2005, Lanoue, several colleagues, and the technology were inducted into the Space Technology Hall of Fame, created by the Space Foundation, in cooperation with NASA, to increase public awareness of the benefits that result from space exploration programs and to encourage further innovation. Themis delivers turnkey solutions in hyperspectral hardware, software, and algorithm development. Worldwide, Themis has built about 40 custom systems, including 3 at the Federal Bureau of Investigation’s hyperspectral imaging laboratory in Quantico, Virginia. With distributors and customers in more than 10 countries, Themis recently installed the first UV hyperspectral system in China to help with studies in forensic science, including fingerprint analysis. The latest product lines from Themis include the Transluminous Series, the Optoluminous Series, and HyperVisual Software. What is most unique about these hyperspectral systems is their size. Themis has developed compact, 4-pound systems—as opposed to the larger, 7- and 10-pound ITD versions—that can fit on multiple platforms including microscopes, tripods, or production lines. The Transluminous Series spans the spectrum from UV to infrared and uses a prism-grating-prism component to split incoming light into separate wavelengths. The Optoluminous Series, the newest line of reflective hyperspectral systems, uses a convex grating reflective spectrograph to gather wavelengths. Both lines feature the NASA-derived scanning technique that requires no relative movement between the target and the sensor. The HyperVisual Software is a graphical user interface-based software package for end-user communication and control. Designed for the Windows operating system, it converts scanned images into a single image format containing spatial and spectral information. HyperVisual acquires and pre-processes the data so that it is ready for analysis. A number of pre-processing routines can be run on the images and then ported into off-the-shelf image processing packages. Early on, the primary application for hyperspectral imaging was for remote sensing for agricultural and land-use planning applications. Now the number of applications continues to grow in a variety of areas: medical and life sciences, defense and security, forensics, and microscopy. Used in medical applications as a diagnostic tool, hyperspectral imaging looks at wounds and burns to monitor healing, scans skin to detect and monitor diseases, and looks inside eyes for diabetic retinopathy and clinically significant macular edema. In forensics, hyperspectral imaging examines ink colors to reveal counterfeit passports, currency, and checks. Microscopy applications include cell, spore, and DNA analysis. The U.S. Department of Agriculture used Themis systems for imaging poultry, beef, and other food products. For poultry, the system captured an image of a bird and then processed the image to determine if the bird had a defect such as a skin bruise, tear, or fecal contamination. The imaging systems also produced spectral signatures of dirt, fungi, fecal matter, and pathogens such as Salmonella and E. coli. A company called X-Rite uses a Themis hyperspectral system to assist in quality control and color measurement in paint mixing. Estée Lauder has utilized a Themis system to improve cosmetics and makeup coverage. One organization is even using the product to develop camouflage, while another is using it to detect camouflage. 
Other military and defense applications include detecting landmines and tripwires and assisting search and rescue operations. One of the more unusual applications for hyperspectral imaging is to get a closer look at paintings. "In art forensics, you look at paintings to see if they have hidden signatures. A lot of times, artists would paint over the signatures. Hyperspectral imaging helps to get a look underneath the layers of paint," says Lanoue. In fact, Lanoue recently assisted a man with a painting that had been bought at an auction for $3,000. One day, the man noticed a faint signature under the obvious signature. Using a Themis hyperspectral imaging system, Lanoue was able to see the signature underneath. Lanoue and the owner are now working with art experts to confirm what the hyperspectral imaging seems to reveal: that the painting is actually the work of the famous Spanish artist Diego Velázquez. This fact could help the painting's owner turn a significant profit. Besides using hyperspectral imaging for such exciting new applications, Lanoue has filed for new patents for next-generation scanning systems and a biofuel sensory system. He also plans to continue selling turnkey hyperspectral imaging systems, including microimaging systems. Furthermore, Lanoue is working to release a future line of intelligent imaging systems and real-time hyperspectral applications based on the NASA-derived scanning technique. Transluminous™ and Optoluminous™ are trademarks of Themis Vision Systems LLC. HyperVisual Image Analyzer® is a registered trademark of the Institute for Technology Development. Windows® is a registered trademark of Microsoft Corporation.
the narrator of a poem the intended reader of a piece tells who or what the writing is about the writer's or speaker's attitude toward the subject of a story, toward a character, or toward the audience (the readers). repetition of initial consonant sounds the use of words that imitate sounds repeated use of sounds, words, or ideas for effect and emphasis Description that appeals to the senses (sight, sound, smell, touch, taste) conjoining contradictory terms (as in 'deafening silence') Placing two elements side by side to present a comparison or contrast A figure of speech in which an object or animal is given human feelings, thoughts, or attitudes a metaphor which extends over several lines or an entire poem Understatement (or euphemism/meiosis) the presentation of something as being smaller, worse, or less important than it actually is (the opposite of hyperbole) rhyme that falls on the stressed and concluding syllables of the rhyme-words. Examples include "keep" and "sleep," "glow" and "no," and "spell" and "impel." is a rhyme that matches two or more syllables at the end of the respective lines (painted-acquainted, passion-fashion) rhyme in which the vowel sounds are nearly, but not exactly the same (e.g. the words "stress" and "kiss"); sometimes called half-rhyme, near rhyme, or partial rhyme a rhyme in which the correspondence between the two sounds is exact Perfect rhyme where the grammatical end of the line or thought coincides with the perfect rhyme. repetition of sounds within a line (but not at the end of the line) the pattern of rhymes at the ends of lines in a poem repetition of vowel sounds the repetition of consonants (or consonant patterns) especially at the ends of words. ex: ping-pong, sound-sand, round-rind patterns of regular rhythm in language a group of 2 or 3 syllables forming the basic unit of poetic rhythm A unit of speech heard as a single sound; one "beat" of a word or phrase. bearing a stress or accent syllables that are not given a relative emphasis the arrangement of spoken words alternating stressed and unstressed elements The process of marking lines of poetry to show the type of feet and the number of feet they contain five feet per line (10 syllables per line of poetry) six feet per line (12 syllables per line of poetry) one unstressed syllable followed by one stressed syllable (tra-PEZE) a poem usually addressed to a particular person, object or event that has stimulated deep and noble feelings in the poet a sad or mournful poem (usually because of a death) unrhymed verse (usually in iambic pentameter) Poetry that does not have a regular meter or rhyme scheme a group of lines in a poem two consecutive lines of poetry that rhyme three line stanza a stanza of four lines A five line stanza the running over of a sentence or thought into the next line without a pause a term that describes a line of poetry that ends with a natural pause often indicated by a mark of punctuation a pause or break within a line of poetry (marked with || symbol) The leaving out of an unstressed syllable or vowel, usually in order to keep a regular meter in a line of poetry.
a metrical line containing one foot a metrical line containing two feet a metrical line with three feet a metrical line containing four feet a metrical line containing six feet a metrical line containing seven feet one stressed syllable followed by an unstressed syllable (PUMP-kin) A metrical foot consisting of two stressed syllables. (PAN-CAKE) a metrical unit with unstressed-unstressed syllables (of the) metrical measurement of two unstressed syllables and then one stressed one (an-a-PEST) A metrical foot consisting of a stressed syllable followed by two unstressed syllables (MAR-ma-lade)
Transportation vehicles contain different types of products, from seats and linings to batteries and electrical cables. In order to improve fuel efficiency and lower vehicle weights, the materials involved are often plastics, textiles and foams. The fire safety requirements are based on the type of vehicle, the function of the product used and its location in the vehicle. The highest level of fire safety is required for products used in commercial aircraft, regulated in the U.S. and worldwide by the U.S. Federal Aviation Administration (FAA). Most plastic products in airplanes must meet a very stringent heat release test, the Ohio State University (OSU) calorimeter. Aircraft seats do not have to meet the requirements of the OSU heat release test but must meet other flammability requirements that test heat release. For automobiles, materials exposed to the air in the passenger compartment must meet the flame spread test in FMVSS (Federal Motor Vehicle Safety Standard) 302, as regulated by the National Highway Traffic Safety Administration (NHTSA). NHTSA also regulates school buses, which are required to meet the same fire safety standards as cars. The National Congress on School Transportation recommends an additional fire test for school bus seats, but there is no regulation. Other buses also are regulated by NHTSA, which has not mandated any flammability requirements beyond FMVSS 302. But most commercial bus operators use a set of voluntary guidelines based primarily on a flame spread test and a smoke test. These two standards were developed by ASTM, an organization that develops international voluntary standards. Trains traveling between cities and between states are regulated by the Federal Railroad Administration, but most train operators use a set of requirements outlined by the National Fire Protection Association (NFPA) and contained in NFPA 130, many of them similar to those used for buses. NFPA 130 is updated regularly to ensure improved fire safety for passenger trains and for subways (with the same requirements). NFPA 130 focuses heavily on requirements for electric cables and they must meet a vertical cable tray test with pass/fail criteria for both flame spread and smoke. Ships are regulated primarily by the International Maritime Organization (IMO) and its SOLAS (Safety of Life at Sea) convention, which has important requirements for flame spread of surface linings using the IMO/LIFT apparatus and for electric cables (for flame spread and smoke). The U.S Coast Guard voluntarily adheres to the IMO requirements. Internationally, requirements for aircraft and ships are the same as in the U.S., as they are set by the FAA and IMO, respectively. The FMVSS 302 test is called ISO 3795 in Europe, and sets the fire safety requirements for automobiles. With regard to trains, the European Union has developed a set of requirements, which parallel many of the requirements in NFPA 130. However, NFPA 130 is, in fact, extensively used in many countries by rail and subway operators.
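The regulatory landscape described above is essentially a lookup from vehicle type to regulator and key test. A minimal sketch of that mapping as a data structure follows; it only restates the pairings given in the text and is not an authoritative compliance reference.

```python
# Vehicle type -> (primary regulator, key flammability requirement), as summarized above.
FIRE_SAFETY_REQUIREMENTS = {
    "commercial aircraft": ("FAA", "OSU calorimeter heat-release test for most cabin plastics"),
    "automobile":          ("NHTSA", "FMVSS 302 flame-spread test for passenger-compartment materials"),
    "school bus":          ("NHTSA", "FMVSS 302, plus a recommended (non-regulatory) seat fire test"),
    "commercial bus":      ("NHTSA", "FMVSS 302, plus voluntary ASTM flame-spread and smoke guidelines"),
    "passenger train":     ("FRA", "NFPA 130 requirements (flame spread, smoke, vertical cable tray test)"),
    "ship":                ("IMO", "SOLAS requirements (IMO/LIFT flame spread, cable flame and smoke tests)"),
}

def describe(vehicle_type: str) -> str:
    regulator, requirement = FIRE_SAFETY_REQUIREMENTS[vehicle_type]
    return f"{vehicle_type}: regulated by {regulator}; {requirement}"

print(describe("passenger train"))
```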
NASA's Infrared Observatory Measures Expansion of Universe Updated Oct. 5, 2012 to include a similar result from another study WASHINGTON -- Astronomers using NASA's Spitzer Space Telescope have announced one of the most precise measurements yet of the Hubble constant, or the rate at which our universe is stretching apart. The Hubble constant is named after the astronomer Edwin P. Hubble, who astonished the world in the 1920s by confirming our universe has been expanding since it exploded into being 13.7 billion years ago. In the late 1990s, astronomers discovered the expansion is accelerating, or speeding up over time. Determining the expansion rate is critical for understanding the age and size of the universe. Unlike NASA's Hubble Space Telescope that views the cosmos in visible and short-wavelength infrared light, Spitzer took advantage of long-wavelength infrared light for its latest Hubble constant measurement of 74.3 kilometers per second per megaparsec. A megaparsec is roughly three million light-years. This finding agrees with an independent supernovae study conducted last year by researchers primarily based at the Space Telescope Science Institute in Baltimore, Maryland, and improves by a factor of 3 on a seminal 2001 Hubble telescope study using a similar technique as the current study. "Spitzer is yet again doing science beyond what it was designed to do," said project scientist Michael Werner at NASA's Jet Propulsion Laboratory in Pasadena, Calif. Werner has worked on the mission since its early concept phase more than 30 years ago. "First, Spitzer surprised us with its pioneering ability to study exoplanet atmospheres," said Werner, "and now, in the mission's later years, it has become a valuable cosmology tool." In addition, the findings were combined with published data from NASA's Wilkinson Microwave Anisotropy Probe to obtain an independent measurement of dark energy, one of the greatest mysteries of our cosmos. Dark energy is thought to be winning a battle against gravity, pulling the fabric of the universe apart. Research based on this acceleration garnered researchers the 2011 Nobel Prize in physics. "This is a huge puzzle," said study lead author Wendy Freedman of the Observatories of the Carnegie Institution for Science in Pasadena. "It's exciting that we were able to use Spitzer to tackle fundamental problems in cosmology: the precise rate at which the universe is expanding at the current time, as well as measuring the amount of dark energy in the universe from another angle." Freedman led the ground-breaking Hubble Space Telescope study that earlier had measured the Hubble constant. Glenn Wahlgren, Spitzer program scientist at NASA Headquarters in Washington, said infrared vision, which sees through dust to provide better views of variable stars called cepheids, enabled Spitzer to improve on past measurements of the Hubble constant using Cepheids. "These pulsating stars are vital rungs in what astronomers call the cosmic distance ladder: a set of objects with known distances that, when combined with the speeds at which the objects are moving away from us, reveal the expansion rate of the universe," said Wahlgren. Cepheids are crucial to the calculations because their distances from Earth can be measured readily. In 1908, Henrietta Leavitt discovered these stars pulse at a rate directly related to their intrinsic brightness. To visualize why this is important, imagine someone walking away from you while carrying a candle. 
The farther the candle traveled, the more it would dim. Its apparent brightness would reveal the distance. The same principle applies to cepheids, standard candles in our cosmos. By measuring how bright they appear on the sky, and comparing this to their known brightness as if they were close up, astronomers can calculate their distance from Earth. Spitzer observed 10 cepheids in our own Milky Way galaxy and 80 in a nearby neighboring galaxy called the Large Magellanic Cloud. Without the cosmic dust blocking their view at the infrared wavelengths seen by Spitzer, the research team was able to obtain more precise measurements of the stars' apparent brightness, and thus their distances. These data opened the way for a new and improved estimate of our universe's expansion rate. "Just over a decade ago, using the words 'precision' and 'cosmology' in the same sentence was not possible, and the size and age of the universe was not known to better than a factor of two," said Freedman. "Now we are talking about accuracies of a few percent. It is quite extraordinary." The study appears in the Astrophysical Journal. Freedman's co-authors are Barry Madore, Victoria Scowcroft, Chris Burns, Andy Monson, S. Eric Person and Mark Seibert of the Observatories of the Carnegie Institution and Jane Rigby of NASA's Goddard Space Flight Center in Greenbelt, Md. For more information about Spitzer, visit: For more information on last year's supernovae study, visit: For more information about WMAP, visit: - end - text-only version of this release NASA press releases and other information are available automatically by sending a blank e-mail message to To unsubscribe from this mailing list, send a blank e-mail message to Back to NASA Newsroom | Back to NASA Homepage
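The standard-candle reasoning above can be made concrete with a little arithmetic: a Cepheid's period gives its intrinsic brightness, the inverse-square law (via the distance modulus) turns apparent brightness into a distance, and dividing a galaxy's recession velocity by that distance yields a Hubble-constant estimate. The sketch below uses an illustrative Leavitt-law calibration and made-up observational numbers, not the actual Spitzer measurements; real determinations use Cepheids to calibrate further rungs of the distance ladder.

```python
import math

# Illustrative Leavitt-law calibration (absolute magnitude vs. log10 of period in days).
# The coefficients are round-number assumptions for demonstration only.
def cepheid_absolute_magnitude(period_days: float) -> float:
    return -2.8 * math.log10(period_days) - 1.4

# Distance from the distance modulus: m - M = 5*log10(d_pc) - 5
def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Made-up example: a 30-day Cepheid observed at apparent magnitude 26.0
M = cepheid_absolute_magnitude(30.0)
d_mpc = distance_parsecs(26.0, M) / 1.0e6

# If its host galaxy recedes at 1480 km/s, the implied Hubble constant is v / d.
velocity_km_s = 1480.0
H0 = velocity_km_s / d_mpc

print(f"Absolute magnitude: {M:.2f}")
print(f"Distance: {d_mpc:.1f} Mpc")
print(f"H0 estimate: {H0:.1f} km/s per Mpc")   # roughly the ~70-75 range discussed above
```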
Counting with Quarters and Half Dollars - 2nd Grade Math. Students practice counting money with a collection of coins, including a quarter and a half-dollar. In mixed collections, students need to follow the same method of skip counting. It is easier if they arrange coins in decreasing order of their value before counting. Common Core Alignment: 2.MD.8 Solve word problems involving dollar bills, quarters, dimes, nickels, and pennies, using $ and ¢ symbols appropriately.
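The skip-counting strategy described above (arrange the coins from largest to smallest value, then keep a running total) translates directly into a few lines of code. This is only an illustrative sketch of the method, not part of any Splash Math material.

```python
# Coin values in cents
COIN_VALUES = {"half-dollar": 50, "quarter": 25, "dime": 10, "nickel": 5, "penny": 1}

def count_coins(coins):
    """Total a mixed collection of coins, skip-counting from largest to smallest value."""
    total = 0
    for coin in sorted(coins, key=lambda c: COIN_VALUES[c], reverse=True):
        total += COIN_VALUES[coin]
        print(f"{coin:>11}: running total {total} cents")
    return total

# Example: one half-dollar, two quarters, one dime, three pennies
total_cents = count_coins(["penny", "quarter", "half-dollar", "dime", "penny", "quarter", "penny"])
print(f"Total: ${total_cents / 100:.2f}")   # $1.13
```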
Changes in the earth’s atmosphere, the additional greenhouse effect and the resultant changes in the climate . . . represent a global danger for humanity and the entire biosphere of the earth. If no effective counteracting measures are taken, dramatic consequences are to be expected for all of the earth’s regions. This warning will undoubtedly seem familiar, perhaps even mind-numbingly so. But if the substance sounds like the same-old same-old, the date on which it was issued might seem surprising. It was not in the run-up to the Copenhagen climate summit or indeed anytime in the last decade. The above passage is nearly two decades old. It comes from a resolution adopted by the German Bundestag in September 1991. The resolution in question summarizes and endorses the recommendations of a parliamentary commission of inquiry on “Taking Precautionary Action to Protect the Earth’s Atmosphere.” The commission had been set up in October 1987. Appearing before the Bundestag some seven months earlier, Chancellor Helmut Kohl had warned that the “greenhouse effect” threatened to bring about “a grave pattern of climate change” and had called for the burning of fossil fuels to be limited, not just in Germany but “worldwide.” The June 1987 motion to form the commission envisioned “greenhouse gas” emissions producing “a global warming of from three to seven degrees Celsius” and called for counteracting measures to be taken even in the absence of scientific corroboration of the supposed threat—since otherwise, the document concludes darkly, “in a few decades . . . it could be too late.” The original impulse to take action had come from the German Physics Society, which in January 1986 published a “Warning of an Impending Climate Catastrophe.” Just over six months later, in August, the newsweekly Der Spiegel popularized the German physicists’ “warning” in a spectacular cover story headlined “The Climate Catastrophe.” The image on the cover of the magazine depicted Cologne’s historic cathedral surrounded by the waters of the Atlantic Ocean: a consequence of the melting of the polar ice caps, as was explained on the inside of the issue. Thus was the “global warming” scare born. In Germany, in 1986. In a report submitted to the Bundestag on October 2, 1990, the commission of inquiry laid out a veritable “roadmap” for concerted international action to combat “climate change.” The commission called for CO2 emissions to be cut by 30 percent by the year 2005 in all “economically strong industrialized countries.” Germany itself was called upon to meet this goal. But the formulation “economically strong industrialized countries” was clearly tailored to fit Germany’s major economic rivals: Japan and the United States. The report also calls for a 20-25 percent reduction in CO2 emissions among all the countries of the then European Community and a 20 percent reduction for all industrialized countries. “One needs to convince the other countries concerned of the necessity of such ambitious targets,” the report explains, “and to arrive as quickly as possible at corresponding international agreements.” The report declared it to be “urgently necessary” that a first international convention on “climate-relevant emissions” be adopted “at the latest in 1992 during the U.N. Conference on Environment and Development in Brazil.” And so it would come to pass. It was at the 1992 U.N. 
conference—more commonly known as the “Rio Earth Summit”—that a certain American senator began his career as would-be prophet of warming-induced gloom and doom. Al Gore’s book Earth in the Balance was timed for release just before the summit began. It was also here that the United Nations Framework Convention on Climate Change was opened for signature. It would be wrong to say that the climate change convention was merely “anticipated” by the work of the German Bundestag’s commission of inquiry. The commission’s 1990 report contains a full draft of such a “framework convention.” The proposed convention was supposed to be supplemented later on by a protocol establishing the concrete emissions reduction obligations of the parties. This would become the Kyoto Protocol. The German commission stated that the protocol should “come into force by 1995 at the latest.” In this respect, however, the international community was not able to keep to the schedule laid down by the German parliamentary commission. The Kyoto Protocol would first be adopted in 1997, and it would only come into force in 2005—as is well known, without the participation of the United States. Under the terms of the treaty, the assigned emissions targets are supposed to be met by the end of 2012.
Tent caterpillars can be a severe nuisance in southwestern Colorado. Spring outbreaks result in masses of tents in trees, defoliation of deciduous trees, and ample frustration for the landowner trying to control the masses of larvae. Their wriggling tents are where the larvae are found. Different tent caterpillars will feed on various species of trees with fruit trees, aspen, mountain mahogany, oak, ash, and cottonwood used as hosts. Tent caterpillars feed and establish their silky tents in the crotches of trees and shrubs in late spring. During heavy infestations, the tent caterpillars will migrate and feed on a variety of other plants. Several kinds of caterpillars feed in groups or colonies on trees and shrubs and produce a silken shelter or tent. Most common in spring are various types of tent caterpillars (Malacosoma species). During summer, large loose tents produced by fall webworm (Hyphantria cunea) are seen on the branches of cottonwoods, chokecherry, and many other plants. Occasionally, early spring outbreaks of caterpillars of the tiger moth (Lophocampa species) attract attention. Four species of tent caterpillars occur in Colorado. The western tent caterpillar (M. californicum) most often is seen infesting aspen and mountain-mahogany during May and early June. Many other plants, particularly fruit trees, may also be infested. The western tent caterpillar is the most common and damaging tent caterpillar, sometimes producing widespread outbreaks that have killed large areas of aspen. In stands of gambel oak, the sonoran tent caterpillar (M. tigris) occurs and the M. incurvatum discoloratum can be found feeding on cottonwoods and related trees during April and early May in the Tri River area of western Colorado. In northeastern Colorado, the eastern tent caterpillar (M. americanum) can occasionally be found on fruit trees. These tent caterpillars spend the winter in egg masses glued to twigs of the host plant. Prior to winter, the insects transform to caterpillars and emerge from the eggs shortly after bud break. The newly emerged caterpillars move to crotches of branches and begin to produce a mass of dense silk. This silken tent is used by the developing insects for rest and shelter during the day. They also molt (shed their skins) while on the silk mats. Most often the caterpillars leave the silk shelter to feed at night, returning by daylight, although they sometimes feed during daylight hours as well. The tent is gradually enlarged as the caterpillars grow. The caterpillars become full grown in late spring. Most wander from the area of the tent and spin a white cocoon of silk, within which they pupate. The adult moths, which are light brown with faint light wavy bands on the wings emerge about two weeks later. The moths mate and the females then lay a single egg mass. Tent caterpillars produce only one generation per year. The most common and damaging tent caterpillar found in urban areas is the forest tent caterpillar, M. disstria. Although its life history is similar to other tent caterpillars, the forest tent caterpillar does not produce a permanent tent as do the other species. Instead, they make light mats of silk on trunks and branches that are used as temporary resting areas during the day. Forest tent caterpillars feed on a wide variety of plants, including aspen, ash and various fruit trees. Occasionally, they produce outbreaks that can damage plants. Fall webworm is the most common tent caterpillar observed during midsummer. 
It is found on many different plants, although chokecherry and cottonwood are the most common hosts. Winter is spent as a pupa, loosely buried under protective debris in the vicinity of previously infested trees. The adults, which are nearly pure white moths, emerge in June and July, mate and lay eggs in masses on the leaves of trees and shrubs. Eggs hatch shortly afterwards. The young caterpillars feed as a group, covering the few leaves on which they feed with silk. As they get older, fall webworms progressively cover larger areas of the plant with loose silk, and generally feed within the loose tent that they produce. When full grown, the caterpillars disperse and sometimes create a nuisance as they crawl over fences and sides of homes. There is only one generation of fall webworm known to occur in Colorado, although two or more generations are produced in parts of Kansas, Oklahoma, Texas and other nearby states. Caterpillars of tiger moths (L. ingens, L. argentata) make a dense mat of silk on the terminal growth of ponderosa pine, lodgepole pine, pinyon, Douglas-fir, white fir and juniper. They are one of the few caterpillars that continue to feed and develop during winter. They produce and occupy tents through early spring. By June, they complete their development and pupate. The adult moths emerge and fly during July and August, laying masses of eggs that hatch before fall. Historically, outbreaks of tiger moths occur most commonly in the Black Forest area near Colorado Springs and in West Slope pinyon-juniper stands. Top-kill of damaged trees commonly results from these injuries.

Minor tent-producing insects

A few other insects are found in Colorado and produce silken tents. Uglynest caterpillars (Archips cerasivorana) can be found on chokecherry, where they produce a messy nest of silk mixed with bits of leaves and insect frass. Outbreaks of the rabbitbrush webbing moth (Synnoma lynsyrana) occasionally damage rabbitbrush. An uncommon group of sawflies, known as web-spinning sawflies, also produce mats of silk on spruce, pines or plum. Many natural enemies attack all of the tent-making caterpillars. Birds, predaceous bugs and various hunting wasps prey on the caterpillars. Tachinid flies and parasitic wasps are important parasites. Tent caterpillars also are susceptible to a virus disease that can devastate populations. Because of these biological controls, serious outbreaks rarely last more than a single season. An exception is found in some communities where fall webworm is an annual problem. One reason for these sustained outbreaks may be the loss of biological controls due to aerial mosquito spraying. The microbial insecticide Bacillus thuringiensis (Dipel, Thuricide, etc.) can be an effective and selective control of all the tent-making caterpillars. However, Bt must be eaten by the insect to be effective; to control fall webworm, it must therefore be applied before the colony covers all of the leaves. Several contact insecticides also are effective for tent-making caterpillars. Sevin (carbaryl) has long been available. More recently, various pyrethroids such as permethrin, cyfluthrin and esfenvalerate are available for homeowner application and are highly effective. Spinosad, a naturally derived product (sold as Conserve to commercial applicators), is very selective, with little effect on species other than caterpillars. If accessible, tents may also be pulled out and removed. More severe measures, such as pruning or burning, are not recommended because they can cause more injury than the insects.
Often, there is no need to control these insects. This is particularly true for fall webworm, which feeds late in the season. Such late season injuries can be well tolerated by plants. Control normally is warranted only where there are sustained, high levels of defoliation over several years. Information provided by W.S. Cranshaw, Colorado State University Extension entomologist and professor, bioagricultural sciences and pest management.
Light is a form of energy produced by a ___________.
An example of a non-luminous object is ___________.
The phenomenon by which the incident light falling on a surface is sent back into the same medium is known as ___________.
When light is incident on a polished surface, ___________ reflection takes place.
An object becomes invisible when it undergoes ______ reflection.
According to the laws of reflection, ___________.
The image formed by a plane mirror is always _______.
The centre of the sphere of which the spherical mirror forms a part is called ____________.
The focus of a concave mirror is ________.
A converging mirror is known as ________.
The relation between the focal length and radius of curvature of a mirror is _______.
Radius of curvature of a concave mirror is always _____ to the mirror.
An image formed by a convex mirror is always ________.
If the image formed by a concave mirror is virtual, erect and magnified, then the object is placed __________.
Dentists use a _____________ to focus light on the tooth of a patient.
An object is placed 1.5 m from a plane mirror. How far is the image from the person?
An object placed 2 m from a plane mirror is shifted by 0.5 m away from the mirror. What is the distance between the object and its image?
What is the angle between the incident and reflected rays when a ray of light is incident normally on a plane mirror?
A ray of light is incident on a plane mirror and the angle of incidence is 25 degrees. What is the angle of reflection?
A ray of light is incident on a plane mirror and the angle of reflection is 50°. Calculate the angle between the incident ray and the reflected ray.
Which of the following is used to make a periscope?
Which mirror has a wider field of view?
A ray of light passing through the _______ retraces its path.
When an object is placed at the focus of a concave mirror, the image will be formed at ________.
Butter paper is an example of a _______ object.
An object of size 2.0 cm is placed perpendicular to the principal axis of a concave mirror. The distance of the object from the mirror equals the radius of curvature. The size of the image will be ______________.
If an incident ray passes through the centre of curvature of a spherical mirror, the reflected ray will __________________.
Ray optics is valid when characteristic dimensions of obstacles are ___________.
Focal length of a plane mirror is ___________.
Magnification produced by a plane mirror is: (a) +1 (b) -1 (c) < 1 (d) > 1
The magnification produced by a mirror is 1/3; the type of mirror is ___________.
A stick partially immersed in water appears broken due to ___________.
Mirage is produced in hot deserts due to ___________.
The unit of power of a lens is ___________.
Astigmatism can be corrected by ___________.
The human eye can distinguish between different colours due to the presence of ___________.
The accommodation of a normal eye is from ___________.
A person suffering from presbyopia should use ___________.
The rod-shaped cells on the retina respond to ___________.
Myopia and hypermetropia can be corrected by using ________ lens and ________ lens respectively.
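Several of the numerical questions above follow from two simple relations: a plane mirror forms an image as far behind the mirror as the object is in front of it, and for a spherical mirror the focal length is half the radius of curvature (f = R/2), with object and image distances related by the mirror equation 1/f = 1/d_o + 1/d_i. A short sketch working through a few of the quiz values (the 0.40 m radius of curvature is an illustrative assumption):

```python
# Plane mirror: the image lies as far behind the mirror as the object is in front.
object_distance = 1.5                      # metres from the mirror
print(f"Person to image: {2 * object_distance} m")          # 3.0 m

# Shift an object originally 2 m away by another 0.5 m: separation is twice the new distance.
new_distance = 2.0 + 0.5
print(f"Object to image after the shift: {2 * new_distance} m")   # 5.0 m

# Law of reflection: angle of reflection equals angle of incidence.
angle_incidence = 25.0
print(f"Angle of reflection: {angle_incidence} deg")
print(f"Angle between incident and reflected rays: {2 * angle_incidence} deg")

# Spherical mirror: f = R/2 and 1/f = 1/d_o + 1/d_i (magnitudes, real image, concave mirror).
R = 0.40                                   # radius of curvature, m (illustrative value)
f = R / 2
d_o = R                                    # object placed at the centre of curvature
d_i = 1 / (1 / f - 1 / d_o)                # image also forms at the centre of curvature
print(f"Image distance: {d_i:.2f} m (image is the same size as the object)")
```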
"Everything in the world can be described by formulas," mathematicians like to say, and perhaps it really is possible. They prefer precise information and have no patience for filler. That same precision and brevity make English grammar tables useful not only for students at the beginner level, but also for those who have been studying the language for a long time.

Using tables of English grammar

A table is, above all, a special way of presenting information. Only the essential material goes into a table, replacing many pages of ordinary text. Tables are a valuable supplementary resource for learning English, first of all because of how completely they present the material being studied. English grammar tables also help organize that material. Visual clarity is the main advantage of tables in the process of learning English, and it also saves a great deal of the time needed to explain new material. In a table, the material on a particular topic is presented in full rather than piecemeal, which makes it easier to find a specific point. English grammar tables not only help consolidate material you have already covered, but also make it quick to refresh material studied earlier.

Tables aimed at beginners contain only simple, basic material. They require no further explanation from a teacher and are easy to use on your own. Tables designed for advanced students contain more complicated elements, with the basic rules that explain them placed outside the table.

You can also create your own tables of English grammar rules. The value of this work is significant, because in compiling them you work through the studied material once again; and the principle of building such a table is simple and clear to you. In addition, a table does not take up much space, and finding the information you need does not take much time. Whether you are learning English over Skype, taking a course or prefer to study on your own, you will definitely find tables useful.

On our site you can find some information on English grammar presented in tabular form:
- Table of irregular verbs in English
- Table: plural nouns
- Table of English tenses
- Table: conditional sentences
- Tables of interrogative sentences in the various tenses
- The subjunctive mood in a simple sentence (Oblique Moods in a Simple Sentence)
- Ways of expressing future actions
- Subjunctive forms in a complex sentence, in schemes
Winter buffer strip. Courtesy of USDA NRCS.

Most current adoption takes the form of fairly straightforward practices, e.g., riparian buffers and windbreaks. When considered as an innovative system, agroforestry is a collective name for land use systems and practices in which woody perennials are deliberately integrated with crops and/or animals on the same land management unit. The integration can be either in a spatial mixture or in a temporal sequence. There are normally both ecological and economic interactions between woody and non-woody components in agroforestry (World Agroforestry Centre [ICRAF], 1993). For example, silvopasture and permacultural alley cropping focus on the many benefits of agricultural-landscape woody systems, including energy input savings, positive soil and water impacts, and enhanced wildlife habitat and carbon sequestration. Agroforestry incorporates at least several plant species into a given land area and creates a more complex habitat than those found in conventional agricultural or forestry situations. These habitats can produce a wider range of marketable products, create more sustainable business models and support a wider variety of birds, insects and other animals in the environment. There is enormous potential for economic benefit to practitioners and environmental benefit to the land; however, it is a system of practices that is usually only partially employed in most current applications. This is changing as producers, stakeholders, agencies and university researchers are coming together to collaborate and coordinate initiatives, and to leverage public funding to promote agroforestry as a mainstream value-added practice where agricultural cropping, forestry and production-system analysis come together in highly integrated and complementary systems. The sustainable forest management of an agroforestry system emphasizes a mix of trees or shrubs that are intentionally used in conjunction with agricultural or non-timber production systems, perhaps in very non-traditional "forest" settings. Knowledge, careful selection of species and good management of trees and crops or livestock grazing are needed to optimize the production and positive effects within the system and to minimize negative competitive effects. Biodiversity in agroforestry systems is typically higher than in conventional agricultural systems. There can be significant economic returns to well-thought-out systems where products focused on viable, high-value markets drive sustainable rural enterprises. In regions dominated by row crops, any woody-biomass (forest) ecosystem addition provides critical habitat and travel corridors for a diverse array of game and non-game species; helps to stabilize soil and maintain soil quality; promotes efficient cycling of water and nutrients; and provides additional hydrological benefits ranging from protecting and enhancing aquatic ecosystems to moderating storm effects and peak and base flows in watersheds. Because forests and societies are in constant flux, the desired outcome of sustainable forest and agroforest management is not a fixed one. The ecosystem approach is a strategy for the integrated management of land, water and living resources that promotes conservation and sustainable use in an equitable way, based on the application of appropriate scientific methodologies focused on levels of biological organization.
It also recognizes that humans, with their cultural and social diversity, are an integral ecosystem component that must not be ignored. For this reason, agroforestry managers tend to develop their use and sustainability plans in a holistic fashion, often in consultation with neighboring citizens, cooperating businesses, industry organizations and end users of the products they produce. In the simplest terms, the concept can be described as the attainment of balance; balance between increasing societal demand for forest products and benefits, and the preservation of forest or agroforest health and diversity. This balance is critical to the survival of managed forest systems and to the prosperity of all forest-and agriculture-dependent rural communities. August 2010 ... [Agroforestry]
Culture shock is a reaction to the many changes involved in exposure to a new culture. Children exhibit culture shock in various ways, from physical and emotional signs to cognitive and social indicators. For details, see the resource sheet “Culture Shock: Signs and Support Strategies”. Children born in Canada may still experience culture shock because they first encounter a different culture when they enter the childcare program. If they also experience a new language for the first time, the shock is often greater. A common misconception is that children adapt easily to a new language and culture. In fact, children usually have a more difficult time than adults because - they do not have strong coping mechanisms for stress; - their feelings of security come from their family who are behaving differently because of their own culture shock; - they experience things physically and so huge physical changes have a stronger effect on them (e.g., heavier clothing for winter, different foods, strange bed). The Connection between Separation Anxiety and Culture Shock One of the signs of culture shock is extreme separation anxiety. The impact of not having a gradual separation from the parent is therefore stronger (e.g., the child more frequently builds mistrust, the bonding with the parent is more readily weakened and the likelihood of having difficulty forming relationships is heightened). To reduce culture shock, the first step must be to have a gradual separation from the parent with a consistent caregiver. Signs of Culture Shock Physically a child experiencing culture shock may be frequently ill, may appear listless or may be extremely active. They may not be able to control their emotions. Many children are also unable to be involved in play, or their play patterns and social interest may be quite limited. Children’s self esteem may be weak and their home language skills may regress or disappear. The child may have difficulty focusing and listening even in their home language. Not all children suffer from culture shock, however, and the intensity of the symptoms will vary greatly. Factors Influencing Cultural Adaptation and Strategies Cultural Adaptation of Family Members If a parent is depressed, anxious, feeling isolated or is also experiencing culture shock, the child may feel they have lost their anchor. The parent may be distant or tearful. Their language may change to English or become more directing. By gradually building trusting relationships, you can help parents become more connected to their children. You can also introduce families to each other to help build their social support networks. Newcomer children can lose their sense of identity when the family switches the language used at home; when signs of their culture are less evident; or when dress, rituals and exposure to others from their culture is reduced. If the parent becomes less playful because of culture shock, the child may again become confused. Educators can support children’s identity by using the child’s correct name in songs and conversations and learning key words or phrases in the child’s first language. The family is an excellent resource for sharing songs, stories, rituals and materials from their culture. Including various cultural or family practices in your program also shows respect and builds in familiarity for the child. Children often experience difficulties eating and sleeping during separation, as well as during the early stages of culture shock. 
Unknowingly, families may initiate changes like weaning from the breast, bottle or soother; potty training; or requiring children to sleep on their own. It is advisable to ask parents to hold off on making those changes while the child is settling. To make children more comfortable, educators can get suggestions from families on favourite foods, how the child is accustomed to falling asleep, etc. Stages of Culture Shock Children arrive in the childcare program in various stages of culture shock. Their previous experience in group care, their knowledge of English, the family’s ability to support them and the gentleness and understanding of the caregivers will all make a difference when it comes to the impact of culture shock. In some cases, the symptoms may last years while in others it may be only months. In the early stages of intense culture shock the child might be unable to tolerate eye contact, may be terrified of strangers and could have extreme separation anxiety reactions (e.g., vomiting, flailing). Some children may be rigid and uninterested in their surroundings, appearing almost catatonic. With appropriate supports, children may be able to tolerate separation yet still be unable to play. They may observe, daydream or lack focus for long periods of time. Many children have a lot of anger during this time and may unexpectedly lash out at other children or objects. Some children may reject everything unknown or will be able to gradually tolerate a situation only to regress when something is changed (e.g., their educator is absent or another family member brings them to the program). As children’s shock decreases they are better able to listen and process information. They begin to become curious again and try to understand some words in the new language. The symptoms of culture shock may seem to fully disappear until an unknown trigger sets everything backwards. This is usually temporary and happens most frequently when the child is ill or lacking sleep. Older children have different symptoms such as rejection of the new or home culture and language and extreme anger about being uprooted. Julie Dotsch is an ECE Diversity consultant for her company One World. She is well known in the community for her interactive workshops and her specialized knowledge of immigrant preschoolers and their families. Julie can be contacted at [email protected].
The linguist George Lakoff has written extensively about the metaphors that shape human languages. He points out that we often talk about abstract concepts and emotions using metaphors to physical objects. For example, we have a lot of language that treats the emotion anger as if it were heated fluid in a container. We might say, "John felt the pressure building up inside of him until he finally blew his top." This metaphor is an effective way of talking about anger. It also reflects a common belief about the way anger works. There is a general assumption that anger and frustration build up inside of us and that they eventually have to be let out in an energetic fury. Based on this metaphor, people often believe that they can prevent themselves from blowing up by letting off a little steam (to use the metaphor some more...). For example, you might issue a primal scream or hit a punching bag to release some of your anger. People assume that this process, called catharsis, is an effective way of allowing your anger to bubble up to the surface. Unfortunately, it doesn't work.
Brad Bushman, Roy Baumeister, and Angela Stack looked at this issue in a 1999 paper in the Journal of Personality and Social Psychology. They manipulated people's anger in a laboratory experiment. Participants wrote an essay on a sensitive topic and were then told that their essay would be evaluated by a peer. In actuality, the feedback they were given was assigned by the experimenter. In the high-anger condition, people were told that their essay was poor and was "one of the worst they had ever read." This feedback is known to make people upset. Soon afterward, some people were given the opportunity to punch a punching bag for 2 minutes. Others did nothing at all. Then, a short time later, everyone played a game against a (fictional) opponent. Over the course of the game, participants had a chance to punish their opponent with blasts of noise. The loudness of the noise and the length of the noise have been used as measures of aggression.
The belief in catharsis would predict that people would be less aggressive if they had a chance to punch a punching bag after getting angry than if they had to sit and do nothing after getting angry. Instead, the opposite result was obtained. The people who punched the punching bag were actually more aggressive than the people who did nothing.
What is going on here? Punching a punching bag makes a connection for people between anger and aggression. That is, it reinforces the link between being angry and acting in an aggressive manner. These connections between emotional states and behavior are an important part of what determines the way we act.
So, these results suggest that it is better to take a few moments and do nothing when you are angry. Sitting quietly or meditating is a much more effective way of calming yourself down than attempting to let off steam through another aggressive act.
The 2014 National Climate Assessment provides the most comprehensive current analysis of the observed and projected consequences for the U.S. of global climate disruption. Here, extracted from the 30 chapters in the final report, we look at the key findings and messages of Chapters 1 (Overview) and 2 (Our Changing Climate). The following information is extracted from the final draft of the 2014 National Climate Assessment (full report). Download the full report or individual chapters at http://nca2014.globalchange.gov/downloads. This major report was prepared by several hundred scientific and technical experts under the oversight of the National Climate Assessment and Development Advisory Committee and was released by the U.S. government on May 6. A complete listing of key findings and messages in the report is available here in PDF format. 2014 NATIONAL CLIMATE ASSESSMENT – FINAL REPORT KEY FINDINGS & MESSAGES Chapter 1. Overview These findings distill important results that arise from this National Climate Assessment. They do not represent a full summary of all of the chapters’ findings, but rather a synthesis of particularly noteworthy conclusions. 1. Global climate is changing and this is apparent across the United States in a wide range of observations. The global warming of the past 50 years is primarily due to human activities, predominantly the burning of fossil fuels. Many independent lines of evidence confirm that human activities are affecting climate in unprecedented ways. U.S. average temperature has increased by 1.3°F to 1.9°F since record keeping began in 1895; most of this increase has occurred since about 1970. The most recent decade was the warmest on record. Because human-induced warming is superimposed on a naturally varying climate, rising temperatures are not evenly distributed across the country or over time. 2. Some extreme weather and climate events have increased in recent decades, and new and stronger evidence confirms that some of these increases are related to human activities. Changes in extreme weather events are the primary way that most people experience climate change. Human-induced climate change has already increased the number and strength of some of these extreme events. Over the last 50 years, much of the United States has seen an increase in prolonged periods of excessively high temperatures, more heavy downpours, and in some regions, more severe droughts. 3. Human-induced climate change is projected to continue, and it will accelerate significantly if global emissions of heat-trapping gases continue to increase. Heat-trapping gases already in the atmosphere have committed us to a hotter future with more climate-related impacts over the next few decades. The magnitude of climate change beyond the next few decades depends primarily on the amount of heat-trapping gases that human activities emit globally, now and in the future. 4. Impacts related to climate change are already evident in many sectors and are expected to become increasingly disruptive across the nation throughout this century and beyond. Climate change is already affecting societies and the natural world. Climate change interacts with other environmental and societal factors in ways that can either moderate or intensify these impacts. The types and magnitudes of impacts vary across the nation and through time. Children, the elderly, the sick, and the poor are especially vulnerable. 
There is mounting evidence that harm to the nation will increase substantially in the future unless global emissions of heat-trapping gases are greatly reduced. 5. Climate change threatens human health and well-being in many ways, including through more extreme weather events and wildfire, decreased air quality, and diseases transmitted by insects, food, and water. Climate change is increasing the risks of heat stress, respiratory stress from poor air quality, and the spread of waterborne diseases. Extreme weather events often lead to fatalities and a variety of health impacts on vulnerable populations, including impacts on mental health, such as anxiety and post-traumatic stress disorder. Large-scale changes in the environment due to climate change and extreme weather events are increasing the risk of the emergence or reemergence of health threats that are currently uncommon in the United States, such as dengue fever. 6. Infrastructure is being damaged by sea level rise, heavy downpours, and extreme heat; damages are projected to increase with continued climate change. Sea level rise, storm surge, and heavy downpours, in combination with the pattern of continued development in coastal areas, are increasing damage to U.S. infrastructure including roads, buildings, and industrial facilities, and are also increasing risks to ports and coastal military installations. Flooding along rivers, lakes, and in cities following heavy downpours, prolonged rains, and rapid melting of snowpack is exceeding the limits of flood protection infrastructure designed for historical conditions. Extreme heat is damaging transportation infrastructure such as roads, rail lines, and airport runways. 7. Water quality and water supply reliability are jeopardized by climate change in a variety of ways that affect ecosystems and livelihoods. Surface and groundwater supplies in some regions are already stressed by increasing demand for water as well as declining runoff and groundwater recharge. In some regions, particularly the southern part of the country and the Caribbean and Pacific Islands, climate change is increasing the likelihood of water shortages and competition for water among its many uses. Water quality is diminishing in many areas, particularly due to increasing sediment and contaminant concentrations after heavy downpours. 8. Climate disruptions to agriculture have been increasing and are projected to become more severe over this century. Some areas are already experiencing climate-related disruptions, particularly due to extreme weather events. While some U.S. regions and some types of agricultural production will be relatively resilient to climate change over the next 25 years or so, others will increasingly suffer from stresses due to extreme heat, drought, disease, and heavy downpours. From mid-century on, climate change is projected to have more negative impacts on crops and livestock across the country – a trend that could diminish the security of our food supply. 9. Climate change poses particular threats to Indigenous Peoples’ health, well-being, and ways of life. Chronic stresses such as extreme poverty are being exacerbated by climate change impacts such as reduced access to traditional foods, decreased water quality, and increasing exposure to health and safety hazards. 
In parts of Alaska, Louisiana, the Pacific Islands, and other coastal locations, climate change impacts (through erosion and inundation) are so severe that some communities are already relocating from historical homelands to which their traditions and cultural identities are tied. Particularly in Alaska, the rapid pace of temperature rise, ice and snow melt, and permafrost thaw are significantly affecting critical infrastructure and traditional livelihoods. 10. Ecosystems and the benefits they provide to society are being affected by climate change. The capacity of ecosystems to buffer the impacts of extreme events like fires, floods, and severe storms is being overwhelmed. Climate change impacts on biodiversity are already being observed in alteration of the timing of critical biological events such as spring bud burst and substantial range shifts of many species. In the longer term, there is an increased risk of species extinction. These changes have social, cultural, and economic effects. Events such as droughts, floods, wildfires, and pest outbreaks associated with climate change (for example, bark beetles in the West) are already disrupting ecosystems. These changes limit the capacity of ecosystems, such as forests, barrier beaches, and wetlands, to continue to play important roles in reducing the impacts of these extreme events on infrastructure, human communities, and other valued resources. 11. Ocean waters are becoming warmer and more acidic, broadly affecting ocean circulation, chemistry, ecosystems, and marine life. More acidic waters inhibit the formation of shells, skeletons, and coral reefs. Warmer waters harm coral reefs and alter the distribution, abundance, and productivity of many marine species. The rising temperature and changing chemistry of ocean water combine with other stresses, such as overfishing and coastal and marine pollution, to alter marine-based food production and harm fishing communities. 12. Planning for adaptation (to address and prepare for impacts) and mitigation (to reduce future climate change, for example by cutting emissions) is becoming more widespread, but current implementation efforts are insufficient to avoid increasingly negative social, environmental, and economic consequences. Actions to reduce emissions, increase carbon uptake, adapt to a changing climate, and increase resilience to impacts that are unavoidable can improve public health, economic development, ecosystem protection, and quality of life. Chapter 2. Our Changing Climate 1. Global climate is changing and this change is apparent across a wide range of observations. The global warming of the past 50 years is primarily due to human activities. 2. Global climate is projected to continue to change over this century and beyond. The magnitude of climate change beyond the next few decades depends primarily on the amount of heat-trapping gases emitted globally, and how sensitive the Earth’s climate is to those emissions. 3. U.S. average temperature has increased by 1.3°F to 1.9°F since record keeping began in 1895; most of this increase has occurred since about 1970. The most recent decade was the nation’s warmest on record. Temperatures in the United States are expected to continue to rise. Because human-induced warming is superimposed on a naturally varying climate, the temperature rise has not been, and will not be, uniform or smooth across the country or over time. 4. 
The length of the frost-free season (and the corresponding growing season) has been increasing nationally since the 1980s, with the largest increases occurring in the western United States, affecting ecosystems and agriculture. Across the United States, the growing season is projected to continue to lengthen. 5. Average U.S. precipitation has increased since 1900, but some areas have had increases greater than the national average, and some areas have had decreases. More winter and spring precipitation is projected for the northern United States, and less for the Southwest, over this century. 6. Heavy downpours are increasing nationally, especially over the last three to five decades. Largest increases are in the Midwest and Northeast. Increases in the frequency and intensity of extreme precipitation events are projected for all U.S. regions. 7. There have been changes in some types of extreme weather events over the last several decades. Heat waves have become more frequent and intense, especially in the West. Cold waves have become less frequent and intense across the nation. There have been regional trends in floods and droughts. Droughts in the Southwest and heat waves everywhere are projected to become more intense, and cold waves less intense everywhere. 8. The intensity, frequency, and duration of North Atlantic hurricanes, as well as the frequency of the strongest (Category 4 and 5) hurricanes, have all increased since the early 1980s. The relative contributions of human and natural causes to these increases are still uncertain. Hurricane-associated storm intensity and rainfall rates are projected to increase as the climate continues to warm. 9. Winter storms have increased in frequency and intensity since the 1950s, and their tracks have shifted northward over the United States. Other trends in severe storms, including the intensity and frequency of tornadoes, hail, and damaging thunderstorm winds, are uncertain and are being studied intensively. 10. Global sea level has risen by about 8 inches since reliable record keeping began in 1880. It is projected to rise another 1 to 4 feet by 2100. 11. Rising temperatures are reducing ice volume and surface extent on land, lakes, and sea. This loss of ice is expected to continue. The Arctic Ocean is expected to become essentially ice free in summer before mid-century. 12. The oceans are currently absorbing about a quarter of the carbon dioxide emitted to the atmosphere annually and are becoming more acidic as a result, leading to concerns about intensifying impacts on marine ecosystems. * * * Thanks to Nick Sundt at WWF for this compilation.
Infant Dies in New E. Coli Outbreak
Ongoing Toxic E. Coli O145 Outbreak: 14 Cases in 6 States, Cause Unknown
Preventing E. coli Illness
STEC outbreaks generally begin in livestock. Although contaminated meat may carry the bugs, outbreaks often are caused by contaminated produce. E. coli is spread by fecal-oral contact. People often unknowingly eat microscopic amounts of human or animal feces. Obvious sources of contact are eating contaminated food (such as undercooked contaminated meat or unpasteurized milk), contact with cattle, or changing diapers. Less obvious sources of contact include eating food prepared by people who did not properly wash their hands after using the toilet. It's probably impossible to fully ensure that you never encounter STEC bacteria. But here's the CDC's advice on how to limit your risk:
- Wash your hands thoroughly after using the bathroom or changing diapers and before preparing or eating food. Wash your hands after contact with animals or their environments (at farms, petting zoos, fairs, even your own backyard).
- Cook meats thoroughly. Ground beef and meat that has been needle-tenderized should be cooked to a temperature of at least 160 degrees Fahrenheit. It's best to use a thermometer, as color is not a very reliable indicator of "doneness."
- Avoid raw milk, unpasteurized dairy products, and unpasteurized juices (like fresh apple cider).
- Avoid swallowing water when swimming or playing in lakes, ponds, streams, swimming pools, and backyard "kiddie" pools.
- Prevent cross-contamination in food preparation areas by thoroughly washing hands, counters, cutting boards, and utensils after they touch raw meat.
Most STEC outbreaks have been caused by an E. coli strain called O157. The USDA forbids the sale of any beef trimmings containing any amount of this bug. But there's been a growing realization that six other STEC strains have been causing U.S. outbreaks. Coincidentally, the USDA this week began a zero-tolerance policy for six other STEC strains -- including E. coli O145. The outbreak was first reported by ABC News.
The nucleus of an atom has a positive charge equal to Ze (where Z is the atomic number), and Z electrons (each with charge –e) surround the nucleus. These electrons occupy atomic orbitals, each of which can hold only two electrons. The orbitals are grouped into shells (n = 1, 2, 3, ...) around the nucleus. Within each shell, the orbitals are assembled into subshells, labelled s, p, d, f, ... (one subshell for each value of the angular momentum quantum number l = 0, 1, ..., n − 1). For example, the n = 3 shell has three subshells: 3s, 3p, and 3d.
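To make the shell and subshell bookkeeping concrete, here is a minimal Python sketch (my own illustration, not part of the original text) that lists the subshells of a given shell and their electron capacities, using the standard rules that a subshell with quantum number l contains 2l + 1 orbitals and each orbital holds two electrons. The function and variable names are invented for this example.

```python
# Enumerate the subshells of shell n and their electron capacities.
# Rules assumed: l runs from 0 to n-1; a subshell with quantum number l
# has 2l + 1 orbitals; each orbital holds at most 2 electrons.

SUBSHELL_LETTERS = "spdfghik"  # conventional labels for l = 0, 1, 2, ...

def subshells(n):
    """Return a list of (label, max_electrons) for shell n."""
    result = []
    for l in range(n):
        label = f"{n}{SUBSHELL_LETTERS[l]}"
        max_electrons = 2 * (2 * l + 1)
        result.append((label, max_electrons))
    return result

if __name__ == "__main__":
    for n in (1, 2, 3):
        shell = subshells(n)
        total = sum(capacity for _, capacity in shell)
        print(f"n = {n}: {shell}  (shell capacity = {total} = 2n^2)")
```

Running this for n = 3 prints the three subshells 3s, 3p, and 3d with capacities 2, 6, and 10, for a shell total of 18 electrons.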
Die Cutting, Embossing, Debossing, Foil Stamping and Finishing Techniques
Purpose: To gain a greater understanding of padding, foil stamping, die cutting and embossing
- Foil Stamping – a special kind of printing procedure in which heat, pressure, and a metallic foil are used to create shiny designs and graphics on various materials.
- Die cutting – a process used in many different industries to cut a thin flat material (in our case, paper) into a specific shape using a steel cutting die. It can be used to punch out a decorative shape or pattern to incorporate within a larger piece, or it can be used to create the main shape of an object by cutting the entire sheet of paper in a distinct, designed way.
- Embossing – any of several techniques for creating a raised pattern on a material.
- Debossing – the opposite of embossing. With debossing, the imprinted design causes depressions in the material, leaving a depressed (debossed) imprint of the image on the paper or cardstock.
- Scoring – the process of making a crease in paper so it will fold more easily.
- Padding – joining a specific number of individual sheets or forms together by applying a padding compound along one side of the stack. Each stack usually has a chipboard backer to provide stability to the pad.
In foil stamping, hot dies with raised images press a thin plastic film carrying colored pigments against the paper. The pigments transfer from the backing film to the paper, bonding under heat and pressure. Foil comes in over 200 colors and is available in dull, pastel, clear, matte and patterned finishes. For best results, avoid designs with fine lines and intricate shapes.
Die makers use thin metal strips with sharp edges to make dies for cutting. After shaping the strips into patterns, they mount them on a wooden base. A press forces the cutting edge into the paper. After cutting, rubber pads push the paper away from the die, allowing the sheet to continue through the press.
Embossing and debossing each require two matched dies, one of which is heated. Pressing paper between the dies creates the image. These techniques give the best results on text and uncoated cover papers and poor results on coated papers.
Padding – Almost every printer can make pads from stacks of loose sheets, the simplest binding operation. For most printers it's a hand operation with brush and glue pot.
Assignment: Create your own pad of paper with a cover. You decide your dimensions, use cardstock to create your cover design, score your cardstock at the right width dimensions, and create your own glued pad of paper (Elmer's glue). You should have a minimum of 24 sheets inside the pad.
Submission files:
Cover Design – 388_CHAO_C_npcover
Interior Pages – 388_CHAO_C_nppages
Photograph of Final Piece – 388_CHAO_C_nppictures
Grading:
_____/20 Commercial Usability/Design
_____/10 Scores – Measurements
_____/10 Details/Complexity and Neatness
_____/10 Mock Up
_____/15 Photograph of Final Piece
- The final choice of heat-sink materials depends on several material properties such as cost, electrical conductivity, thermal expansion coefficient, and outgassing.
- A bipolar junction transistor is provided that includes an intrinsic collector region of first conductivity type in a semiconductor substrate.
- These ions, which include aluminum and silicon dioxide, are chosen for their relative conductivity and form the actual pathways along which current flows.
- Now they have shown a specific way in which the molecule's shape affects its conductivity.
- At the same time, the single-channel conductance is proportional to the buffer conductivity in a wide range of salt concentrations.
- Another significant aspect of this breakthrough is the fact that only one electron from the atom is needed to turn molecular conductivity on or off.
- These mobile charges should make the surfaces conductive, but past experiments have failed to show any conductivity.
- So, semiconductors and conducting polymers have much lower conductivity than metals.
- This cooling results in a measurable change in electrical conductivity of the sensor.
- The upshot is that the ratio of thermal to electrical conductivity depends primarily on the square of the thermal speed.
- The percentage of total salts was calculated from conductivity readings using conversions as suggested by Jackson.
- This calculated conductivity is higher than the measured conductivity since it does not include the inter-conduit resistance.
- One way to prevent flashover is to add some conductivity to the surface of the insulator, so charge can bleed away before it builds up.
- These conductive fibers alter their conductivity in response to stretching, temperature, or moisture.
- Thanks to the high copper content the conductivity of the material is significantly higher than conventional conductive compounds.
- This value is higher than the activation energy of electrolyte conductivity, but comparable to that of other ion channels.
- As more salts are dissolved in the water, however, its conductivity increases and currents begin to flow.
- Measured data may include electrical properties of the volume of interest such as conductivity and dielectric constant.
- The electrical conductivity of a material is defined as the ratio of the current per unit cross-sectional area to the electric field producing the current.
- As expected, the relative change of the effective conductivity is larger for higher volume fractions and higher membrane conductivity.
Marginal Cost (MC): The marginal cost of an additional unit of output is the cost of the additional inputs required to make that output. More formally, the marginal cost is the derivative of total production costs with respect to the level of output.
Marginal cost and average cost can differ greatly. For example, assume it costs $1000 to produce 100 units and $1020 to produce 101 units. The average cost per unit is $10, but the marginal cost of the 101st unit is $20.
The EconModel applications Perfect Competition and Monopoly emphasize the roles of average cost and marginal cost curves. The short movie Derive a Supply Curve (40 seconds) shows an excerpt from the Perfect Competition presentation that derives a supply curve from profit-maximizing behavior and a marginal cost curve.
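As a minimal illustration of these definitions, the Python sketch below (mine, not part of the original text) computes average and marginal cost as a discrete approximation and reproduces the $1,000 / $1,020 example. The function names and the cost schedule are invented for the example; only the two data points come from the text.

```python
# Average cost vs. marginal cost for a hypothetical total-cost schedule.
# Only the two points from the text ($1000 for 100 units, $1020 for 101 units)
# are taken from the article; everything else is illustrative.

def average_cost(total_cost, q):
    return total_cost(q) / q

def marginal_cost(total_cost, q, dq=1):
    # Discrete approximation of dC/dq: the cost of producing one more unit.
    return (total_cost(q + dq) - total_cost(q)) / dq

def example_total_cost(q):
    # Hypothetical schedule chosen so that C(100) = 1000 and C(101) = 1020.
    costs = {100: 1000.0, 101: 1020.0}
    return costs[q]

if __name__ == "__main__":
    print("Average cost at 100 units:", average_cost(example_total_cost, 100))    # 10.0
    print("Marginal cost of 101st unit:", marginal_cost(example_total_cost, 100)) # 20.0
```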
Fertilization and Pregnancy
The testes of the male reproductive system produce millions of spermatozoa (or sperm cells) daily as the male gametes. Spermatozoa are flagellated haploid cells that carry half of the father's genome to the mother's reproductive system to fertilize ova. Sperm slowly develop and mature as they travel away from the testes through the long tubes of the epididymis and vas deferens and collect at the ejaculatory duct. Several exocrine glands in the male reproductive apparatus produce the liquid portion of semen that is mixed with sperm during ejaculation. During sexual intercourse, the penis enters the vagina of the female reproductive tract and semen is released into the vagina through ejaculation. The sperm next swim from the vagina through the cervix, uterus, and Fallopian tubes until they encounter the ovum. Semen provides protection to the sperm cells, yet the vast majority of sperm never make it to the Fallopian tubes. Only the healthiest sperm can reach the ovum and attempt to fertilize it. Once sperm and egg meet in the Fallopian tube, the actual process of fertilization can begin. Sperm must pass through several layers in the exterior of the egg cell, and often several sperm attempt to penetrate the egg cell at once, racing to be the first to enter the egg. The first sperm to enter the egg triggers a reaction that prevents all other sperm from passing through the plasma membrane of the egg cell, ensuring proper fertilization. Each haploid gamete – spermatozoon and ovum – contains half of the genetic material necessary to produce a baby. The sperm delivers its DNA to the ovum as soon as it penetrates the plasma membrane. Once the genetic material is combined, the fertilized egg contains a full complement of DNA to produce a new, genetically unique baby. The fertilized egg is now known as a zygote and the process of pregnancy begins. Pregnancy is the process of growth and development of a zygote into a fetus within the mother's body. The zygote begins its development in the Fallopian tube immediately after fertilization, dividing from a single cell to a 32-cell blastocyst in its first week. During this first week the zygote slowly moves toward the uterus and embeds itself into the uterine wall just after reaching the blastocyst stage. The blastocyst continues developing into an embryo and forms tissues that become the umbilical cord and placenta. During the embryonic stage, which lasts around 10 weeks, the tiny embryo's cells differentiate and begin to form all of its major organ systems. External structures, such as eyes, ears, and fingers, begin to develop and are visible. Internal structures, such as the early brain and heart, also form at this time. Despite all of the complex structures that form, the embryo remains less than about an inch (2.5 cm) long throughout most of the embryonic stage. The placenta and umbilical cord play vital roles in an embryo's survival during pregnancy. Blood from the embryo circulates through the umbilical arteries in the umbilical cord to the placenta and enters into tiny structures known as chorionic villi. Maternal blood circulates in the uterine lining on the outside of the villi. Many chemicals can diffuse through the chorionic villi, allowing the embryo to obtain vital oxygen and nutrients from the mother while waste products and carbon dioxide are transferred to maternal blood for disposal.
The freshly oxygenated and nutrient-rich blood in the placenta travels back to the embryo through the umbilical veins to the embryonic heart, which pumps the blood through the embryo’s body. After 10 weeks of development, the embryo reaches the fetal stage of development. During the fetal stage the fetus continues to develop its organ systems while growing from a little over an inch in length to around 18-22 inches by the end of pregnancy. The uterus begins to distend to accommodate the growing fetus at this stage, while the placenta grows to provide sustenance to the fetus. By the end of the fetal stage the fetus has developed all of its organ systems to be viable outside of the mother’s womb. Prepared by Tim Taylor, Anatomy and Physiology Instructor
Orthoptera are an order of insects. The order contains grasshoppers, katydids, and crickets. "Ortho" means "straight", so "Orthoptera" means "straight wings". In these insects the front wings, called tegmina, are stiff, straight, and not used for flying. The back wings are membranous and are folded like a fan under the front wings when the insect is not flying. Many species use their wings to make sounds, which we usually call "chirping" noises.
Life cycle
Orthopteran insects start life in an egg case. After three weeks – or when spring comes – the tiny nymphs come out from the egg case. After four or five molts, they have wings. This shows that they are adults and are ready to reproduce.
Sound production
Their sounds are produced using file-like edges on their wings or wing veins. In some groups the wing edges are rubbed together; in others the hind limb is rubbed against a wing edge. Each sound is unique to the species, and its function is identification for mating purposes. It is the males which make the sound, and the females move towards it.
Diet
Members of the order Orthoptera chew their food using their mandibles (jaws). Crickets are omnivores, which means they eat both plants and animals. Actually, they will eat almost anything: vegetables, cereal, and even their own mate if they are hungry enough. Katydids are mostly herbivores (plant eaters), though they will eat their own mate, too, if they are hungry enough. They also enjoy eating aphids and other small, slow-moving creatures. Grasshoppers almost always eat plants like grass, wheat bran, and lettuce, and they can be terrible crop pests.
Jumping
It's quite hard to catch a member of Orthoptera because they jump so well. They have amazing legs – a grasshopper can jump 20 times farther than the length of its body. Their back legs are very large and long. These long, strong legs give these insects their great ability to jump.
Suborders
Crickets, katydids, and grasshoppers are in the same order, Orthoptera, because they are alike in many ways. However, there are several things that make them different from each other. First, their colors are usually quite different. Since grasshoppers like to move during the daytime, their colors are similar to grass and bright flowers, making them usually green, light brown, or multicolored (lots of different colors at once). Crickets move at night, so they are dark. Katydids like to spend a lot of time on leaves, so they are often leaf-colored, and their wings can look like leaves. Their wings can have the same vein patterns as leaves, and they often have little brown spots just like the ones that might be found on a leaf. Second, their behaviors are different. Grasshoppers like being active in the day; crickets, at night; katydids, in the late afternoon and evening. Third, their antennae are different. Katydids and crickets usually have long, thin antennae, while grasshoppers usually have short, thick ones. Of course, this rule is not perfect – for instance, even though grasshoppers usually have short, thick antennae, the long-horned grasshopper has long, thin antennae like a cricket. Because of this, it is still sometimes hard to tell the members of this order apart.
There are three parts to this resource:
Introduction to enlargements: This is an ideal resource for introducing pupils to the concept of enlargement. Pupils and teachers can use the resource to investigate the scale factor between the object and its enlargement.
Enlargements without a centre of enlargement: This resource is perfect for demonstrating the enlargement of shapes without using a centre of enlargement. The resource enables the user to enlarge a given shape by dragging the vertices of a polygon.
Enlargements with a centre of enlargement: This is a very powerful resource that is perfect for exploring the enlargement of shapes using a centre of enlargement. The flexibility of the resource enables the user to create questions to suit their needs.
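For teachers who want a quick way to generate or check answers for enlargement questions, here is a small Python sketch (my own illustration, not part of the interactive resource described above). It enlarges a polygon about a centre of enlargement: each image point is found by scaling the vector from the centre to the vertex by the scale factor. The function name and example coordinates are invented.

```python
# Enlarge a polygon about a centre of enlargement.
# For scale factor k and centre C, each vertex P maps to C + k * (P - C).

def enlarge(vertices, centre, k):
    cx, cy = centre
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in vertices]

if __name__ == "__main__":
    triangle = [(2, 1), (4, 1), (2, 4)]
    centre = (0, 0)
    # Scale factor 3 about the origin: side lengths triple, area becomes 9 times larger.
    print(enlarge(triangle, centre, 3))   # [(6, 3), (12, 3), (6, 12)]
    # A negative scale factor places the image on the opposite side of the centre.
    print(enlarge(triangle, centre, -2))  # [(-4, -2), (-8, -2), (-4, -8)]
```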
Just for pre-school-aged children, Pre-school Prehistorics includes hands-on songs and dinosaur patterns to make learning about dinosaurs fun and exciting. Students recreate the classic fairytale of Goldilocks and the Three Bears with a twist using the flannelboard patterns. Students enhance their critical thinking skills by finding what is different in the Visual Discrimination Activity, then reinforce the idea of time by discussing which objects are out of place in the dinosaur illustrations. Students then get their hands dirty as they recreate a dinosaur's environment in the sandbox with the Sandbox Dinosaur Landscape activity. Finally, students create their own dinosaur fossils with Fossil Making. This Primary Studies lesson provides a teacher and student section with illustrations, puzzles, a big book, hands-on activities, and a certificate to create a well-rounded lesson plan.
Phase Shift Method for Distance Measurements
Laser rangefinders are often based on the phase shift method, a technique for measuring distances in the following way. A laser beam with sinusoidally modulated optical power is sent to a target. Some reflected light (from diffuse or specular reflections) is monitored, and the phase of its power modulation is compared with that of the sent light. The phase shift obtained is 2π times the time of flight times the modulation frequency. This shows that higher modulation frequencies can result in a higher spatial resolution. Although the phase shift is directly proportional to the time of flight, the term time-of-flight method should be reserved for cases where the delay time is measured more directly.
As with an optical interferometer, the phase shift method involves an ambiguity in the measured distance, because the phase varies periodically with increasing distance. However, the period is much larger than in an interferometer, since the modulation frequency is much lower than the optical frequency. Also, the ambiguity can easily be removed, e.g. by combining measurement results obtained with different modulation frequencies. Compared with interferometers, devices based on the phase shift technique are less accurate, but they allow unambiguous measurements over larger distances. They are also suitable for targets with diffuse reflection from a rough surface.
The power modulation may be obtained with an electro-optic modulator acting on a continuous-wave laser beam. Modulation frequencies of many megahertz or even multiple gigahertz are easily obtained. A resonant type of modulator can be operated with a relatively low input voltage, but only over a small range of modulation frequencies, making the removal of the mentioned ambiguity more difficult. A special kind of power modulation is achieved by using a mode-locked laser. Advantages are the high modulation frequency (allowing for high accuracy) and, for passive mode locking, the fact that no optical modulator is required.
The use of a laser beam makes it possible to realize a laser radar, where an image is formed by scanning the laser beam direction in two dimensions. However, imaging systems can also be made with one or several current-modulated light-emitting diodes (LEDs) illuminating the whole object area. The spatial resolution is then obtained via imaging detection. There are optoelectronic chips with two-dimensional sensor arrays that are able to measure the phase shift for each pixel.
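As a worked illustration of the relations described above, the following Python sketch (my own, with made-up numbers) converts a measured phase shift into a distance and shows the unambiguous range set by the modulation frequency. Since the round-trip phase is Δφ = 2π·f·t and the round-trip path is c·t, the target distance is d = c·Δφ/(4π·f), unique only up to multiples of c/(2f).

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase(phase_shift_rad, mod_freq_hz):
    """Distance implied by a measured phase shift (ignoring the 2*pi ambiguity)."""
    time_of_flight = phase_shift_rad / (2 * math.pi * mod_freq_hz)
    return C * time_of_flight / 2          # divide by 2: the light travels out and back

def ambiguity_range(mod_freq_hz):
    """Distance interval after which the measured phase repeats."""
    return C / (2 * mod_freq_hz)

if __name__ == "__main__":
    f = 10e6                     # 10 MHz modulation frequency (illustrative value)
    phi = math.radians(90.0)     # example measured phase shift of 90 degrees
    print(f"Distance: {distance_from_phase(phi, f):.3f} m")    # about 3.75 m
    print(f"Unambiguous range: {ambiguity_range(f):.3f} m")    # about 15 m
```

Raising the modulation frequency shrinks both the distance per degree of phase (better resolution) and the unambiguous range, which is why combining results at two or more modulation frequencies is a common way to resolve the ambiguity.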
Cements have been known and used as binding materials for almost two millennia, although the chemical composition of cement has varied considerably over that time. Today, ordinary Portland cement is most commonly used for mortar or concrete production. Manufactured cement is supplied in powdered form and acts as a binder once it reacts chemically with water. The resulting cement paste is widely used in construction operations, for example as a mortar for laying bricks and in concrete manufacturing.
The viscosity of cement paste is termed its consistency. The thickness of the paste is a function of the quantity of water added to the powdered cement: the greater the quantity added, the less viscous the paste, and vice versa. Hence, a predetermined quantity of water must be added to produce a cement paste of normal or standard consistency.
Normal or Standard Consistency
The term 'standard consistency' or 'normal consistency' refers to the viscosity of cement paste obtained under standard test conditions in the standard VICAT apparatus when the plunger penetrates by 10 mm. The significance of determining the normal consistency is that it tells us the actual amount of water, expressed as a percentage of cement, that will produce the requisite paste as per standard recommendations.
Fig. 1: Cement paste with varying consistency
Importance of Determination of Normal Consistency of Cement
Ascertaining the standard consistency is important because it serves as a prerequisite for a number of other laboratory experiments carried out on cement. These tests include the following:
- Soundness of cement
- Initial and final setting time of cement
- Compressive strength of cement
- Tensile strength of cement
For all the above-mentioned tests, it is imperative to first determine the normal consistency of cement.
Laboratory Determination of Standard Consistency
Experimentally, the standard consistency of hydraulic cements is found using the VICAT apparatus. It consists of a conical ring in which the prepared cement paste is placed. The ring has a tapered cross-section, with a larger internal diameter at the bottom than at the top; the exact diameters and height are specified in the standard. Prior to filling, the mold is placed on a square glass plate that is non-absorptive (non-porous). Being non-porous, the plate does not absorb any water from the paste once it is placed over it. Above the conical ring is a plunger of the length and diameter prescribed by the standard. The plunger is lowered and allowed to fall freely by rotating the release pin at the side of the scale. The scale indicates the depth to which the plunger penetrates the test sample. The test setup is shown in the figure below.
Fig. 2: VICAT Apparatus
Standard Followed: ASTM C 187-16
1) Temperature and Humidity
The standard specifies the permissible temperature of the surrounding air and of the mixing water. Additionally, the relative humidity of the air in the vicinity of the experimental setup must not be less than 50%. The standard controls the test temperature so that moisture or humidity in the air does not commence the hydration of the cement prior to the addition of the mixing water.
2) Recommended Cement Weight
The ASTM standard specifies a cement sample weight that is ample for the test to be carried out.
3) Mixing Duration
The mixing duration, as specified by the code, is the time from the instant the cement and water come into contact to the placement of the paste in the apparatus; the standard places a limit on this time.
Procedure
- Take a trial amount of water as some percentage of the cement weight and mix the cement and water thoroughly for the recommended time to make a paste.
- Place the conical ring on the glass plate and fill it with the prepared paste. Level the top surface of the ring using a trowel.
- Bring the scale pointer to zero or note down the initial reading on the scale.
- Using the set screws, lower the plunger so that it just comes into contact with the paste, and tighten the release pin at this point. Keep in mind that the standard emphasizes that, once mixing is completed, the time taken to adjust the scale pointer and plunger using the screws should not exceed 30 seconds.
- Immediately release the plunger by turning the release pin. The plunger starts to settle into the cement paste.
- Leave the plunger in the paste for 30 seconds, after which note down the penetration of the plunger from the scale.
- Repeat the above procedure for varying percentages of water. The percentage value that results in 10 mm penetration of the plunger gives the standard consistency of the cement paste.
Fig. 3: Test setup for determination of standard consistency of cement
Calculations
Amount of Cement = W1 (grams)
Trial Amount of Water = x percent of W1
The water percentage (x) that results in 10 mm penetration of the plunger under standard test conditions is the standard consistency of the cement (a short worked example of this calculation follows the precautions below).
Precautions
- The test assembly should be protected from any kind of vibration, because a vibrating force will assist the penetration of the plunger and give misleading results.
- Care should be exercised while mixing the cement and water, and in no case shall the mixing time exceed the limit laid down by the standard; delaying the mixing process allows the paste to begin setting, which is undesirable.
- Achieving standard consistency is essential because adding less water than the normal consistency requires will not properly initiate the chemical reaction between cement and water, while adding more water will reduce the strength of the cement paste.
- As the fineness of the cement increases, the amount of water required also increases, because fine particles have more surface area than coarse ones.
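To turn the calculation described above into something reusable, here is a small Python sketch (my own illustration, not part of the referenced standard). It records trial water percentages with their measured plunger penetrations and estimates, by linear interpolation between the two nearest trials, the water percentage giving the 10 mm penetration that defines standard consistency. The trial numbers below are hypothetical.

```python
# Estimate standard consistency from Vicat trial data.
# Each trial: (water as % of cement weight, plunger penetration in mm).
# The water percentage giving 10 mm penetration is taken as the standard
# consistency; here it is estimated by linear interpolation between trials.

def standard_consistency(trials, target_mm=10.0):
    trials = sorted(trials)  # sort by water percentage
    for (w1, p1), (w2, p2) in zip(trials, trials[1:]):
        lo, hi = sorted((p1, p2))
        if lo <= target_mm <= hi and p1 != p2:
            return w1 + (target_mm - p1) * (w2 - w1) / (p2 - p1)
    raise ValueError("Target penetration not bracketed by the trials")

if __name__ == "__main__":
    # Hypothetical trial results (water %, penetration in mm):
    trials = [(26.0, 4.0), (28.0, 7.5), (30.0, 12.0), (32.0, 22.0)]
    print(f"Estimated standard consistency: {standard_consistency(trials):.1f} % water")
```

With the made-up trials above, the estimate comes out at roughly 29% water; in practice one would confirm the result with a further trial at that percentage.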
What is Enrichment? We consider that enrichment is the enhancement of learning experiences, broadening and deepening the learning opportunities pupils have. At Holy Trinity CE School there are a number of ways in which we enrich the curriculum pupils have, for example: - the study of a subject beyond the curriculum as defined by the requirements of the DFE - alternative and creative approaches to topics, including open-ended investigations and discussions - connections between topics or themes - the cross-curricular study of different interlinked subjects - visits out of school by pupils or visits to the school by outside speakers - visits to other schools to take part in activities with other schools e.g Opera workshop Why is Enrichment important? - Enrichment provides greater freedom for exploring different topics or subjects and by increasing intellectual satisfaction contributes to a pupil’s well-being. - When central to curriculum planning, enrichment helps to ensure that the learning experience has clear links between such activities, general classroom activities and topics/themes. - Enrichment ensures that all pupils are able to show their abilities in different ways and provides opportunities for them to broaden and deepen their understanding. - Enrichment provides ways in which pupils can develop their understanding at their own level, make connections and develop their learning skills in different ways. We run a number of enrichment activities through the year. These activities are varied and support the learning in different year groups/subjects.
Central Question: What can cause ecosystems to change over time?
Guiding Thought Question: Have students think about and write down a list of things that can cause an ecosystem to change over time. Once they have a list of about five things, have them pair up with a table partner to share and compare lists, adding items from the other person's list as necessary. When they are done, have them classify the factors as either natural or anthropogenic.
Today's Learning Objective: Students will work through the signs of a pristine ecosystem and the types of factors that can cause ecosystems to change by creating a scientific model that represents the interactions of a number of factors in an ecosystem.
As far as traditional electrical lighting goes, there's not a whole lot of variety in power supply: It comes from the grid. When you flip a switch to turn on your bedroom light, electrons start moving from the wall outlet into the conductive metal components of the lamp. Electrons flow through those components to complete a circuit, causing a bulb to light up (for complete details, see How Light Bulbs Work). Alternative power sources are on the rise, though, and lighting is no exception. You'll find wind-powered lamps, like the streetlamp from Dutch design company Demakersvan, which has a sailcloth turbine that generates electricity in windy conditions. The Woods Solar Powered EZ-Tent uses roof-mounted solar panels to power strings of LEDs inside the tent when the sun goes down. Philips combines the two power sources in its prototype Light Blossom streetlamp, which gets electricity from solar panels when it's sunny and from a top-mounted wind turbine when it's not. And let's not forget the oldest power source of all: human labor. Devices like the Dynamo kinetic flashlight generate light when the user pumps a lever. Most of us are familiar with wind, solar and kinetic power and what they can do. But a device on display at last year's Milan Design Week has drawn attention to an energy source we don't often hear about: dirt. In this article, we'll find out how a soil lamp works and explore its applications. It's actually a pretty well-known way to generate electricity, having been first demonstrated in 1841. Today, there are at least two ways to create electricity using soil: In one, the soil basically acts as a medium for electron flow; in the other, the soil is actually creating the electrons. Let's start with the Soil Lamp displayed in Milan. The device uses dirt as part of the process you'd find at work in a regular old battery.
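To get a rough feel for the numbers involved, here is a back-of-the-envelope Python sketch (my own estimate, not taken from the article or from the Soil Lamp's designer) for a mud battery built from generic copper/zinc electrode cells wired in series. Each cell contributes well under a volt in practice, so several cells are typically needed to reach the forward voltage of a small LED; the per-cell voltages and LED voltages used below are assumed ballpark figures.

```python
# Rough sizing of a series "earth battery" for lighting a small indicator LED.
# ASSUMPTIONS (illustrative, not measured): a single copper/zinc mud cell
# delivers on the order of 0.5-0.9 V; a typical indicator LED needs roughly
# 1.8-3.0 V to light. Real soil cells also vary with moisture and salinity.

import math

def cells_needed(led_forward_voltage, volts_per_cell):
    """Minimum number of series cells to reach the LED forward voltage."""
    return math.ceil(led_forward_voltage / volts_per_cell)

if __name__ == "__main__":
    for led_v in (1.8, 2.0, 3.0):          # red-ish through blue/white LEDs
        for cell_v in (0.5, 0.7, 0.9):     # assumed per-cell voltages
            n = cells_needed(led_v, cell_v)
            print(f"LED {led_v} V, {cell_v} V/cell -> about {n} cells in series")
```

The point of the estimate is simply that a single cup of mud is not enough: a practical soil lamp chains several cells together, which is why designs like the one shown in Milan use multiple compartments of earth.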
Love and hatred have been in a fight for supremacy since the beginning of time. The forces of good and evil coexist and are part of our human nature. Philosophers have long been debating over the nature of good and the essence of evil. What’s right and what’s wrong? And why can’t we always be kind to each other? Why do some people feed on others suffering? One could fill entire shelves of books trying to answer these questions. Bullying reminds us that the fight between good and evil isn’t over yet. To do this topic justice in this short article would be impossible, but we can discuss in short the many facets of bullying. You’ve probably experienced it at some stage in your life and your children might face it, too. In fact, one out of four children gets bullied. What do we understand by bullying? Bullying is a term used to describe a wide range of behaviors that hurt someone both physically and mentally. It’s intentional - a bully deliberately acts in a hostile and aggressive way. Bullying is a phenomenon that happens at school, at work, and even on the streets of your neighborhood. Most of the time, those who bully don’t realize the negative impact of their actions. Words can hurt deeply. In Alabama, McKenzie Adams, a 9-year-old girl, committed suicide in December, last year, after being harassed because of her skin color. The strong and popular ones often oppress the weak and vulnerable. They feel empowered to do so, and they think they can make themselves look good by humiliating others. Bullies don’t have real friends, either. They surround themselves with like-minded people and outgoing personalities who don’t dare to condemn their acts. They’re hard to catch and don’t realize that they’ve taken everything too far until they are exposed. Bullying can be: - Physical - hitting, pushing, pinching and other forms of violence - Emotional - threatening, stealing or hiding personal belongings, excluding someone from certain events or activities, marginalization - Verbal - spreading rumors, name-calling, sarcasm, bad jokes - Sexual - inappropriate physical contact, remarks regarding sexual orientation, comments about someone’s looks, sexting, and spreading images without the consent of the person involved. - Racist - racial taunts, allusions, graffiti - Online (cyberbullying) - using the Internet and social media to embarrass someone. Bullying has immediate consequences The sad part of bullying is that the victims often live this drama inside of them and never share the fact that they are bullied. They carry a burden too heavy for their fragile souls and often end up in depression, isolation, and shyness. Persistent bullying erodes self-esteem and prevents a person from reaching full academic potential. Bullying seems to be at its peak during adolescence and less of a problem in the last year of high school. In most neighborhoods, most bullying occurs in schools. However, harassment and acts of violence can happen on the streets as well. Your child may be stalked and intimidated by a gang or by your neighbor’s child. If your neighborhood doesn’t feel safe, don’t hesitate to find a real estate agent on RealEstateAgent.com and move to a safer area. No matter where you move, though, the teenage years remain a huge challenge. All the corners of the Internet are filled with pieces of advice for parents of teenagers, but few parents have time to read them. While children don’t come with a user’s manual, let’s not forget that every child is unique and reacts differently to bullying. 
However, no one should reach the point where death is the only way out. Bullying must be discouraged at all costs. But for this to happen, teachers and parents must work as a team. Signs that your child is bullied Children don’t often give their parents too many details when they are asked how it was at school. Your child probably gives you always the same answer, too. That’s why parents have to do some guesswork and look for clues like Sherlock Holmes to find if there’s something wrong. A child who constantly experiences violent verbal or physical attacks at school: - Becomes frightened and may not even want to go to school anymore - Doesn’t want to walk to school or changes the regular route - Experiences an academic decline and gets more and more low grades - Comes home starving because someone stole his or her money. Bullied children may even avoid eating lunch because the presence of the bully makes them feel uncomfortable - Comes back with clothes or books destroyed - Shows up with unexplained bruises, scratches or cuts - Changes his or her behavior How to put an end to bullying? Communication is the main weapon in combating bullying, although self-defense techniques may help in extreme situations. Encourage your child to talk openly about what’s happening at school and on the way to and from school. Also, teachers must watch for the above signs, too. When more students report a bully or a gang of bullies, the school staff must immediately take measures to help them integrate and give up their destructive behavior. These groups can be broken by changing their breaks and lunch schemes or by assigning the members to different classes. Talking to their parents may also change their behavior in the future. Bullying incidents must be taken seriously by the school and by the parents. At home, don’t let the mean words thrown at your child bring him or her down. The worst thing is to take everything a bully says to heart. These people are attention suckers and don’t deserve to be taken seriously. Remind your child that there are plenty of people who know how beautiful he or she is. Besides, you’ve probably experienced more than one bullying incident and managed to get over it. Do you remember how? Then, come up with a strategy and stick to it. That could make your child feel a lot more comfortable and safe. We may never understand what bullied children go through. They take their struggles home and let fear and despair live inside them. They often can’t see the light at the end of the tunnel. But we are here to find the closest emergency exit for them. So, let us be the light they need in the middle of their turmoil. They will smile again!
Warmer surface temperatures over just a few months in the Antarctic can splinter an ice shelf and prime it for a major collapse, NASA and university scientists report in the latest issue of the Journal of Glaciology.Using satellite images of tell-tale melt water on the ice surface and a sophisticated computer simulation of the motions and forces within an ice shelf, the scientists demonstrated that added pressure from surface water filling crevasses can crack the ice entirely through. The process can be expected to become more widespread if Antarctic summer temperatures increase.This true-color image from Landsat 7, acquired on February 21, 2000, shows pools of melt water on the surface of the Larsen Ice Shelf, and drifting icebergs that have split from the shelf. The upper image is an overview of the shelf’s edge, while the lower image is displayed at full resolution of 30 meters (98 feet) per pixel. The labeled pond in the lower image measures roughly 1.6 by 1.6 km (1.0 x 1.0 miles). Sensor: Landsat 7/ETM+. Data Start Date: 2/21/00.
As we explore the faith-science relationship, part of the challenge is helping students see connections they did not suspect existed. Building a sense of how things are connected is a major part of learning. Making a leap in thinking to connect things we thought were different can be a powerful learning experience. FASTly activities look for ways to help students see various kinds of connections. For example, we want them to see: - that doing science and caring for others can be connected through the ways we interact as we study, the ways we relate our learning to the wider community and the uses to which we put scientific understanding. - that scientific disciplines and virtues can be connected as we exercise humility, honesty, and kindness during learning and in the practice of science. - that the “sacred” and the “secular” are connected as we explore what faith has to do with daily workings in the science classroom and the big questions raised by modern science. - that learning and life are connected as we consider how what is learned about science illuminates our choices and impacts our lifestyles outside the classroom. Making these connections helps guard against reducing science to facts and learning information. It helps resist the temptation to reduce our understanding of the world to knowledge of material processes. Making connections offers a chance to glimpse the coherence of God’s world and how it hangs together as a whole with all the parts working together. FASTly activities offer a variety of strategies for making connections. These include: - having students investigate multiple perspectives on the same phenomena by, for example, comparing scientific and non-scientific accounts of the same scenario. Activity: Photo Scrapbook - providing tasks that ask students to relate their science learning to choices outside the classroom, such as relating learning about sustainable chemistry to choices about waste disposal. Activity: Labs, Scarcity and Choices - working to help students see connections across different curriculum areas by, for instance, having them consider in Bible class how the early church handled major disagreements and what that says to present day faith and science disputes. Activity: The Jerusalem Council - asking students to sort collections of disparate information and reflect on what their decisions reveal about what they see as related and unrelated. Activity: Adam and Eve: Full Maturity or Child-like Innocence? We invite you to explore the links and consider what further strategies you can use to help students see rich connections, instead of fragmentation, in their learning.
How do you shorten an axle shaft? For the measuring and cutting procedure, see the video “How to Measure & Cut an Axle | Differential Tech Tips” on YouTube. What material are car axles made of? Axles are typically made from SAE grade 41xx steel or SAE grade 10xx steel. SAE grade 41xx steel is commonly known as “chrome-molybdenum steel” (or “chrome-moly”), while SAE grade 10xx steel is known as “carbon steel”. What is axle configuration? An axle configuration record specifies the axles on a wheeled asset and the positions for the tires or wheels on the asset. … How does a wheel and axle work? In addition to reducing friction, a wheel and axle can also serve as a force multiplier, according to Science Quest from Wiley. If a wheel is attached to an axle, and a force is used to turn the wheel, the rotational force, or torque, on the axle is much greater than the force applied to the rim of the wheel. What are 3 examples of a wheel and axle? Some examples of the wheel and axle include a door knob, a screwdriver, an egg beater, a water wheel, the steering wheel of an automobile, and the crank used to raise a bucket of water from a well. Does a wheel and axle multiply force? Mechanical advantage: the thin rod which needs to be turned is called the axle, and the wider object fixed to the axle, on which we apply force, is called the wheel. The larger the ratio of wheel radius to axle radius, the greater the multiplication of force (torque) created or distance achieved. What will happen if the same size of axles is used? Answer: the larger the ratio, the greater the multiplication of force (torque) created or distance achieved. By varying the radii of the axle and/or wheel, any amount of mechanical advantage may be gained. Can a differential wheel and axle be used to lift a heavy load? Answer: it can amplify force; a small force applied to the periphery of the large wheel can move a larger load attached to the axle. … How can you increase the mechanical advantage of a wheel and axle? The mechanical advantage of the wheel and axle can be found by taking the ratio of the radius of the wheel over the radius of the axle. The larger the mechanical advantage of the machine, the greater the force that the machine can output. What is the purpose of a wheel and axle? The wheel and axle is a basic machine component for amplifying force. In its earliest form it was probably used for raising weights or water buckets from wells. Its principle of operation is demonstrated by the large and small gears attached to the same shaft, as shown at A in the illustration. Is a car a wheel and axle? A smaller cylinder-shaped wheel, called the axle, connects the wheels on a car. When the axle is turned, the wheels turn together, allowing the car to move forward or backward. A wheel and axle is the simple machine at work in steering wheels, doorknobs, windmills, and bicycle wheels. Is a clock a wheel and axle? Clocks use an axle and a wheel to tell time, but grandfather clocks also use levers, pulleys, wedges, screws, axles and wheels as a complex machine. What are examples of a wheel and axle? Common wheel and axle examples: a bicycle, car tires, a Ferris wheel, an electric fan, an analog clock, a winch. Is a hammer a wheel and axle? In science, a machine is anything that makes a force bigger, so a hammer is a machine. There are five main types of simple machine: levers, wheels and axles (which count as one), pulleys, ramps and wedges (which also count as one), and screws. Which tool is a wheel and axle? Force & Simple Machines.
The wheel and axle is another tool you might be very familiar with, agent. With this simple tool, a wheel is attached to a rod, a shaft or an axle that can be turned. Usually the rod is grooved or threaded to control the gear or wheel.
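To put some numbers on the ratio described above, here is a minimal sketch in Python of the ideal (frictionless) mechanical advantage of a wheel and axle; the radii and forces used are made-up illustrations, not measurements of any real tool.

```python
# Minimal sketch: ideal mechanical advantage of a wheel and axle.
# The radii and forces below are made-up illustrations, not real measurements.

def ideal_mechanical_advantage(wheel_radius: float, axle_radius: float) -> float:
    """Ratio of wheel radius to axle radius (ignoring friction)."""
    return wheel_radius / axle_radius

def load_force(effort: float, wheel_radius: float, axle_radius: float) -> float:
    """Force available at the axle when the effort is applied at the wheel rim.

    Torque balance: effort * wheel_radius = load * axle_radius.
    """
    return effort * ideal_mechanical_advantage(wheel_radius, axle_radius)

if __name__ == "__main__":
    # e.g. a well windlass with a 30 cm crank (wheel) turning a 5 cm drum (axle)
    ma = ideal_mechanical_advantage(wheel_radius=0.30, axle_radius=0.05)
    print(f"Ideal mechanical advantage: {ma:.1f}")  # 6.0
    print(f"50 N of effort can hold roughly {load_force(50, 0.30, 0.05):.0f} N of load")  # 300 N
```

Note that if the wheel and the axle had the same radius, the ratio would be 1 and there would be no force multiplication, which is the point behind the question above about using same-sized axles.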
Linear Regression Model – This article is about understanding linear regression and the statistical terms that come with it. What is Regression Analysis? Regression is an attempt to determine the relationship between one dependent variable and a series of other independent variables. Regression analysis is a form of predictive modelling technique which investigates the relationship between a dependent (target) variable and one or more independent (predictor) variables. This technique is used for forecasting, time series modelling and finding the causal effect relationship between variables. For example, the relationship between rash driving and the number of road accidents caused by a driver is best studied through regression. Why do we use Regression Analysis? As mentioned above, regression analysis estimates the relationship between two or more variables. Let’s understand this with an easy example: say you want to estimate growth in sales of a company based on current economic conditions. You have recent company data which indicates that the growth in sales is around two and a half times the growth in the economy. Using this insight, we can predict future sales of the company based on current and past information. There are multiple benefits of using regression analysis. They are as follows: It indicates the significant relationships between the dependent variable and the independent variables. It indicates the strength of impact of multiple independent variables on a dependent variable. Regression analysis also allows us to compare the effects of variables measured on different scales, such as the effect of price changes and the number of promotional activities. These benefits help market researchers, data analysts and data scientists to evaluate and select the best set of variables for building predictive models. There are various kinds of regression techniques available to make predictions. These techniques are mostly driven by three metrics (the number of independent variables, the type of dependent variable and the shape of the regression line). What is Linear Regression? Linear Regression is a supervised machine learning model in which the model finds the best-fit straight line between the independent and dependent variables, i.e. it finds the linear relationship between the dependent and independent variables. - Equation of simple linear regression: y = b0 + b1*x, where b0 is the intercept, b1 is the coefficient or slope, x is the independent variable and y is the dependent variable. - Equation of multiple linear regression: y = b0 + b1*x1 + b2*x2 + … + bn*xn, where b0 is the intercept, b1, b2, b3, …, bn are the coefficients or slopes of the independent variables x1, x2, x3, …, xn and y is the dependent variable. Residual/Error = Actual value - Predicted value. Sum of Residuals/Errors = Sum(Actual - Predicted). Sum of Squared Errors = Sum((Actual - Predicted)^2); this sum of squared errors is the quantity that ordinary least squares minimizes. Application of Linear Regression: Real-world examples of linear regression models - Businesses often use linear regression to understand the relationship between advertising spending and revenue. - Medical researchers often use linear regression to understand the relationship between drug dosage and blood pressure of patients. - Agricultural scientists often use linear regression to measure the effect of fertilizer and water on crop yields. - Data scientists for professional sports teams often use linear regression to measure the effect that different training regimens have on player performance. - Stock predictions: A lot of businesses use linear regression models to predict how stocks will perform in the future.
This is done by analyzing past data on stock prices and trends to identify patterns. - Predicting consumer behavior: Businesses can use linear regression to predict things like how much a customer is likely to spend. This can be helpful for things like targeted marketing and product development. For example, Walmart uses linear regression to predict what products will be popular in different regions of the country. Assumptions of Linear Regression: Linearity: The dependent variable Y should be linearly related to the independent variables. This assumption can be checked by plotting a scatter plot between the variables. Normality: The error terms (residuals) should be normally distributed; in practice this is often loosely stated as the X and Y variables being normally distributed. Histograms, KDE plots and Q-Q plots can be used to check the normality assumption. Homoscedasticity: The variance of the error terms should be constant, i.e. the spread of residuals should be constant for all values of X. This assumption can be checked by plotting a residual plot. If the assumption is violated, the points will form a funnel shape; otherwise the spread will be roughly constant. Independence/No Multicollinearity: The independent variables should not be strongly correlated with each other. To check this assumption, we can use a correlation matrix or VIF scores. A common rule of thumb is that a VIF score greater than 5 indicates the variables are highly correlated. No Autocorrelation: The error terms should be independent of each other. Autocorrelation can be tested using the Durbin-Watson test, whose null hypothesis assumes that there is no autocorrelation. The value of the test statistic lies between 0 and 4; a value close to 2 indicates no autocorrelation.
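As a concrete illustration of the equations and error terms above, here is a minimal sketch in Python (using only NumPy) that fits a simple linear regression by ordinary least squares and reports the residuals, sum of squared errors and R-squared; the data and variable names are invented purely for illustration.

```python
# Minimal sketch of simple linear regression (y = b0 + b1*x) fitted by ordinary least squares.
# The data below are made up purely for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])     # independent variable
y = np.array([2.1, 4.3, 6.2, 8.4, 10.1, 12.2])   # dependent variable

# Design matrix [1, x] so the first fitted coefficient is the intercept b0
X = np.column_stack([np.ones_like(x), x])
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = b0 + b1 * x              # predicted values
residuals = y - y_hat            # residual/error = actual - predicted
sse = np.sum(residuals ** 2)     # sum of squared errors (the quantity OLS minimizes)
r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)

print(f"intercept b0 = {b0:.3f}, slope b1 = {b1:.3f}")
print(f"sum of squared errors = {sse:.4f}, R^2 = {r2:.4f}")
```

Plotting the residuals against x is then the quickest way to eyeball the linearity and homoscedasticity assumptions listed above.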
Second Grade Feature: The Passion Project What is your passion? How can you share your learning with others? In the spring, second graders explore a topic of personal interest. The Passion Project launches with students brainstorming possible topics and questions they have about their passions. Once a topic is chosen, students are taught how to be researchers both with print and digital resources. After research is complete, students are encouraged to think about how they want to share their learning with others. A student who likes to build might create a model. A student who loves to write might write a book. Students who are artists might create a visual piece. Students who want to “try on” technology can create a video with iMovie or a slideshow that teaches others about their passion. This project integrates essential reading comprehension skills, note-taking skills and nonfiction writing skills. This project is also supported by our art, music and media specialists to help students bring their creative ideas to life. The project concludes with a Celebration of Learning. Parents and other Lower School students come to view the projects, and the second graders get to be experts in their field and share their passions with others.
- 1 What can I teach my kindergartener at home? - 2 What are the basics for kindergarten? - 3 What Sight words should a kindergartener know? - 4 What are the 5 methods of teaching? - 5 Can a kindergartener write? - 6 How do I know if my child is ready for kindergarten? - 7 What should a 5 year old know academically? - 8 Should kindergarteners learn sight words? - 9 Do Kindergarteners need to know sight words? - 10 How many sight words should a kindergartener know? - 11 What are the 2 main types of teaching methods? - 12 What are the 10 methods of teaching? - 13 What is best method of teaching? What can I teach my kindergartener at home? Home schooling kindergarten language arts - Listen attentively. - Follow instructions and repeat spoken directions. - Engage in discussions with others. - Be able to wait to speak. - Write first and last name. - Recognize and write all of the letters of the alphabet in upper and lowercase forms. What are the basics for kindergarten? Skills Often Expected at the Beginning of Kindergarten - Identify some letters of the alphabet (Letter Town is a classic book that teaches the ABCs.) - Grip a pencil, crayon, or marker correctly (with the thumb and forefinger supporting the tip) - Write first name using upper- and lowercase letters, if possible. What Sight words should a kindergartener know? The Kindergarten Sight Words are: all, am, are, at, ate, be, black, brown, but, came, did, do, eat, four, get, good, have, he, into, like, must, new, no, now, on, our, out, please, pretty, ran, ride, saw, say, she, so, soon, that, there, they, this, too, under, want, was, well, went, what, white, who, will, with, yes. What are the 5 methods of teaching? Teacher-Centered Methods of Instruction - Direct Instruction (Low Tech) - Flipped Classrooms (High Tech) - Kinesthetic Learning (Low Tech) - Differentiated Instruction (Low Tech) - Inquiry-based Learning (High Tech) - Expeditionary Learning (High Tech) - Personalized Learning (High Tech) - Game-based Learning (High Tech) Can a kindergartener write? Kindergarteners are often enthusiastic writers and they will weave writing activities into their play. Children at this age can read their own writing and should be encouraged to read aloud! How do I know if my child is ready for kindergarten? Children are likely to have some readiness in: - Demonstrating a curiosity or interest in learning new things. - Being able to explore new things through their senses. - Taking turns and cooperating with peers. - Speaking with and listening to peers and adults. - Following instructions. - Communicating how they’re feeling. What should a 5 year old know academically? Correctly name at least four colors and three shapes. Recognize some letters and possibly write their name. Better understand the concept of time and the order of daily activities, like breakfast in the morning, lunch in the afternoon, and dinner at night. Should kindergarteners learn sight words? Children learn to hear and say the sounds of the alphabet and then how to blend those sounds to make words. Most sight words cannot be decoded or sounded out, and they are also difficult to represent with a picture. As a result, children must learn to recognize these words automatically, or at first sight. Do Kindergarteners need to know sight words? It suggests that by the end of kindergarten, children should recognize some words by sight including a few very common ones (the, I, my, you, is, are). 
Typically, the first 100 high-frequency words aren’t mastered by most kids until Thanksgiving or so (and that is with considerable effort). How many sight words should a kindergartener know? Acquiring sight words is an important part of learning how to read. By the end of kindergarten, most children are able to identify approximately 50 sight words. There are many fun ways to help your child learn sight words. What are the 2 main types of teaching methods? There are different types of teaching methods, which can be categorised into a few broad types: teacher-centred methods, learner-centred methods, content-focused methods and interactive/participative methods. What are the 10 methods of teaching? Here are some of the top ideas for you to use. - Modeling. After telling students what to do, it’s important to show them exactly how to do it. - Addressing Mistakes. - Providing Feedback. - Cooperative Learning. - Experiential Learning. - Student-Led Classroom. - Class Discussion. - Inquiry-Guided Instruction. What is the best method of teaching? Pose thought-provoking questions which inspire your students to think for themselves and become more independent learners. Encouraging students to ask questions and investigate their own ideas helps improve their problem-solving skills and helps them gain a deeper understanding of academic concepts.
Learn how to transport, store, and re-heat restaurant leftovers safely. Join a rehearsal of the musical LARD as the teens learn how to handle restaurant leftovers safely.Related Activities: Causes of Foodborne Illness Find out what causes foodborne illness by completing a research project. Food Safety Advertising Campaign Create an advertising campaign to teach others about food safety. Susceptibility to Foodborne Illness Learn which foods are most susceptible to contracting foodborne diseases. Temperature and Germ Growth Experiment with potatoes to discover if temperature affects germ growth. The Importance of Refrigeration See how refrigeration affects food in this experiment.
Look at the earlobes of twenty (20) unrelated individuals. Count how many of them have “free” earlobes and how many have “attached” earlobes. In the picture below, the earlobe on the right is “free” and the earlobe on the left is “attached”. Record the number of people who have free earlobes and the number who have attached earlobes, and answer the following questions: What is the ratio of the number of individuals with free earlobes to the number with attached earlobes? What can you conclude from this ratio about the probability of the trait of free earlobes appearing in the human population? Focus your discussion on the ratio of free to attached earlobes and what it might tell you about how the trait is inherited. (Do NOT discuss the value or effect of attached versus free earlobes). Why is it important that the individuals in the survey were unrelated? How would a sample of related individuals affect the results? Complete the online Karyotyping Activity at: http://www.biology.arizona.edu/human_bio/activities/karyotyping/karyotyping.html List the karyotype notation and diagnosis for Patients A, B and C. Then answer the following questions. How is karyotyping related to meiosis? Which set of chromosomes appears to be most commonly involved in nondisjunction? What types of genetic abnormalities would not be able to be identified via karyotyping? Search the Internet for a genetic abnormality other than those that you identified above for Patients A, B and C that could be identified through karyotyping. Identify the website and briefly report on the abnormality, such as its rate of occurrence, effects and any potential treatments. Make sure to use your own words and provide correct citations.
Two trillion! That’s 2,000,000,000,000 galaxies. A trillion is a million times a million. For a long time astronomers have said that our observable universe has about 100 billion galaxies. Two trillion is 2,000 billion, or 20 times larger. It’s still smaller than the U.S. national debt at about $19 trillion, but those are just dollars, not galaxies. There are only about 7 billion people on Earth; two trillion is roughly 300 times that number. Why so many? Galaxies are not static. They evolve. So what’s the twist? Why were astronomers so wrong for so long? Well, it’s because now we can see galaxies much further away than before, thanks to the Hubble Space Telescope and other advanced telescopes. In fact we are now detecting many galaxies that were formed in the first billion years of the universe’s history. And the universe is about 13.8 billion years old. A team of astronomers from the University of Nottingham, the Leiden Observatory, and the University of Edinburgh have built extremely detailed 3D maps of distant galaxies in order to estimate their density. They have used data from the Hubble Space Telescope and various ground-based telescopes. When we look at distant galaxies we are also looking back into the past, since light travels at a finite speed. The researchers, led by Prof. Christopher Conselice at the University of Nottingham, found the density of galaxies when the universe was a few billion years old to be about 10 times higher than at present (after correcting for the expansion of the universe). He noted that “we are missing the vast majority of galaxies because they are very faint and far away”. These earlier galaxies were smaller and much less massive. Large galaxies today like the Milky Way and the Andromeda galaxy have masses of around a trillion times that of the Sun. The early galaxies were much more like the two dozen satellite galaxies found around the Milky Way, such as the Magellanic Clouds. The Large Magellanic Cloud has a mass of about 1% of our Milky Way Galaxy. Image credit: NASA (C141 flight) The main conclusion of the study? We know that galaxies undergo mergers. Apparently there have been many more mergers than previously assumed. Large galaxies such as our Milky Way have been formed by multiple successive mergers. NGC 3921 is actually two galaxies in the process of merging. Note the strange and twisted orientation of the spiral arms, and the appearance of two disk-like structures. Image credit: ESA/Hubble & NASA So while there were originally trillions of galaxies in the early universe, in today’s universe the number has been reduced by mergers to the order of hundreds of billions. Mergers will continue; in fact the Andromeda Galaxy is headed our way, and it’s twice as big as we are! https://www.ras.org.uk/news-and-press/2910-a-universe-of-two-trillion-galaxies – Royal Astronomical Society press release
The human ear is divided into three sections, the outer, middle and inner ear, and it plays an essential role in hearing. The outer ear consists of the pinna (auricle) that leads into the external auditory canal. It collects sound waves from a wide area and funnels the sound into the external ear passage. At the inner end of the outer ear is the tympanic membrane (eardrum). It is stretched across the end of the auditory canal, separating the outer ear from the middle ear. The middle ear contains small bones called ossicles: the malleus (hammer), the incus (anvil) and the stapes (stirrup). They transfer sound waves to the inner ear. Covering an opening into the inner ear is a membrane called the oval window. Below it is another membrane, called the round window, that stretches across its opening and adjoins the cochlea in the inner ear. The inner ear comprises a coiled structure called the cochlea, a snail-like spiral tube that contains the receptors for sound, together with the vestibular apparatus, which is associated with the sense of balance. The cochlear duct contains the organ of Corti, which contains the auditory receptor cells. The auditory nerve transmits sound signals to the brain. The cochlear implant is a prosthetic device, a part of which is surgically implanted inside the cochlea. Cochlear implants have been found to be beneficial for children and adults with severe to profound hearing loss or steeply sloping hearing loss who do not benefit adequately from hearing aids but have an intact auditory nerve. While a hearing aid provides amplified sound energy to the ear, the cochlear implant directly provides electrical stimulation to the nerve endings in the cochlea. A cochlear implant has an externally worn device, which includes the microphone, speech processor and transmitting coil, and an internal device, which is surgically implanted in the skull and cochlea. The internal device consists of the receiver-stimulator package and the electrodes. Before deciding whether a child or adult is a suitable candidate for a cochlear implant, detailed assessments have to be done. It is only after all these detailed assessments that candidacy can be determined. If the child or adult is deriving adequate benefit through hearing aids, a cochlear implant would not be necessary. It is obvious that for a cochlear implant program to be successful, a team of professionals is required. The team includes audiologists and speech-language pathologists, the ENT surgeon, the paediatrician in the case of children, the neurologist, the special educator, the psychologist and the social worker. Other professionals may be called in to give their input if required for a particular patient. The surgery for the cochlear implant may take about two and a half hours. The surgeon makes an incision behind the pinna and then surgically implants the electrodes inside the cochlea and the receiver-stimulator package in the mastoid bone behind the ear. The patient may have to remain in hospital for a day or two. The child’s head is bandaged for a few days after the operation. Hair over the site of the implant is shaved prior to the operation, and a feeling of numbness in the skin can also be expected for some months following the surgery. After the surgery, one has to wait for the scar to heal. This period is approximately 2 to 3 weeks. During this time, the implantee cannot hear through the implant because the external part is not coupled to it yet.
After this healing period is over, the implant and processor are programmed or mapped for the first time. This is called the ‘switch on’. During the switch on, the speech processor of the patient is connected to a computer which has the mapping software. The processor is worn by the patient and the transmitting coil is placed on the head so that it can communicate with the internal device. Mapping is done by an audiologist to decide how much current is required for the person to hear sounds well without causing any discomfort. After the mapping is complete, the device is switched on. The maps can be stored in locations on the speech processor and upgraded at each subsequent mapping session. In the first few months, the person will need frequent sessions of mapping to improve the signal which is being sent to the implant. It is also important to continue with training to improve listening skills. The roles of the special educator and the speech-language pathologist are very important in rehabilitation. While the cochlear implant helps persons with profound loss to hear soft sounds, the user still needs to be trained to understand the auditory signal. In the case of children, the focus is on developing listening skills as a means to developing age-appropriate speech and language skills. Cochlear implants do not make hearing normal. Hence post-implant rehabilitation is very important for successful outcomes. Outcomes also depend on many other factors such as the age at implantation, pre-implant hearing and speech-language status, and the motivation of the implantee and family. In the case of young children, intensive and long-term listening, speech and language therapy is required if they are to develop these skills through the use of a cochlear implant. Note that if a patient does not have their speech processor mapped regularly, and/or they do not participate in a regular auditory rehabilitation or speech/language therapy programme, then it is unlikely that they will obtain the maximum potential benefit from their cochlear implant. The cochlear implant, when used in conjunction with the speech processor, should provide hearing sensations. Speech and other sounds will not, however, sound the same as they do for a person with normal hearing. For post-linguistically deafened people (i.e. those who had normal or partial hearing during childhood), it is expected that this hearing should be of benefit in a range of everyday listening situations. For children, it is expected that this hearing should be of benefit in developing listening, speech and language skills. It is less likely that pre-linguistically deafened people will improve their ability to understand speech when listening with the cochlear implant without the aid of lipreading.
In physics, the focus is defined as the point at which rays of light converge or meet, or from which they appear to diverge after being refracted or reflected. In optics, it can mean a number of things, including: - A lens's focal point - The focal length of a lens or of a telescope's eyepiece - The condition in which an object viewed through an optical system is seen, i.e. either in or out of focus - A device used in an optical system to adjust its focal length, thereby making the image clearer. An image is in focus when the light from the object converges on almost a single point in the image, while an object out of focus has light that does not converge very well. In astronomy, interest in foci usually centres on telescope use. To get a good view of distant objects in the sky, a telescope must be properly focused. Focusing a telescope is easy, and comes instinctively even to beginners. Adjusting the focus of a telescope is usually done either by moving its eyepiece or its primary mirror. This can be done by turning the wheels of a geared system, called the rack and pinion, or by turning a screw knob on the back of the telescope.
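As a rough numerical illustration of being "in focus", here is a small Python sketch based on the standard thin-lens equation 1/f = 1/d_o + 1/d_i, which is a textbook relation rather than something stated in the article above; the focal length and object distances are made-up values.

```python
# Minimal sketch: where a thin converging lens brings an object into focus.
# Uses the standard thin-lens equation 1/f = 1/d_o + 1/d_i; all values are illustrative.

def image_distance(focal_length: float, object_distance: float) -> float:
    """Image distance d_i (same units as the inputs); assumes d_o > f."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

if __name__ == "__main__":
    f = 0.50  # a 500 mm focal length objective, for the sake of example
    for d_o in (5.0, 50.0, 5000.0):  # object distances in metres
        d_i_mm = image_distance(f, d_o) * 1000.0
        print(f"object at {d_o:>7.1f} m -> image comes to focus {d_i_mm:.1f} mm behind the lens")
```

The output shows that the more distant the object, the closer its image sits to the focal plane, which is why focusing a telescope on the sky only ever needs small movements of the eyepiece or primary mirror.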
During the Astro-1 astronomy mission of December 1990, Space Shuttle astronauts photographed this stunning view of the full moon rising above the Earth's limb. In the foreground, towering clouds of condensing water vapor mark the extent of the troposphere, the lowest layer of the planet's life-sustaining atmosphere. Strongly scattering blue sunlight, the upper atmospheric layer, the stratosphere, fades dramatically to the black background of space. Moon and clouds are strong visual elements of many well-known portraits of planet Earth, including Ansel Adams' famous "Moonrise, Hernandez, New Mexico", photographed in 1941.
Once a week in their Culture and Communication classes, our students in Grades 2 to 5 read a chapter of The Odyssey – the Ancient Greek tale of hero Odysseus and his men trying to get home after the Trojan War. The story itself is very entertaining but the students are also asked to use the story to think about some very big ideas. Every lesson we record the wisdom of our students and I thought I would share some of that wisdom with you: - ‘Different people think different things are beautiful so that means beauty is an opinion and we can’t judge it.’ - ‘How do you even know you are happy if you are not sad sometimes?’ - ‘It is hard to write rules for every little thing and it is sometimes hard to say what is right and what is wrong. It would be easier if there was only one rule – care about others.’ - ‘It is silly to punish someone who already knows they did something wrong and have learned not to do it again.’ On Reasoning With Others - ‘It is better to help people understand what you are saying than to just tell them that they are wrong.’ In thinking about these really big ideas, students practice critical and creative thinking techniques and develop a belief that everyone’s thinking has value – including their own. These techniques can be applied to any problem or any subject and their ability to apply these techniques is strengthened by their belief that their ideas are valuable. All cultures have big stories like The Odyssey that students can read and think about big ideas. The more big stories they read the better!
Around 1650 BCE (Before the Common Era), the last mammoth on earth died on Wrangel Island, a small outpost in the Chukchi Sea off northeastern Siberia. In the same general period of time, the Shang Dynasty ruled China, the Thirteenth Dynasty began in Egypt, (it will be 300 more years before Tutankhamen is born), the Hittites sack Babylon and the world’s first wooden bridge was constructed on Lake Zurich. Frequently ice-bound, Wrangel was visited by hunters before its mammoth population became extinct, evidenced by the various stone and ivory tools the hunters left behind. Possibly a part of a vast Inuit trading culture, the hunters did what hunters do – feed their families. They might, or might not, have known mammoths were becoming very scarce. Large mammals such as reindeer and sea lions were their prey – why not the last mammoth left on earth? Wrangel Island is now a sanctuary, a breeding ground for polar bears, with the highest density of dens in the world. Most mammoth and mastodon populations became extinct during the transition from the Late Pleistocene (126,000 BCE to 12,000 BCE) to the Holocene, the age of modern man beginning at 12,000 BCE. The word Holocene derives from the Greek words holos (whole or entire) and kainos (new) and the epoch encompasses all of the period from the last glaciations throughout the worldwide population growth of the human species and up to present day. Animals and plants have not evolved much during the Holocene, but have undergone major shifts in their distributions, due to the effects of man. It was also the period where the megafauna – mammoths, mastodons, giant bears and an entire range of predators – disappeared. During the transition from the Pleistocene to the Holocene, the Stone Age came to an end with the advent of flint tool manufacturing, the first usage of advanced darts and harpoons, and the development of a modern toolkit including oil lamps, fish hooks, ropes, and eyed needles – all perfect inventions that indicate successful hunting-gathering techniques. During this transitionary period, Neanderthals became extinct, clay figures were hardened in wood-fired ovens, the bow and arrow was invented, and cave painting appeared in Europe. By 12,000 BCE, Asiatic peoples crossed from Asia to North America, entered South America as far as the Andes, and domesticated llamas. Although climate change and human predation are considered the main causes for the extinction of the Pleistocene-Holocene megafauna, the spread of disease is also an extinction theory. Scientists who believe that the catastrophic drop in mammoth and mastodon populations was due to a hyper-disease are studying frozen samples from the mammoths of Wrangel, hoping to find evidence of an Ebola-like virus in their DNA. They theorize that the virus could have jumped from fleas to mammoths, which would account for an extinction rate that increased as humans spread across the planet. (Rats, which carry fleas, caravanned right alongside us, as we propagated our way across the continents.) To date, the DNA recovered from Wrangel is incomplete and fragmentary. But climate change and overhunting are the two main theories for the Holocene extinctions. During the last glaciations of the Pleistocene (19,000 to 20,000 BCE) most of the climate of the world was colder and drier. Deserts expanded, sea level dropped, and rain forests were splintered by savannah. Twelve thousand years ago, the last Ice Age ended. The vast grasslands of Siberia froze under permafrost. Trees marched north. 
Humans moved into the more temperate regions, following game. As habitat collapsed with the climate change, as mammoths and mastodons were confined to shrinking islands of refuge, they became easier to hunt. Females and the young, the easiest and most numerous, died first. And with the older females went the knowledge of where and when to migrate. With fewer and fewer females, birthrates could not keep up with death rates. Mammoths couldn’t pop up every spring like wildflowers. The longer it takes to find each elusive herd makes a difference on how long you stay and how much you overhunt it. But even as places of refuge became further and further apart, it was still possible to find them. And hunt again and again and again. There’s a lot of return for killing mammoths, much more so than gathering grass seeds, which were the most abundant food item of the Pleistocene. In the last twenty years tons of evidence has been unearthed, confirming the overhunting theory. Below Krakow’s Spadzista Street in Poland, 8,000 bones of 73 individuals were found in a 40 x 40-foot square area, a mammoth mausoleum. At a hunting campsite in Czechoslovakia, archeologists excavated more than a thousand mammoths. In areas where wood was scarce, such as the Ukraine, shelter frameworks were constructed of mammoth bones, with skulls for foundations and interlocking tusks as arches. One shelter, near Kiev, contained the bones of 95 mammoths. And in Dent, Colorado, the bones of a dozen mammoths are clustered at the bottom of a cliff. Scattered among them are stone spear points and large rocks. Mammoths and mastodons survived through the Pleistocene and into the age of man. Ten thousand years ago, North America resembled Africa, with huge migrating herds of elephant, camels, horses and antelope. Following alongside the herds were Saber-toothed cats, Dire wolves, Miracinonx (the American cheetah that looked like an elongated cougar), Giant short-faced bears and American lions. Then, in the blink of an evolutionary eye, within just eight thousand years, three-fourths of North America’s large mammals disappeared.
Now that you know the fascinating history of the barcode, 2 easy ways you can obtain barcodes, and the different kinds of barcodes out there, let’s get back to the basics: how do barcodes work? To get some answers, I checked out this great video from In One Lesson, and here’s what I found out! To begin with, let’s look at a sample barcode: Barcodes are scanned—and as a result, read—by a laser. Lasers read them like many countries read books: from left to right. In the brief time this occurs, the laser will scan through the 95 evenly-spaced barcode columns and see if the columns reflect either 1) a lot of laser light or 2) absolutely none. Each laser is connected to a computer, and computers can only recognize two numbers: 1s and 0s. Any columns that reflect a lot of light are read as a 0. Meanwhile, the columns that reflect no light whatsoever are read as 1. As the barcode is read, the computer will come up with a resulting number that is 95 digits long that corresponds to each column. (The number itself, of course, will only consists of 1s and 0s.) These 95 digits are split up into 15 different sections: - 12 sections that are utilized for the numbers you see at the bottom of the barcode - 3 sections that are used as guards where barcodes begin and end: guards on the far left, center guards, and guards on the far right All codes on the left side have an odd number of 1s, and the codes on the right side have an even number of 1s. The reason that the codes on the left and the codes on the right have different numbers is so that the computer can identify if the barcode is being read right-side up (from left to right) or upside down. If the computer identifies that the barcode is being read upside down, it can immediately just flip it around so it’s scanned correctly. In addition to these guards, the barcode contains other error checks. For instance, the digits on the left always begin with a 0 and end with a 1. The right-side digits are the exact opposite, beginning with a 1 and ending with a 0. Bottom Barcode Numbers Now, let’s look at the bottom numbers of the barcode—the numbers we can actually see with our naked eyes. See that 0 on the far-left side? That tells us what kind of barcode it is. 0 is your standard barcode. Other options might be 2 for weighed items, 3 for pharmacy, 5 for coupons, etc. The next set of numbers you see—12345—represents who the product manufacturer is. Lastly, the second set of numbers—67890—presents the product’s specific code. So now we come to that last number on the far-right side: the 5. This is a bit more complicated than the other numbers in this barcode and is called the modulo check character. To get the modulo check character, we need to follow a particular formula. To start with, you’ll need to assign positions to each number at the bottom of this barcode except the last one. 1=0, 2=1, 3=2, 4=5, 6=5, and so on. Next, you need to group all of the odd number positions together and all of the even number positions together. So: position 1+position 3=0+2. This becomes: (0+2+4+6+8+0) + (1+3+5+7+9) Add them up, and you get: (20) + (25) Then you need to multiply the first sum (the odd number position sum) by 3: 3(20) + 25 Which leads to: 60 + 25 = 85 Now, you need to figure out the next highest multiple of 10 after 85. In this case, it’s 90. Afterwards, subtract the total from 90. 90 – 85 = 5 Voila! We get 5—our modulo check character. 
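As a rough sketch of the arithmetic just walked through, here is a short Python function that computes the modulo check character for the sample digits; it assumes the usual left-to-right position numbering (position 1 holds the 0, position 2 holds the 1, position 3 holds the 2, and so on) and is only meant to mirror the worked example above, not any particular scanner's implementation.

```python
# Minimal sketch of the check-digit arithmetic described above.
# Positions are numbered 1..11 from the left, so position 1 holds 0, position 2 holds 1, etc.

def check_digit(first_eleven: str) -> int:
    digits = [int(ch) for ch in first_eleven]
    odd_sum = sum(digits[0::2])    # positions 1, 3, 5, 7, 9, 11
    even_sum = sum(digits[1::2])   # positions 2, 4, 6, 8, 10
    total = 3 * odd_sum + even_sum
    return (10 - total % 10) % 10  # distance up to the next multiple of 10

if __name__ == "__main__":
    print(check_digit("01234567890"))  # -> 5, matching the worked example above
```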
So the next time you print a barcode on a label or even scan a barcode yourself at the supermarket, keep in mind how surprisingly complicated a barcode is—and the intricate and complicated thought that went behind it. This is the same kind of thought that TSC takes to developing our printers, tools, and other technologies. Do you have any questions about barcodes, how they’re made, or barcode printers? Let us know!
The purpose of the titration is to determine the amount of acid it contains by measuring the number of mL of the standard NaOH need to neutralize it. The technique of titration will use to determine the concentration of solutions of acids and bases. Titration is a laboratory technique designed to use the reaction of two solutions to determine the concentration of one or the other. The titration with the HCl will be used to determine our NaOH solution concentration. The technique of titration can be applied to different materials and different types of reactions. In this experiment, an acid-base neutralization between two solutions is used. The standard solution is 0.1 M NaOH, and the unknown solution is the mixture resulting from dissolving the antacid in excess “stomach acid”. This solution is still acidic; The reaction is: NaOH + HCl à H2O + NaCl An indicator is used to show when the solution has been neutralized. In this case, the indicator is methyl red, an organic compound which is yellow if it is dissolved in a basic solution, and red in an acidic one. As the excess stomach acid is neutralized by the NaOH, the solution changes from red to yellow; the exact neutralization point is orange. At the neutralization point (or endpoint), the number of moles of OH-1 added exactly equals the number of moles of H+1 that remained after the antacid tablet “worked”. This number of moles of OH-1 can be found by multiplying the number of liters delivered by the burette times the molarity of the NaOH: liters NaOH sol’n x moles NaOH = moles NaOH 1 liter sol’n. The number of moles of HCl remaining after treatment with the antacid tablet is equal to the number of moles of NaOH calculated above. The total number of moles of HCl added to the flask can be found by multiplying the total volume of HCl used (in liters) by the Molarity of the HCl solution. The number of moles of HCl neutralized by the stomach acid is just the difference between these two values. moles HCl neutralized by tablet = total moles HCl in flask – moles HCl neutralized by NaOH The effectiveness of the antacid may be expressed in terms of mL of stomach acid neutralized, or moles of HCl neutralized, and may be calculated per tablet or per gram of antacid. See calculation page. Burette Use and Titration Technique: Typically, a special piece of glassware is used to measure out one or both of the solutions used in a titration. It is called a burette, and quickly and accurately measures the volume of the solutions delivered. A diagram of a burette, and instructions on how to use it are given below. Before use, the burette must be rinsed with the solution it is to contain. Close the stopcock and use a small beaker to pour about 10 mL of solution into the burette. Tip the burette sideways and rotate it until all of the inside surfaces are coated with solution. Then open the stopcock and allow the remaining solution to run out. Again close the stopcock, and pour enough solution into the burette to fill it above the “0” mark. With the burette clamped in a vertical position, open the stopcock and allow the liquid level to drop to “0” or below. Check the burette tip. It should not contain air bubbles! If it does, see your instructor. Adjust the burette so that the liquid surface is at eye level, and take the initial burette reading as shown below: M Na 2 CO 3 , 0.1 M HCl , 0.1 M NaOH and methyl orange indicator. Two flasks, burette, burette clamp, pipet, Bunsen burner 1. 
Determination of Concentration of HCl Solution: Firstly, 10 mL of 0.1 M Na2CO3 was placed in each of two flasks using a pipet. Then two drops of methyl orange indicator were added to each of the flasks and the solution was mixed with gentle shaking. The burette was filled with HCl. One of the Erlenmeyer flasks was placed under the tip of the burette. The HCl was run from the burette into the Na2CO3 solution until the end point was reached. Then the mixture was boiled for a few minutes. After the solution had cooled, a few drops of HCl were added until the end point was observed again. When the end point was obtained, the volume of HCl added was recorded. The solution of Na2CO3 in the second flask was titrated following exactly the same procedure. The molarity of HCl was calculated. 2. Quantitative Determination of NaOH: Firstly, 10 mL of 0.1 M NaOH was placed in each of two flasks using a pipet. Then two drops of methyl orange indicator were added to each of the flasks and the solution was titrated as we did with Na2CO3, but this time the solution was not boiled. The solution of NaOH in the second flask was titrated following exactly the same procedure. The molarity of the NaOH solution was calculated. The given unknown solution of NaOH was then titrated in the same way.
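For readers who want to see the mole arithmetic from the introduction laid out step by step, here is a small Python sketch of the antacid back-titration calculation; the molarities and volumes in it are placeholder numbers chosen for illustration, not results from this experiment.

```python
# Minimal sketch of the back-titration arithmetic described in the introduction.
# All molarities and volumes here are placeholder values, not real lab data.

def moles(molarity: float, volume_ml: float) -> float:
    """moles = molarity (mol/L) x volume (L)"""
    return molarity * volume_ml / 1000.0

# Hypothetical run: an antacid tablet is dissolved in 50.0 mL of 0.100 M HCl,
# then the excess acid is back-titrated with 18.6 mL of 0.100 M NaOH.
total_hcl = moles(0.100, 50.0)      # all the "stomach acid" added to the flask
excess_hcl = moles(0.100, 18.6)     # excess acid, equal to the moles of NaOH at the endpoint
neutralized_by_tablet = total_hcl - excess_hcl

print(f"HCl neutralized by the tablet: {neutralized_by_tablet:.5f} mol")
print(f"equivalent to {neutralized_by_tablet / 0.100 * 1000:.1f} mL of 0.100 M stomach acid")
```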
The majority of maps published since the first printed maps of the late fifteenth century have been part of bound volumes. Most maps have been issued as pages in an atlas, but maps have also appeared in geographies, histories, travels, encyclopedias, novels, and many other types of books. There is, however, a much smaller subset of maps that were not part of a bound volume, but were instead separately issued. These maps were issued on their own, either as loose sheets or as folding maps or as maps to be hung on a wall. Not only have far fewer separately issued maps been produced over time, but they have a much lower rate of survival than those maps published within a bound volume. The covers and heft of a book provide protection for the interior pages, books are easily stored on bookshelves, books are mostly used indoors, and most people tend to treat a bound volume with respect and care. In contrast, most separate maps have flimsier covers, if any; they were made to be used in the field, and they tend to have been treated as ephemeral tools rather than as precious items for a library. Many of these maps were stuck in a saddlebag or back pocket, taken on trips through difficult circumstances, roughly handled, and then tossed in a corner or thrown out when their usefulness was over. The modern equivalent is the difference between atlases, where people still keep their 1985 National Geographic atlas carefully on the bookshelf, and folding road maps, where many a just-issued AAA map lies torn or misfolded on the floor of an automobile. While separately issued maps are but a small drop in the ocean of extant printed maps, they are some of the most significant maps ever issued. These were the maps that helped to discover, settle, and develop new lands. These were the maps that were used by explorers, those moving to new places, and general travelers. Anyone heading out on the road would prefer a small, portable single map to a bulky book containing many more maps than were needed. An explorer pushing beyond previously surveyed country, a land speculator traveling in an unknown region, a military officer for whom there was no adequate official map, an emigrant arriving in the New World, a family visiting relatives across the country, a businessman on a trip to seek new markets, or anyone else traveling by horse, coach, canal or rail would have carried one or more of these separately issued maps. These maps were instrumental in the history of the places they show. Besides their historic import, these separately issued maps were also usually the most up-to-date maps of their time. Whereas absolutely current information was usually not crucial for an atlas map, which in many cases would be used well after its date of publication, a separately issued map had to be as current and accurate as possible, as these were the maps that people used for immediate, practical purposes. One often finds that a map in one year of an atlas is the same as that map from the previous year, despite changes that might have occurred in the place shown. In contrast, most separately issued maps were changed with each issue to show new information. The scarcity, historic import, and current detail of separately issued maps make them among the most desirable for map collectors.
We have grouped these separately issued maps into different categories as follows: From the sixteenth century maps have been made to be hung on the walls of offices, public buildings, schools, etc. These maps are usually large, they were usually attached to rollers at the top and bottom for hanging, and they often were varnished to protect them from wear, smoke and bugs. Go to page on wall maps 18th century saddlebag maps It became common in the eighteenth century to produce folding maps that could fit into a pocket, pouch or saddlebag. These so-called saddlebag maps were made by dissecting the map into smaller sections, mounting the sections on linen with small gaps between, then folding the map into a compact bundle. Saddlebag maps were usually made from maps that also appeared in atlases, but these folding examples would have been the copies of those maps that were used in the field. Go to page on saddlebag maps Mail coach era road maps The improvement of roads and carriages by the second half of the eighteenth century, led to the so-called "mail coach era" between 1780 and 1850. The increased number of travelers on the roads of Europe, many in postal coaches, spurred the development of folding road maps focusing on the transportation network. Postal routes, stops, distances along routes, and much other travel related information was provided on these highly detailed maps, which were usually produced in the same format as the earlier saddlebag maps Go to page on road maps of the mail coach era 19th century travel maps Linen backed, folding maps continued to be made into the nineteenth century, but early in the century a format of smaller folding maps was developed specifically for travelers. These were made using thin, but high grade banknote paper that could be folded without too much wear or tears. These maps were usually folded into leather or cloth covers and sometimes included text. These maps generally had brighter hand coloring than atlas maps in order to aid in reading them under adverse circumstances. These maps usually focused on the travel nexus of roads, railroads, and steamboat routes, and they often displayed information on schedules, distances, and sometimes included inset maps of cities or smaller regions. Go to page on travel maps Political case maps There has always been a strong demand for maps showing on-going political, social or military events, either local or from around the world. Most atlas maps were standard references that did not reflect specific events, so these often did not provide the information sought by those interested in those events. Magazines and newspapers would include relevant maps in their reporting, but these were usually crude, hastily made maps that did not satisfy many readers. Map publishers, therefore, issued special maps that focused on the regions involved, containing as accurate, contemporary and complete information as could be desired by those following the events in question. These maps were often issued in a folding format for ease of use and carrying, in which case they are often referred to as "case maps." Go to page on political maps Working sea charts Accurate and up-to-date marine charts are more than just convenient for sailors, they are a matter of life and death. Because of this there has always been a strong demand for good charts to be taken on ship board, whether on the Mediterranean Sea or on one of the world's oceans. 
Sea atlases, while used, tend to be unwieldy on board, and the process of updating an entire atlas is very time consuming. A single sheet chart was easier to use and it was possible for publishers to issue updated versions of a single chart more regularly. Thus it is that most of the charts actually used on ships over the years have been separately issued, single sheet charts. These were sometimes folded, but more often they were either backed on linen and rolled, or backed with a distinctive blue paper that gives them the name "blue backs." Go to page of working sea charts
Economics Essay”Analyse the factors affecting supply and the elasticity of supply.” IntroductionSupply refers to the quantity of a good or service which producers are willing and able to produce at a given price over a given period and offer for sale in a market. It is assumed that producers are motivated by profit maximisation, which leads them to supply their goods and services at higher prices rather than low prices. Supply can be affected by factors such as the price of the good or service, the price of other goods and services, the prices of the factors of production, and changes in technology. Elasticity of supply measures the degree of responsiveness of quantity supplied to a change in the price of the commodity. The elasticity of supply can be affected by production time, the ability to hold inventory, and the extent of excess capacity. According to economic theory, the supply of a good increases when its price rises, while the supply of a good decreases when its price decreases. SUPPLY CURVEFactors Affecting Supply Shifts in the Supply (Context)SHIFT (left figure 2 and right figure 3) ON SUPPLY CURVEShifts in the supply curve occur because of changes in supply conditions or the factors that affect supply. A shift to the right of the supply curve may result in greater production and a lower selling price in the market. An increase of supply is caused by a change in a factor affecting supply conditions, such as a fall in the prices of the factors of production or an improvement in technology. This means that not only are producers willing to sell more goods and services at the same prices, but they are also willing to sell the same quantities as before but at lower prices. A shift to the left of the curve may result in lower supply and a higher selling price in the market. A decrease in supply is caused by a factor affecting supply conditions such as a rise in production costs. This means that producers will be willing to sell less of the good or service at every possible price and only accept a higher price for any given quantity of the good or service. Changes in the Price of the Good or ServiceA producer will try to get as much mark up on a product or service and higher the price as they can because it creates greater the incentive to supply depending on supply costs. If the price of the product or service is high, then the supply would also increase as producers are always trying to maximise their profits, but if the price of the good or service is low, then the supply would also decrease as producers may not be able to meet target profit from higher supply which would cause a loss in the firm. However, if a cheaper product or service is receiving high sales and revenue, a firm may still try to increase their supply for that product or service. Although, the price would be lower, there would be an increased potential of making more profit as the sales revenue is relatively high. This is in the interest of a producer as it has the capability of providing a big return. The change of price of these products cause movements on the curve, as shown in Figure 4. Hence, the price of the good or service greatly impacts supply. MOVEMENT ON SUPPLY CURVEChanges in the Price of Other Goods and ServicesChanges in the prices of competing or alternative goods and services in the market will affect the profitability of producing an existing good or service. 
If a firm is producing a good or service whose price falls relative to another product or service, this may reduce the profitability of the firm's supply. It would be in the firm's interest to use more resources to produce another, higher-priced product or service that would in turn provide a more reliable return. In this case the supply of the existing good or service may decrease in the market, as shown in Figure 2. On the other hand, if the price of the existing good or service supplied rises relative to other goods and services, other firms may switch to its production. In this case, the supply of the existing good or service will increase in the market, as shown in Figure 3. For example, if the price of polo shirts increased relative to t-shirts, then a firm may start to produce polo shirts rather than t-shirts. Thus, the prices of other goods and services strongly influence supply.

Changes in the Prices of the Factors of Production
A rise in the prices of the four factors of production (i.e. wages, rent, interest, and profit) will raise production costs for the firm or industry and reduce profitability. The firm may respond by cutting back production and reducing supply in the market if it is not able to increase the price of its product or service to account for the increase in costs. For example, if wages increase in the business of producing hand-stitched furniture, the firm may decide to reduce its supply or try to increase the price of the product. This will cause a shift to the left, as shown in Figure 2. On the other hand, if the prices of the factors of production fall, leading to lower production costs for the firm, profitability is increased. The firm may react by increasing production and supply in the market, which may lead to lower market prices. This will cause a shift to the right, as shown in Figure 3. The quality and quantity of the resources available may also affect overall production costs and total supply. Ergo, changes in the prices of the factors of production affect supply.

Change in Technology
Technological advances may lead to lower production costs and new products which enable producers to increase supply. Usually, improvements in technology lead to increased efficiency and productivity of the factors of production, through improved production techniques, management structures, marketing techniques and a greater range of better quality products. This often leads to a shift to the right in the supply curve, as seen earlier in Figure 3, because the production process becomes more efficient and reduces costs (e.g. wages), allowing firms to use the money saved to increase their supply. Nonetheless, the use of outdated technology by a producer or business relative to its competitors may lead to lower efficiency and productivity, which can cause a reduction in supply in markets. In turn, this could lead to higher market prices. This would usually result in a shift to the left, as seen in Figure 2. Accordingly, changes in technology affect supply.

Factors Affecting Elasticity of Supply

The Price Elasticity of Supply (Context)
The price elasticity of supply refers to the responsiveness of the quantity supplied to a change in the price of a good or service. Supply is price elastic if the change in the quantity supplied is proportionately greater than the initial change in price. Supply is price inelastic if the change in the quantity supplied is proportionately less than the initial change in price.
Supply is unit elastic if the change in the quantity supplied is proportionately the same as the initial change in price.

Production Time Periods
In the market period, supply cannot be adjusted in response to changes in price: the quantity supplied to the market is fixed, since inventories or stocks of unsold goods are finite. In the short run, producers have both fixed and variable factors, but they are still able to adjust supply in response to small variations in price. Supply is more elastic in the short run than in the market period. In the long run, producers can vary their output levels in response to even small fluctuations in price. Supply is highly elastic in the long run, as all factors are variable. For example, a farmer could plant more apple trees to increase the potential supply of apples in the market in the future. Therefore, production time periods affect the elasticity of supply.

Inventories or the Ability to Hold Stocks
If producers can hold stocks of unsold goods or inventories, their supply will be more elastic than if they are not able to hold stocks which can supplement current levels of output or supply. If the prices of goods increase in the market, firms can respond by adding accumulated stocks to the supply already available, which will eventually result in an increase in supply. The ability to hold stocks depends on the size of the good and on its perishability. Size matters because the business has to store the good somewhere: if the goods are too big, the producer may have to place them in a warehouse rather than a small storage container. Perishability is also a vital element in storing goods. Perishability refers to the fact that certain goods cannot be produced and stockpiled long before consumption. For example, a cake is more perishable than an Apple phone, as it will go out of date faster and will no longer be fit for consumption. For the greatest efficiency, firms require their warehouses to be accessible and convenient. Therefore, the ability to hold stocks affects the elasticity of supply.

The Extent of Excess Capacity
Excess capacity refers to the difference between the actual and potential output of a firm with a given level of plant capacity. If a firm has excess capacity, it may be able to respond when prices suddenly rise by using its existing plant to increase production and supply. Its supply will be more elastic than that of firms operating at full or maximum capacity. Firms which are operating at full capacity cannot increase output in the market period or short run. They will either have to lower their prices to sell off less profitable goods early so that they can stock new goods, or they will have to build supplementary capacity in the long run to increase production. Their supply will be less elastic relative to firms with some excess capacity in the market period or short run. Therefore, the extent of excess capacity affects the elasticity of supply.

Conclusion
Ultimately, supply and the elasticity of supply are driven above all by the price of the good or service in the market. Supply can be affected by factors such as the price of the good or service, the price of other goods and services, the prices of the factors of production, and changes in technology. Elasticity of supply can be affected by production time, the ability to hold inventory, and the extent of excess capacity. Through the analysis of the factors affecting supply and the elasticity of supply, it is evident how producers must react to price fluctuations to match supply to the market.
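To make the elasticity definitions above concrete, here is a minimal sketch in Python of the standard calculation implied by the essay: the price elasticity of supply is the percentage change in quantity supplied divided by the percentage change in price. The function names and the numbers are invented for illustration and are not taken from the essay.

```python
def price_elasticity_of_supply(q_old, q_new, p_old, p_new):
    """Percentage change in quantity supplied divided by the
    percentage change in price (simple method, not the midpoint method).
    Assumes quantity and price move in the same direction, as the essay describes."""
    pct_change_quantity = (q_new - q_old) / q_old * 100
    pct_change_price = (p_new - p_old) / p_old * 100
    return pct_change_quantity / pct_change_price

def classify(pes):
    """Map a PES value onto the three categories defined in the essay."""
    if pes > 1:
        return "price elastic (quantity responds more than proportionately)"
    if pes < 1:
        return "price inelastic (quantity responds less than proportionately)"
    return "unit elastic (quantity responds exactly proportionately)"

# Illustrative example: price rises from $10 to $11 (a 10% increase)
# and quantity supplied rises from 200 to 230 units (a 15% increase).
pes = price_elasticity_of_supply(q_old=200, q_new=230, p_old=10, p_new=11)
print(round(pes, 2), "->", classify(pes))   # 1.5 -> price elastic
```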
NLP
by Paul Matthews

What is anchoring?
When two things commonly occur together, the appearance of one will bring the other to mind. Anchoring is a basic Pavlovian conditioning of the nervous system. Any time a person is in an intense state and a stimulus is applied, then the state and the stimulus will be linked neurologically. It is a form of associative learning. The associations we make through this kind of learning have a huge impact on the quality of our lives and how successful we are, and NLP has developed several tools and techniques to create and change these associations. Nobel laureate Ivan Pavlov (1849–1936) made the concept of classical conditioning famous with his experiments on dogs. He was studying the gastric processes in dogs and noticed that they started to salivate even before they received food. This led to a series of experiments in which he used a bell to call the dogs to their food. After only a few times, the bell produced salivation, even if there was no food. He called this association a conditioned response. In effect, a neutral stimulus, the bell, had become a conditioned stimulus that produced a response based on the conditioning. In NLP, this conditioned stimulus is called an anchor. NLP also uses the terms 'firing the anchor' or 'triggering the anchor', which mean applying the stimulus that has been conditioned. The conditioned stimulus, or anchor, can be any kind of sensory input in any combination of the representational systems. Here are some examples. Consider each one and notice your reaction or the way you would respond. The first example is a stormy sky. Imagine yourself looking out at a really stormy sky with that particular colour of the clouds. How do you feel? What is your reaction? Now go through the same process with each of the following:
- Visual: red traffic light (or an orange one), brand image (for example Nike, Coca Cola), beach and palm trees
- Auditory: ambulance siren, radio jingle, 'they are playing our tune!', sound of waves, your mother's voice
- Touch: stroking a cat, holding a bat or golf club, sun on your face
- Smell: perfumes, aviation fuel, smell of the gym
- Taste: chocolate, favourite drink, a specific food
Some of these will trigger a reaction; some won't. Think of some more triggers that apply to you. This may not be so easy to do, as the responses feel so natural that we often don't notice them even when they are occurring. The response when the anchor is triggered could be good or bad, useful or not. At one extreme, it could be a phobic reaction to the sight of something, such as a spider scuttling across the floor. At the other end of the scale, the response could be a sense of calm and peaceful happiness as the result of a certain smell, such as incense or new-mown grass. It just depends on the associations you have made. These are all learned responses. They were not there when you were born, and yet they are so obvious and widespread that you hardly notice them. They seem part of who you are. As we go through life, we all develop thousands of different anchors that associate one thing with another. There are some archetypal ones that are common to most of us, such as red for danger and green for go. An understanding of anchoring is vital if you want to make significant improvements in your life. An anchor will change your state. Your state will govern the kinds of behaviours you choose and thus your results.
You, and what you do, will be anchors for others and trigger responses in them, which in turn will trigger responses in you, thus determining the quality of the relationship.

How anchors are created
Anchors are created naturally all the time. Some last for our lifetimes, and others fade after only a short time. The longevity of an anchor depends on the conditions in place at the time it was created. Anchors can also be created on purpose, and NLP has evolved several techniques that can be used to create beneficial anchors, change anchors and remove negative ones. First, let's look at how anchors are created in everyday life, as this will help us understand what we need to do to purposefully create lasting and useful anchors. Any time someone is in an intense and associated state, whether it is a pleasant state or an unpleasant one, anchors are being created. This is because we will always make associations and links between that intense state and what we are sensing around us at the time. The things that we link the state to will differ from person to person and will depend on
- Their map of the world
- The preferences they have regarding representational systems
- How they filter their sensory information
- And even previously set anchors.
Given the same situation, one person might link their intense state to the person standing in front of them, another to the 'feel' of the room, yet another to the smell of the flowers on the windowsill. The link could be to a full description in all the senses of what they are sensing, or it could be a single aspect, like the fact that the other person is wearing a brown baseball cap. Another variation is that some links are made to a more general class of stimuli, so instead of a brown baseball cap, it generalises to any hat. Some anchors, even though long lasting, may never get triggered, as the conditioned stimulus never occurs again in that person's life. The anchor lies dormant. Some anchors are such common features of a person's life, for example blue sky or the feel of a handshake, that they get diluted by so many other links that, except in rare cases, they soon lose effectiveness. Other anchors are noticed quite consciously, as in these examples:
- When he looks at me like that it reminds me of my first boss who was always angry.
- When people get too close to me on the tube it reminds me of being shut in a cupboard as a child.
- Being in a group like this reminds me of being at school and being ashamed to say 'I don't understand.'
Note that these are all conscious examples. Many anchors will trigger a response although we have no conscious idea of where or when the anchor was created. Repetition will also condition a stimulus so that it becomes an anchor, even if the state is not intense. Repetition coupled with an intense state will create an incredibly strong and durable anchor. Repetition is the route that advertisers use to make a link between a nice state and their product. They show a person having a good time or a happy, successful life and hope that we will feel good and think 'that's what I want'. They then show their logo or brand name or play their jingle and hope that if they do this frequently enough, we will make the link. The movie Jaws created such powerful anchors that huge numbers of people who saw it developed a fear of swimming in the ocean or responded with fear to the very recognisable signature tune. The movie elicited states of fear in the audience many times with clever camera angles and music, and the audience then linked this fear with what they were experiencing.
(The anchors set by a movie will be either visual or sound links.)
Automobiles are four-wheeled motor vehicles designed to carry people and cargo. Powered by an internal combustion engine fueled by gasoline (a liquid petroleum product), they are one of the most ubiquitous of modern technologies and a symbol of twentieth-century industrial society. Modern automobiles were developed in the late 1800s, when inventors such as Gottlieb Daimler, Karl Benz and Nicolaus Otto perfected the gas-powered internal combustion engine, producing automobiles capable of replacing horse-drawn carriages on the streets and highways of Europe and America. By the 1920s automobile production was a multibillion-dollar industry, and Henry Ford introduced mass-production techniques that made cars affordable for the masses. Today, the automobile is the principal means of transportation for millions of people in many countries. Its convenience and adaptability have transformed the way we live and work. Despite its drawbacks, it is the most important invention of the twentieth century. Pros: Automobiles allow people to travel at their own pace and to avoid the crowdedness of public transportation. They can also carry more luggage and other items than a bus or train, making errands much easier. They are also more comfortable and can be customized for personal style and comfort. Cons: Cars are a major source of greenhouse gases, and relying on them can make a person’s life less sustainable. It can also be costly to maintain and operate a vehicle, and it is often difficult to find a place to park. The United States had a larger land area and more equitable income distribution than European countries, so it was natural that automobile production would begin here early in the twentieth century. With cheap raw materials and a long manufacturing tradition, American manufacturers were quick to capitalize on the market.
Over the past few years, multiple studies have found a link between hearing loss and an increased risk for dementia. Whether it's a causal or correlational relationship is still being researched, but the relationship is there. Across all studies, people with hearing loss showed greater signs of cognitive decline. There are three main reasons hearing loss could be linked to dementia: social isolation, an uneven strain on the brain's cognitive resources, and a change in the brain's natural function. It has long been known that maintaining social relationships, along with face-to-face communication, is a significant weapon against cognitive decline. According to Bryan James, of the Rush Alzheimer's Disease Center in Chicago, the human brain evolved to manage about 150 social relationships, and when we stop managing an adequate number of relationships, that part of the brain can atrophy. Also, healthy social relationships reduce chronic stress, which is linked to cognitive decline, Alzheimer's, and dementia. How does hearing loss fit into this? Well, hearing loss is naturally isolating. Even in a room full of people, a person with untreated hearing loss can quickly become disconnected. If a person cannot hear what is being said in a conversation, the conversation quickly fades. Over long periods of time, the undue strain of trying to hear and recognize each word eventually wins out over the desire to participate in social interactions. Not only does untreated hearing loss disengage the listener, but it also places an uneven amount of pressure on the brain's overall cognitive resources. This strain may not seem important, but it reduces the brain's capability to perform simultaneous, and oftentimes related, tasks. Spending too much mental energy on trying to hear what's being said, whether consciously or unconsciously, prevents that person from storing the event as a short-term memory. Hearing loss not only interferes with listening ability, but also with our overall ability to process information. Even mild hearing loss has been shown to get in the way of processing and storing quickly communicated speech. Since hearing is actually the brain's response to auditory signals, hearing happens in the brain rather than in the ears. Once these signals are weakened and consequently disrupted, other areas of the brain are called in to help. As a response, the brain begins rerouting other areas of itself in an attempt to compensate for the information being lost to hearing loss. Even if what is being said is successfully comprehended, as hearing decreases further and the inability to hear becomes the new normal, it modifies how the brain organizes activity, creating a change in the brain's natural function. In multiple Johns Hopkins studies, hearing loss has been linked to an accelerated rate of brain tissue loss, and this could be a result of altered brain function over time. Age naturally shrinks the brain, but hearing loss may accelerate this process.

Importance of Treating Hearing Loss
It is important to remember that hearing loss doesn't automatically equate to dementia. But dementia isn't the only reason treating hearing loss is important; it improves a person's overall quality of life. When hearing loss is treated, a person's cognitive improvement can be very significant, even when cognitive decline isn't directly associated with dementia. According to Duke University's Dr. P.
Murali Doraiswamy as quoted by AARP, the benefits of treating hearing loss are twice that of any cognitive-enhancing drugs currently on the market, and should be given utmost attention when treating any cognitive issues related to age. If that’s not enough motivation to treat hearing loss, treating hearing loss also strengthens social bonds, restores relationships, and can make a person feel years younger. Max Gottlieb is the content manager for Senior Planning and Prime Medical Alert. Prime Medical Alert provides medical alert systems to help seniors maintain their independence. Senior Planning is a free service designed to help seniors find long term care.
Fig. 1: Schematic diagram of a railgun. (Source: Wikimedia Commons)

When it comes to launching an object into space, most people think of a giant rocket like the ones they have seen in the news. Conventionally, rockets carry with them chemical fuel such as liquid oxygen that is used to drive them against gravity. The fact that rockets need a large amount of massive, non-reusable fuel makes the launch inefficient and extremely expensive. For instance, the rocket Atlas V costs around $125 million to launch payloads whose mass is only about 5% of the fuel mass. [1,2] Given this limitation of using rockets, scientists and engineers have been searching for alternatives to launch objects into space at lower cost. In this report, we will look at an inexpensive method, "railgun launch," which uses electromagnetic energy instead of heavy chemical fuel to overcome the issue of the small payload-to-lift-off mass ratio. We will see how this railgun method can launch suborbital payloads and will also discuss the limitations that make it unlikely to serve for stable orbit launch.

The structure of the railgun is illustrated in Figure 1. The idea is to convert the energy input from the external emf into the kinetic energy of the projectile through an electromagnetic mechanism. When the electric current (of magnitude I) travels through the rails and the projectile in the direction shown in the figure, according to Ampere's law there is a magnetic field curling around both rails, pointing upward in the region between the two rails. The Lorentz force law dictates that the projectile experiences a force in the direction orthogonal to the current and the magnetic field, and its magnitude is given by

F = (1/2) L' I²

where L' is the inductance per unit length of the rails. Applying the work-energy theorem, we can derive the kinetic energy and the velocity of the projectile at the muzzle:

KE = (1/2) L' I² l    and    v = I (L' l / m)^(1/2)

where l is the distance traveled by the projectile inside the railgun and m is the projectile mass. With these equations, we can calculate how much electrical energy input is needed to yield the desired muzzle velocity of the projectile.

Let us say that we want to launch a sounding rocket whose primary purpose is to conduct scientific experiments. Typically, sounding rockets carry payloads as heavy as 500 kg to heights between 50 km and 1500 km and stay in space for up to 20 minutes. In the railgun experiment, though, we are talking about payloads of mass around a few kilograms. Consider a projectile of mass 3.9 kg carrying a 1 kg payload. To reach an apogee of 120 km, for example, a muzzle velocity of 2.1 km/s (8.6 MJ of kinetic energy) is required. If our launcher has 33% conversion efficiency, we need to supply it with 27 MJ of electrical energy for the projectile to reach the desired muzzle velocity.

Fig. 2: Hypersonic velocity of the projectile generates extreme heat load and large air drag. (Source: Wikimedia Commons)

Simulation results have shown that such a suborbital payload launch is feasible given appropriate aerodynamic properties of the projectile as well as its ability to tolerate heating (around 1000 K) from atmospheric friction. In December 2010, real tests of railgun launch succeeded in reaching a muzzle energy of 32 MJ, which was more than enough to launch the suborbital payloads in our consideration. We can see from the previous discussion that railgun launch is very efficient. The payload to lift-off mass ratio is around 25%, which is five times better than that of a conventional rocket.
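The figures quoted above can be checked with a few lines of arithmetic. The sketch below takes the article's numbers (3.9 kg projectile, 1 kg payload, 2.1 km/s muzzle velocity, 33% conversion efficiency) and applies only the standard kinetic-energy formula KE = (1/2)mv²; the variable names are mine, and the final comparison with a roughly 10 km/s orbital-class launch anticipates the discussion that follows.

```python
# Sanity check of the launch figures quoted in the article.
projectile_mass = 3.9      # kg (includes the 1 kg payload)
payload_mass    = 1.0      # kg
muzzle_velocity = 2100.0   # m/s, needed to reach a ~120 km apogee
efficiency      = 0.33     # electrical-to-kinetic conversion efficiency

kinetic_energy = 0.5 * projectile_mass * muzzle_velocity**2   # joules
electrical_energy = kinetic_energy / efficiency               # joules

print(f"Muzzle kinetic energy : {kinetic_energy / 1e6:.1f} MJ")          # ~8.6 MJ
print(f"Electrical energy in  : {electrical_energy / 1e6:.0f} MJ")       # ~26 MJ (article: 27 MJ)
print(f"Payload / lift-off    : {payload_mass / projectile_mass:.0%}")   # ~26% (article: ~25%)

# For comparison, a low-Earth-orbit launch at ~10 km/s muzzle velocity
# would need roughly (10 / 2.1)**2, i.e. about 23 times more kinetic energy.
orbit_ke = 0.5 * projectile_mass * 10_000.0**2
print(f"KE at 10 km/s         : {orbit_ke / 1e6:.0f} MJ")                # ~195 MJ
```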
The launcher itself can also be reused as many times as we want, allowing room for repeated experiments at low marginal cost. Still, there are several limitations of this "simple" railgun method that prohibit higher-altitude launch (see Fig. 2). To launch a payload to a low Earth orbit, for instance, would require muzzle velocities of order 10 km/s, which in reality is not achievable with this approach: atmospheric friction would melt the payload, and air drag would slow this unpowered projectile to a halt before it reached orbital altitude. Some have proposed the idea of multistage railgun launch, but more research and development are still needed before railguns can realistically be used to launch payloads into stable orbits. [6,7]

© Saranapob Thavapatikom. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.

[1] "Semi-Annual Launch Report: Second Half of 2009," U.S. Federal Aviation Administration, HQ-10998.INDD, May 2010.
[2] "Atlas V Launch Services User's Guide March 2010," United Launch Alliance, March 2010, pp. 1-5.
[3] J. Behrens et al., "Hypersonic and Electromagnetic Railgun Technology as a Future Alternative for the Launch of Suborbital Payloads," in 16th ESA Symposium on European Rocket and Balloon Programmes and Related Research, 2003 (SP-530) (European Space Agency, 2003), p. 185.
[4] G. Seibert, "The History of Sounding Rockets and Their Contribution to European Space Research," European Space Agency, HSR-38, November 2006.
[5] S. Hundertmark, "Applying Railgun Technology to Small Satellite Launch," Proc. 5th Int. Conf. on Recent Advances in Space Technologies, 9 Jun 11, p. 747.
[6] O. Bozic, M. Schneider and D. Porrmann, "Conceptual Design of 'Silver Eagle' - Combined Electromagnetic and Hybrid Rocket System for Suborbital Investigations," in Proc. 61st Int. Astronautical Congress (IAC 2010), Vol. 12, p. 9735 (Curran Associates, 2011).
[7] M. Schneider, O. Bozic and T. Eggers, "Some Aspects Concerning the Design of Multistage Earth Orbit Launchers Using Electromagnetic Acceleration," IEEE Trans. Plasma Science 39, 794 (2011).
Exposure is the control of light reaching the image sensor via the combination of aperture and shutter speed. The sensor's sensitivity to light can also be adjusted via the ISO setting for additional control over how bright the recorded image is. To determine what exposure will give the desired image brightness, the camera will meter the light radiating from the subject and calculate the exposure settings required for the image sensor to effectively record the scene. Irrespective of how much ambient light is available, you can safely assume that unless you are in total darkness there will be enough light out there to capture and therefore record an image. The final quantity of light that reaches the image sensor is determined by a few primary factors, chiefly the aperture and the shutter speed, along with some secondary ones.

The aperture is a variable-size hole that controls the amount of light that can pass through the lens when you press the shutter. The larger the area of this hole, the more light will reach the sensor whilst the shutter is open.

Focal Ratio (F-Number or F-Stop)
The focal ratio is the focal length of the lens divided by the diameter of the aperture. This gives us the F-numbers or F-stops, the numbers in the sequence F2, F4, F5.6, F8, F11, F16, F22 and so on that you find on your lens. IMPORTANT: being ratios, F-number values get bigger as the aperture size gets smaller! So F2 is a much larger aperture than F8 (in the same way as 1/2 a pizza is a much bigger slice than 1/8th of a pizza!). The focal ratio ensures that the amount of light (and therefore the exposure) is constant for a given F-number. So, for example, if you have good exposure at a focal length of say 50mm and an aperture of F5.6 and you then "zoom in" to a focal length of 200mm, so long as you maintain the aperture at F5.6 you can be sure the exposure will still be good. If you would like a more thorough explanation of how aperture, focal length and F-numbers affect digital photography exposure please read Shedding the Light on F-Numbers.

The shutter speed impacts the quantity of light reaching the sensor by determining the period of time (normally measured in seconds or fractions of a second) that light is permitted to pass through the aperture. A fast shutter speed (i.e. a short length of time) therefore allows less light than a slow shutter speed (i.e. a long period of time). Depending on the type of camera, shutter speeds can range anywhere from 1/8000 of a second to 30 seconds. Some cameras also have what is called a BULB setting which allows the photographer to manually determine how long the shutter is open. Just as aperture affects depth of field as well as exposure, the choice of shutter speed is also important for achieving the desired image quality and creative effect.

Unlike aperture and shutter speed, the ISO setting doesn't actually control the amount of light reaching the image sensor. Rather, it electronically changes the way the image sensor manages the light it receives. In simplified terms, each time the ISO setting is doubled the image sensor's sensitivity is increased such that it will record the image as though the actual light itself had been twice as high. Unfortunately, raising the ISO has a detrimental effect upon image quality, because the higher the sensitivity of the sensor, the greater the susceptibility of each pixel to record digital noise. The range of ISO settings over which image quality remains acceptable is improving rapidly as digital technology advances.
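To put numbers on the relationships described above, here is a small illustrative sketch of how the light admitted scales with the f-number and shutter speed, and how two different aperture/shutter combinations can give the same exposure. The exposure-value bookkeeping, EV = log2(N²/t), is a standard photographic convention rather than something defined in this article, and the function names and sample settings are invented for illustration.

```python
import math

def relative_light(f_number, shutter_seconds):
    """Light reaching the sensor, relative to f/1 for 1 second:
    proportional to aperture area (1 / N**2) times the time the shutter is open."""
    return shutter_seconds / f_number**2

def exposure_value(f_number, shutter_seconds):
    """Conventional exposure value, EV = log2(N**2 / t).
    One EV step (one 'stop') is a doubling or halving of the light admitted."""
    return math.log2(f_number**2 / shutter_seconds)

# Standard full-stop f-number sequence (the article's list, with f/2.8 added):
# each step roughly halves the aperture area, i.e. one stop less light.
for n in (2, 2.8, 4, 5.6, 8, 11, 16, 22):
    print(f"f/{n:<4} relative light at 1/125 s: {relative_light(n, 1/125):.6f}")

# Two settings with essentially the same EV give the same exposure:
print(round(exposure_value(5.6, 1/60), 1))   # ~10.9
print(round(exposure_value(8,   1/30), 1))   # ~10.9 (1 stop smaller aperture, 1 stop longer shutter)

# Doubling the ISO brightens the recorded image about as much as one extra
# stop of light, which is why raising ISO lets you keep a faster shutter
# speed or a smaller aperture in dim conditions.
```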
The ability to achieve the effect of increasing exposure without actually increasing the amount of light via a larger aperture and/or slower shutter speed has numerous benefits, for example in situations where traditionally the only way to get enough light to the sensor was either a very large aperture lens (bulky, expensive and with shallow depth of field) or a slow shutter speed (blurred images due to camera shake or subject motion). These days, with a good DSLR camera, particularly when combined with an image stabilization system, it is possible to capture good quality images in these conditions, even without a tripod or flash. That said, it's still best practice to keep "noise" to a minimum by, whenever possible, adjusting the exposure primarily via aperture and shutter speed and only increasing ISO if the exposure (or creative effect) cannot be achieved via those settings alone. The highest ISO setting that you ultimately use will depend on your particular camera and on how much noise you are prepared to accept.

As you know, your final image is the result of how much light is converted into data at each and every pixel (if you weren't aware of this, no problem, click here for BASICS OF DIGITAL PHOTOGRAPHY). So, in order for the final image to be perfectly exposed, you would ideally determine the exact amount of light needed at each pixel and adjust the camera settings accordingly. Now, if you have a fairly typical digital camera of maybe 10 megapixels or above, this would mean simultaneously setting 10 million individual aperture, shutter speed and ISO settings for each and every photo! Clearly this is a practical impossibility. Fortunately, in reality it is not necessary to do this, as dividing the image into a few zones (typically around 35) and then averaging the amount of light needed by each is usually good enough to calculate the exposure needed. (Don't worry, your camera's on-board computer does this calculation for you.) Sometimes, however, it might be appropriate to consider the optimum light needed to expose some parts of the image more than others, particularly if your primary subject occupies only a small part of the frame and is significantly more dimly (or brightly) lit than the rest of the image. In these cases you will want to use partial metering. In partial metering, rather than computing the average light needed across ALL zones of the image, greater weight is given to the light demanded by the important part of the image and the exposure is calculated with this in mind. This will of course result in the other areas of the image being over- or underexposed; nevertheless this is preferable to the main subject being too light or dark. Most modern cameras offer several metering options which average the exposure needs of the image in a variety of ways. These are EVALUATIVE, PARTIAL, SPOT, and CENTRE WEIGHTED. Consult your camera instructions for details of how each is used to determine exposure and for an indication of when to use each.

There is no such thing as the CORRECT digital photography exposure. The more appropriate question that YOU the photographer should ask yourself is, "will (or did, if you have already taken the photo and are reviewing it on your camera screen) the exposure controls selected yield the image result desired in respect of brightness of the subject, depth of field, sharpness and image quality?" When the answer to all these questions is YES, then the exposure is "correct" for that particular image.

Auto versus programmable and manual exposure
Your camera will probably have various exposure mode options. So which should you use? Well, that's where YOUR JUDGMENT (and skill, experience and knowledge) comes into play. Left on full auto, your camera will be programmed to give a result reasonably expected to be desirable for general everyday photos (i.e. with mid-range aperture, shutter speed and ISO levels such that the final image is relatively balanced for brightness over the entire frame). The scene modes will take into consideration other factors normally relevant for that type of scene. For example, SPORTS mode takes into account that fast-moving subjects need a fast shutter speed to avoid motion blur, and so will obtain the exposure with faster shutter speeds (and correspondingly larger apertures), whereas LANDSCAPE mode will consider the desire for a large depth of field to ensure both near and far objects are in focus, and will therefore use small apertures and therefore slower shutter speeds. The more familiar you are with the concepts of photography and with the range of settings of YOUR camera (and how readily you can access and set them), the more often you will find yourself taking control of the camera's programmable options. This will come with practice, and as said elsewhere on this site, the real beauty of digital photography is that after the initial investment in equipment, taking pictures is essentially free, and of course it's possible to instantly review your pictures to see if your settings are giving the results you want. However, if you are unsure and are shooting an event in real time that is important, unlikely to wait for you, or to be repeated, then there is no shame in resorting to those scene modes or even full auto; after all, they are designed based on the input of photography professionals with years of experience to give the best result in the majority of situations. Don't worry: with time and practice you WILL master digital photography exposure!
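As a footnote to the metering discussion above, here is a small illustrative sketch of the difference between averaging all metering zones equally (evaluative-style metering) and weighting the zones covering the main subject more heavily (partial or centre-weighted metering). The 3x3 zone layout, the readings and the weights are invented for illustration; real cameras use many more zones (the article mentions around 35) and proprietary weighting algorithms.

```python
def metered_brightness(zone_readings, weights=None):
    """Average the per-zone light readings, optionally weighting some zones
    (e.g. those covering the main subject) more heavily than the rest."""
    if weights is None:
        weights = [1.0] * len(zone_readings)
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(zone_readings, weights)) / total_weight

# Invented example: a dim subject in the centre of a bright scene.
# Readings are arbitrary relative luminance values for 9 zones.
zones   = [90, 95, 92,  88, 20, 90,  93, 91, 94]   # centre zone (20) is the subject
partial = [ 1,  1,  1,   1, 10,  1,   1,  1,  1]   # weight the centre zone heavily

print(round(metered_brightness(zones), 1))            # 83.7 -> exposure set for the bright background
print(round(metered_brightness(zones, partial), 1))   # 51.8 -> exposure biased toward the dim subject
```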
Quadrant Math Graph. Look at the figure shown below. A graph quadrant is one of the four sections of a Cartesian plane: the x (horizontal) and y (vertical) axes divide the plane into four quadrants, and the meeting point of the two axes is called the origin. By convention the quadrants are labelled with Roman numerals. The upper right quadrant is called Quadrant I and contains only points that have both positive x and positive y values. In Quadrant II, x has negative values and y has positive values; observe the blue dot on the coordinate graph, which lies up and to the left of the origin and is therefore in the second quadrant. Quadrants III and IV complete the plane, covering the points where y is negative. To practise, check the solved questions on all four quadrants, the signs of coordinates, plotting points, graphs of simple functions, and so on; a worksheet on the coordinate graph is a good place to begin your math practice.

Graph paper for working with quadrants comes in several forms. The single quadrant graph paper can be customized to your requirements and has options for one grid per page, two per page, or four per page, while the four quadrant graph paper can produce either one grid per page or four grids per page. The polar coordinate graph paper may be produced with different angular coordinate increments: you may choose between 2 degrees, 5 degrees, or 10 degrees. Every fifth line is bold to help students visually find a point more easily and as a precursor to using graphs labeled by 5s, and students can use this graph paper before they have learned how to label or use graphs with a scale greater than 1. The gray lined paper is most useful if you need to draw over the existing lines and highlight your own figures. Use this paper for graphing coordinates in quadrant I, II, III, or IV. Three levels are included so you can choose the best length and difficulty for your students.
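The quadrant of any point follows directly from the signs of its coordinates, as described above. A minimal sketch in Python (the function name and example points are illustrative):

```python
def quadrant(x, y):
    """Return which quadrant the point (x, y) lies in, following the convention
    above: quadrants are numbered I-IV counter-clockwise, starting from the
    upper right, where both x and y are positive."""
    if x > 0 and y > 0:
        return "Quadrant I"
    if x < 0 and y > 0:
        return "Quadrant II"
    if x < 0 and y < 0:
        return "Quadrant III"
    if x > 0 and y < 0:
        return "Quadrant IV"
    return "On an axis (or at the origin), so not in any quadrant"

print(quadrant(3, 5))    # Quadrant I
print(quadrant(-2, 4))   # Quadrant II (x negative, y positive)
print(quadrant(0, 7))    # On an axis ...
```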
Since beginning to study nutrition, scientists have learned a lot about how what you eat affects your weight. However, the biggest lesson picked up so far may be that there's still a lot to learn. It's hard to understand how someone who eats so much can keep from gaining weight, but researchers have several theories.

In the Genes
Up to 70 percent of the factors that make up our body weights are genetic, according to Michael Cowley, director of the Monash University Obesity and Diabetes Institute. People who seem to stay slim may be genetically predisposed to that body type, or they may have genes that influence appetite regulation in a different way than those of people who are overweight. Some people's genes spur them to eat less and feel more conscious of when they are full, says Cowley.

A "Set Point"
According to a 2013 article published by Evidence Magazine, every individual has a "set point," or a weight range that his body attempts to achieve and maintain over time. Some people have naturally lower set points than others, and if they begin to consistently overeat, their bodies are programmed to decrease appetite signals and prevent weight gain beyond their set point range. That's why if a group of people of similar starting heights, weights and ages all eat more calories than they need, their bodies may react very differently to those extra calories.

People who are very active are able to maintain their figures despite eating a lot because their bodies need more calories and also burn more calories throughout the day. Active people and athletes typically have more lean muscle mass than sedentary individuals, and muscle mass burns more calories at rest than body fat does. Muscle mass is also denser than fat, so small but active people appear more compact and leaner even though they may actually weigh as much as or more than people their height who have excess body fat. Another factor that influences your weight is your basal metabolic rate, or BMR, otherwise known as the number of calories your body burns in a resting state every day. If you have a high metabolic rate, you may be able to eat much more than others and still not gain weight. Genes are just one variable that influences your BMR. Others include your age, height, starting weight, physical activity level and muscle mass percentage.
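The article notes that BMR depends on factors such as age, height, weight and body composition. One commonly used population-level estimator, which is not mentioned in the article and cannot capture the genetic or muscle-mass differences discussed there, is the Mifflin-St Jeor equation; a rough sketch with illustrative numbers:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    """Rough estimate of basal metabolic rate in calories per day using the
    Mifflin-St Jeor equation. This is only a population-level approximation;
    individual BMR varies with genetics, muscle mass and other factors."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + 5 if sex == "male" else base - 161

# Illustrative comparison: same height, weight and age, different sex.
print(round(bmr_mifflin_st_jeor(70, 175, 30, "male")))    # ~1649 kcal/day
print(round(bmr_mifflin_st_jeor(70, 175, 30, "female")))  # ~1483 kcal/day
```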
Rudolf may have had a glowing red nose, but real Arctic reindeer have eyes that shine a different hue depending on the season: deep blue in the cold, dark winter, and golden in the summer. For more than a decade, no one could explain the color difference. Now, a study conducted by Glen Jeffery, from University College London, suggests a reason behind this mysterious seasonal change. Jeffery examined the Arctic reindeer's tapetum lucidum, a special membrane at the back of the animal's retina that acts like a mirror to reflect light and helps improve night vision. Such "eye shine," or "cat's eye," is found in a number of animals that travel at night, from house cats and raccoons to opossums, crocodiles and lemurs. When the tapetum changes color, it reflects different wavelengths of light, which vary by season. In the summer, the golden tapetum casts most of the light back out of the eye. On the flip side, during the winter's darkness, the deep blue tapetum bounces less light out of the eye. Instead, it scatters the light inside the eye, allowing it to be absorbed by photosensitive eye cells, which, according to Jeffery, probably improves a reindeer's chances of seeing moving predators in the dark. Scientists are still unsure what causes the color change, but Jeffery theorizes that it might have something to do with a shift in pressure within the eyeball. In the winter, pressure increases in the eye because of pupil dilation, preventing fluid from draining naturally. This compresses the tapetum, which in turn makes the tissue reflect shorter wavelengths, notably the blue light common in Arctic winters. This could be the key to a reindeer's striking blue eyes come Christmas. Rudolf would be so jealous.
On February 8, 1887, President Grover Cleveland signed the Dawes Severalty Act into law. The Dawes Act created a process to split up Indian reservations in order to create individual parcels of land and then sell the remainder off to white settlers. One of the worst laws in American history, the Dawes Act is a stark reminder not only of Euro-American colonialism and the dispossession of indigenous peoples, but also of the role that dominant ideas of work on the land have played in promoting racist and imperialist ends. We might not think of the Dawes Act as labor history. But I want to make the beginning of a case that it is absolutely central to American labor history, a point I will expand upon in the future. Labor history is not just unionism. It is histories and traditions of work. The Dawes Act was absolutely about destroying traditions of Native American labor and replacing them with European notions of rural work. That it did so while opening more land to white people was a central benefit. Now, it's worth noting that there is nothing like a single "Native American tradition of work," now or ever. There were thousands of different ideas of labor. Eventually, I'm going to try and touch on a few specific examples of 18th and 19th century Native American labor. The Dawes Act was largely directed at the Native American populations that had developed their cultures and work systems around horses and nomadism. Acquiring horses by the early 18th century, some peoples such as the Crow, Comanche, Utes, Blackfeet, and others made the conscious decision to convert to horse-bound hunting cultures, which created entirely new ideas of work that included men on long hunts, women treating bison hides, horse pastoralism, and other labors to create a bison economy. These choices allowed them to resist white encroachment with real military might. It also meant they received quite sizable reservations when the U.S. signed treaties with the tribes in the post-Civil War period.

"Cree Indians Impounding Buffaloes," from William Hornaday's The Extermination of the American Bison.

At the same time, white Americans were populating the West through the auspices of the Homestead Act of 1862. Beginning with the Northwest Ordinance, white Americans had gridded the land to sell it off in 160-acre parcels. This led to the relatively orderly (and lawsuit-free) population of the West once Native Americans had been pushed off. The Homestead Act encouraged this process across the Great Plains. Although it had little immediate effect because of the Civil War, beginning in the late 1860s white Americans began pouring into the Plains.

White ideas of rural labor on the Great Plains.

So when whites saw relatively few Native Americans holding legal title to vast tracts of land on the Great Plains and in the American West, it offended both their notions of race and of work. Whites saw land as something to be "worked" in very specific ways. Work meant the individual ownership of land or resources that create capital accumulation as part of a larger market economy. Proper labor "improved" upon the land; because Native Americans conceptualized the land differently, they did no legitimate work. The actual tilling of land for cash crops was the only appropriate labor upon the land, once existent resources like timber, furs, or minerals were extracted. The land did all sorts of work for Native Americans before 1887. It fed the bison upon which they had based their economy since they acquired horses in the early 18th century.
It provided the materials for their homes and spaces for their camps. It also provided fodder for those horses. To whites, this was not work. It was waste typical of a lesser people. The Dawes Act split up the reservation lands so that each person received 160 acres of land, the amount a white settler would receive under the Homestead Act. After allotment, the remainder of the reservations could be divided under the normal methods of the Homestead Act. Native Americans could not sell their land for 25 years. At the end of that time, they had to prove their competency at farming, otherwise the land reverted back to the federal government for sale to whites. By trying to turn Native Americans into good Euro-American farmers, the Dawes Act also upset the relationship between gender roles and work among many tribes. To generalize, men hunted and women farmed. But with the single-family breadwinner ideology of whites thrust upon them, it turned farming into men’s work, which many Native Americans resisted and resented. Naturally, there was the usual language of concern for Native Americans in creating the Dawes Act. Cleveland claimed he saw this as an improvement on Native Americans wandering around their desolate reservations. I don’t want to underrate how tough those lands were by 1887; with the decline of the bison, an intentional effort by the federal government to undermine food sources and the willingness of Indians to resist conquest, poverty and despair was real. But of course, whites had created this situation and the “solution” of dispossessing Native Americans of the vast majority of their remaining lands was hardly a solution at all. Allotted land for sale. The Dawes Act devastated Native American landholdings. In 1887, they held 138 million acres. By 1900, that had already fallen to 78 million acres and by 1934 to 48 million acres. About 90,000 people lost all title to land. Even if Native Americans did try to adapt to Euro-American notions of labor on the land, the land itself was mostly too poor, desolate, and dry to farm successful crops. The Indian schools such as Carlisle continued this reshaping of Native American work, theoretically teaching students skills they could take back to the reservations, but there was little use for many of these skills in the non-existent post-Dawes Act indigenous economies. Plus that goal was always secondary to killing Indian languages, religions, and traditional workways. The Dawes Act finally ended in 1934 with the U.S. Indian Reorganization Act. There were many acts and events that ruptured the relationship between indigenous labor and the land in the late 19th century American West. The Dawes Act is among the most important. By thinking of the Dawes Act in terms of the relationship between nature, labor, and racial notions of proper work upon the land, we can expand our understanding of both labor history and the history of Euro-American conquest of the West. This is the 51st post in this series. The rest of the series is archived here.
Synthesize Knowledge into Understanding
We live in the Information Age. Each of us, equipped with network access, has the ability to retrieve so vast an array of facts and figures that the storage of these data in the form of printed text would fill one's local library to overflowing. In a recent achievement, a computer nicknamed Watson was able to defeat human opponents in the game show Jeopardy using a vast knowledge base drawn largely from sources on the World Wide Web. Watson's victory signals a remarkable advance in computer technology because the game poses clues riddled with puns and complex wordplay. The engineers showed that the computer can sort and access a broader information base than the human mind, and can even be taught to dissect the vagaries of human language. Despite this advance, can we consider Watson "educated" in any sense of the term? Many descriptions of the education process refer to "Levels of Cognition." Simple knowledge, as in facts and numbers, is the lowest level of cognition. However, a successfully educated student draws inferences and conclusions based upon his or her knowledge, extrapolating to new levels of understanding. While the internet can provide all people with access to vast quantities of data, only the educated individual has the ability to synthesize knowledge into understanding. Therefore, our goal is to build 'knowledge' of human behavior and its products, the diversity of peoples and cultures, and of the natural and physical world through the study of sciences, technologies, humanities, arts, and social sciences. This progression is illustrated in Bloom's Taxonomy.

Levels of Cognition from Bloom's Taxonomy

We have identified five areas for which the student should acquire knowledge and understanding:
- Study the Natural World
- Appreciate the Fine and Performing Arts
- Address Problems Using Critical Analysis and the Methods of the Humanities
- Understand, Observe, and Analyze Human Behavior and the Structure and Functioning of Society
- Understand Significant Links between Technology and the Arts, or between Science and Society.
Having acquired this knowledge and understanding, students in the General Education system will follow up on their interests in several of the areas they have studied at the Versatility level.
Common pests can cause serious health problems! Pests such as weeds, cockroaches and rodents, as well as the chemicals we use to control them, can cause and trigger allergies and asthma by contaminating our air indoors.

What is IPM? Integrated Pest Management (IPM) is a method that focuses on knowing the pest in order to prevent pests from getting out of control. IPM is safer because non-chemical methods are the first line of defense. If chemicals must be used, always choose less hazardous products. Be sure to read warning labels before using any chemical products.

Use IPM to eliminate pests safely

Step One: Find out what kind of pests you have and where they are coming from. Each pest has different habits so it's important to "know your enemy!" For rodents and roaches, sticky traps can tell you what and where they are.

Step Two: All pests look for food, water and shelter. If you understand what they want, you can take it away. This is the most important step in IPM and prevention!
- Keep living areas clean and uncluttered.
- Put food in tightly sealed containers.
- Keep trash in a closed container.
- Fix plumbing or water leaks.
- Seal entry points such as gaps in walls, pipes, pavement and other surfaces using caulking, steel wool, or other pest-proof materials.

Step Three: Use traps and baits first, along with less-toxic dusts such as boric acid.
- Put the bait close to the pest's hiding place.
- Do not spray any pesticides near baits; sprays will keep the pests away from the bait.
- Choose and use chemicals very carefully!
- Read the label - it has valuable information on proper use.

Step Four: Continue monitoring with appropriate methods to track progress or the need for further steps such as bait rotation, treatment of adjacent units, etc. Ongoing monitoring is one of the most important steps in effective pest management.
Obsessive-compulsive disorder (OCD), the most severe of the anxiety disorders, is characterized by recurrent, intrusive thoughts or obsessions—irrational fears of danger, illness or germs, for example—and repetitive, ritualistic behaviors and compulsions, such as constant hand washing. The symptoms of OCD generally begin manifesting in childhood or adolescence, suggesting that there may be something abnormal occurring during the brain’s development. The posterior medial frontal cortex (PMFC) and the ventral medial frontal cortex (VMFC) are regions of the brain that have been found to be hyperactive in people with OCD. In tests that Dr. Fitzgerald and her team conducted with OCD children and normal controls performing cognitive tasks, the OCD children performed correctly, but imaging studies showed that their brains had to work harder to filter out distracting information to get the correct response—they had too much activity in both regions. In the resting state, the PMFC was insufficiently connected to a posterior part of the brain important for task control. For both filtering out distracting information and responding to errors, the VMFC was activated in the OCD children, when it should have been turning off. The VMFC, too, was abnormally connected within brain networks. A next step will be to study larger numbers of children with OCD at each age, following them to see how their brains change over time. This is especially important because recent studies suggest that OCD may, in some cases, remit. With good treatment, in particular cognitive behavioral therapy (CBT), with or without medications, up to 40 or 50 percent of OCD children get better. A particularly effective form of CBT for OCD is what is known as exposure and response prevention. There are also good medications, but many people have to go through three or four different drugs to find one that works for them. What is not known as yet is how to predict which people will get better. If it becomes possible to characterize the brain, and how development may be going awry in the particular regions or networks of brain regions to give rise to OCD, it may be possible to use imaging tools to predict who is at risk for developing the disorder and to come up with early interventions and possibly preventative measures.
By Erica Vendituoli and Ricardo Bercerria

In 2010, a new standard for education was adopted among many states in the United States. The Common Core, a set of English language arts and mathematics standards, was created to provide stable learning goals that would prepare America's students for their professional careers after public education. These standards have caused serious debate among those in education with regard to the role of standardization in education; some believe standardization allows administrators to monitor success across large groups of students, whereas others believe that standardization wrongfully treats students as the same and does not emphasize creative thinking and the arts. The Romantic writer Friedrich Schiller set out his views on education in 1794 in Letters on the Aesthetic Education of Man, a work whose arguments bear directly on a policy like the Common Core. Schiller's arguments give us insight into how a policy like the Common Core would stifle students' exposure to the arts. Schiller would strongly oppose the Common Core and would argue for an education that incorporates more of the arts, because he believes the arts create freedom of thought and a prosperous society.

In his second letter, Schiller describes what he believes should matter most in society. Of the kind of art he values, Schiller says, "This kind of art must abandon actuality, and soar with becoming boldness above our wants and needs… for art is the daughter of freedom." Schiller values freedom in order to have a prosperous society. This is where Schiller's philosophy would strongly oppose the Common Core. The Common Core represents the strictness and uniformity that Schiller is so firmly against. Schiller believes that art must 'soar with boldness'; this curriculum takes away teachers' power to create bold teaching plans that incorporate art and new ways of thinking. Teachers are so focused on math and English, in pursuit of the desired test scores, that they do not focus on creative aspects of education like science and the arts. Schiller believes that the arts inspire greatness and are at the core of freedom.

In his letters, Schiller directly opposes the kind of structure that this standard is creating, while acknowledging that his own society held similar values. He states, "utility is the great idol of our age, to which all powers are in thrall and to which all talent must pay homage." Schiller knows that society often values things it can measure, such as test scores. Yet he argues that thinking beyond mere utility is the key to attaining true freedom. In the case of schools, students must learn beyond the results and content of standardized tests in order to truly obtain knowledge that will help society become a better place. The arts, history, science, and exploratory fields, as well as education on morals, ethics, and philosophy, are needed in schools to cultivate innovation. Though he was writing to his own people of the 18th century, Schiller provides a meaningful message to those in education today: a well-rounded education that promotes bold thinking and values the unique aspects of each student is the only system that will foster the freedom and innovation that characterize the culture of the United States.
In late summer or fall, the resident queen wears out and dies, and the nest ceases to function as a colony. The workers and males die when the weather turns cold.

Sweat bees (Halictinae) are Missouri's other group of social bees. There are at least 70 species of sweat bees in Missouri. Most are small and inconspicuous and have not been studied. Some are solitary, but a number are known to be social. Like the bumblebees, they live throughout the state in all habitats. True to their name, they have the habit of landing on human skin to lick perspiration.

Bees are among the few insect groups that build nests, and the females do all the work. Nests contain one to many cells; a cell is where a single bee usually develops to adulthood. The architecture of these nests varies tremendously, depending on the kind of bee. Many ground-nesting bees dig simple, shallow holes in the ground, sometimes with side branches, while some social sweat bees create elaborate underground labyrinths with many branches. Certain leafcutter bees string together a linear series of leafy cells inside a hollow twig. Other species related to the leafcutters attach fragile, resinous globes to twigs or the undersides of rocks. Some bumblebees may convert a mouse or bird's nest into a wax-based dwelling. Nests are valuable in the bee world and are sometimes aggressively stolen by other bees.

Certain bees take this behavior a step further. While most bees make their own nests, some mimic the habits of the cowbird. Referred to as cleptoparasites (or parasitic bees), they do not collect pollen or make nests but use the nests of non-parasitic species as their own. The "host" bee usually doesn't even realize it. Unlike the cowbird, which parasitizes the nests of many different kinds of birds, parasitic bees are usually much more particular about their hosts: bees that attack leafcutter bee nests don't attack sweat bee nests, and vice versa. Because they lack pollen-collecting hairs and often have slender physiques, parasitic bees often look more like wasps than bees. Nearly 100 species of bees in Missouri are parasitic.

Bees are important pollinators of plants in Missouri, and nearly everywhere else. Without pollination, plants could not reproduce sexually, and there would be far fewer seeds and fruits. To produce their full complement of seeds, most flowers require multiple visits by pollinators, not just one. Bees are superior pollinators because of their reproductive requirements for pollen and nectar, their adaptations for collecting pollen, and their habit of visiting flower after flower. They transfer many pollen grains to plant stigmas (female reproductive organs in plants) and effect cross-pollination more dependably than any other insects. In addition, bees display flower constancy, the habit of visiting flowers of the same species consecutively during a foraging trip. In this way, plants are cross-pollinated with "the right kind" of pollen.

Over time, many plants have developed pollination systems that depend on bees. Missouri examples include most of the flowers in the bean family (Fabaceae), the daisy family (Asteraceae), the rose family (Rosaceae), gentians and many others. Many native fruits eaten by birds and other wildlife develop from bee-pollinated flowers, such as blueberries, plums, serviceberries, buckthorns and ground cherries. Some flowers require certain kinds or sizes of bees for pollination; wild indigos, for example, which are common on prairies and glades, have large flowers that can be pollinated only by large, strong bees, primarily queen bumblebees.

Far from being threatening, our native bees are the core component of a pollinator force that powers much of natural Missouri. They carry it, and us, across the void of winter into the growing seasons, from one year to the next, from generation to generation.