A supercomputer is a type of computer that has the ability to perform very complex processes and actions. With many thousands of processors, these machines are able to perform trillions of calculations and operations per SECOND! Supercomputers have indeed accelerated humanity’s technological progress in several respects, from everyday life enhancements, to maximizing commercial profit, to socioeconomic outlooks and projections, and even space exploration. Supercomputers have made the seemingly impossible, well, possible. The impact they have had on humanity is undeniable, but what is the next step in the world of computing? What benefits could that bring us, and would the risks that come with that next step outweigh those benefits? Technological giants, namely Google and IBM, have been in a race to develop the first multipurpose quantum computer. The way quantum computers work provides us with means of encryption, decryption, analysis, and unprecedented problem-solving power. However, this does come at a high cost, be it financial or in potential detriments to online security and privacy; it is just a matter of how efficiently and safely the world manages this new groundbreaking technology.
Quantum in the world of Telecom
Quantum communication is an emerging branch of telecommunications engineering, and it is based on the principles of quantum mechanics. Managing to build a quantum network will open doors to endless possibilities for the telecommunications industry. This network would hugely increase the bandwidth of optical telecom networks, making data transfer faster and more efficient than ever before. Quantum technology will also significantly increase internet speeds, allowing better development of IoT. According to a report by Research And Markets in 2018: “The global market for quantum computing services and hardware will exceed $6.4 billion USD by 2023. Future technologies, like 5G and quantum computing, are instrumental in delivering fixed-point wireless broadband service, as well as IoT, AI, and smart cities technologies.” Recently, a report by Google stated that it had attained quantum supremacy, meaning it had developed the first functional quantum computer that was able to solve a complex problem. Great, Google made a computer, so what is the big deal? The problem that it solved would have taken the fastest traditional supercomputer ten thousand years to solve. Google’s computer solved it in 3 minutes and 20 seconds! However, it is worth noting that this specific problem had been chosen to suit the computer. In reality, a quantum computer that can be used to solve practical tasks is still years away.
Quantum will change the world
But when we do reach that point, we can expect to process information and solve complex problems at unprecedented speeds. All sectors, from business to the military, will benefit from the exponential increase in speed and efficiency. Machine learning will become far easier, given how taxing its algorithms are on classical computers. Scientific and chemical research will boom, because quantum computers will be able to simulate complex chemical reactions that would be nearly impossible to replicate on hardware with inferior processing power. Militaries are also constantly pushing for this technology because of the impact it would have on encryption.
In fact, it will make it far easier to crack codes and decrypt secret messages because of the sheer computational power a quantum computer can bring to bear at any given point. However, this could be a double-edged sword, as financial, e-commerce and electronic security are almost solely based on encryption. In telecom, the encryption that MNOs and applications apply to messaging, calling and other VAS would be in danger of being broken, exposing all the information in their databases. But the advantages of this incredible technological advancement far outweigh the disadvantages, which can surely be worked around. In fact, the true emergence of practical quantum computing is only a matter of time, especially with the giants of the tech world continuously pouring more and more investment into it. Worrying about the detriments of innovations like this hinders our progress towards efficiency; this is why we must accept them, work on making them secure and accessible, and keep moving forward in order to push the limits of the telecom industry. The more investment that flows into this domain, the more the ever-growing field of telecommunications will flourish. After all, time is money – and quantum can create A LOT of time.
Awareness is the foundation of personal safety and self-defense and gives you a means of safely avoiding or escaping potentially dangerous situations; it replaces false confidence based on denial with true confidence based on fact.
- Discover your strengths…emotional, psychological, and physical.
- Determine how you can enhance those strengths…then do it.
- Know your limitations and explore ways to minimize those limitations.
- Trust your instincts…your own feelings and perceptions.
- Set clear boundaries when you are feeling uncomfortable.
Be Aware of Your Environment
- Be attentive to your surroundings.
- Pick up cues that can help guide your actions.
- Stay tuned into the here and now.
- Practice this kind of awareness on a daily basis.
- Practice basic safety strategies in your home, car, office, or in public.
Awareness alone may not guarantee your personal safety, but it can give you some degree of control over potentially dangerous situations.
Basic Safety Strategies
The following safety strategies are not absolutes but suggestions about precautions you can take to make yourself and your environment safer.
At Home
- Never indicate that you are home alone.
- Always ask who is at the door before opening it: use a peephole rather than a chain lock. Ask for identification and call the company to verify.
- If someone comes to the door asking to use your phone, ask him to stay outside while you place the call.
- Teach children about answering the door and telephone safely.
- Don’t hide house keys in places they might be found.
On the Street
- If you are carrying things, try to keep one hand free.
- If you are followed by someone on foot, turn around and check, then cross the street.
- If you are followed by someone in a car, turn around and walk in the opposite direction.
- Consider carrying a whistle, shriek alarm, or other noisemaker.
- Remember, you have the right not to reply if someone asks for directions.
- At night, walk along well-lighted streets, staying near the curb unless a car pulls up.
Driving a Car
- Keep all doors locked and the windows rolled up as far as is comfortable.
- If your car breaks down, turn on the emergency flashers, lift the hood, or place a “Call Police” sign in the window; stay in your car with the door locked until the police arrive.
- If you are followed while driving, go to the nearest police or fire station, or an open store; never pull over or drive home.
- Park in well-lighted areas and always lock the car when you leave it.
- Check around and inside the car as you approach it.
- Carry your keys in your hand, ready to use. If there is a parking attendant, only give him/her your ignition key.
In Elevators and at Work
- If you are uncomfortable about getting on an elevator with a lone man or group of men, wait for the next one. If you are made uncomfortable once on the elevator, get off at the next floor.
- Know the routes of escape in your work area.
- If you work late, find out who else is in the building; when you leave, ask someone (perhaps a security guard) to accompany you to your car.
Researchers from Turkey’s Izmir Institute of Technology have demonstrated a novel method to levitate cell cultures and study their assembly into 3D structures. Most cells are weakly repelled by magnetic fields, a property which the team, led by Hüseyin Cumhur Tekin and Engin Ozcivici, exploited to suspend a culture in a specially-designed medium. The researchers then used this technique to demonstrate how breast cancer cells and bone marrow stem cells form complex 3D structures without the mechanical effects of gravity. The discovery allows for real-time imaging of cultures, is low-cost, non-toxic, and could be applied in multiple research fields. Currently established levitation techniques do not accurately model microgravity, making the team’s innovation a door-opener to future cellular dynamics research in a way more reflective of true microgravity conditions. Read the full article at Scientific Reports. Poster image: Ovarian carcinoma cell cultures aboard the International Space Station in 2001, part of NASA's investigations into tumor cell growth and cancer-related proteins. Image courtesy of NASA.
The blood glucose levels are maintained within a narrow range by two important regulating hormones – insulin and glucagon. Other factors also influence blood glucose levels, but the action of these two hormones has the most profound effect. Simply put, the body raises the blood glucose level when it falls below the normal range, mainly through the action of glucagon, and lowers the blood glucose level when it rises too high, through the action of insulin. Both these hormones are primarily secreted by the pancreas – the alpha cells (glucagon) and beta cells (insulin) of the islets of Langerhans. Other hormones like epinephrine, growth hormone and cortisol also influence the blood glucose levels but usually act in severe or prolonged hypoglycemia (low blood glucose levels). These hormones cannot regulate the blood glucose levels to the same extent as the pancreatic hormones, glucagon and insulin. Digestive hormones like somatostatin can also affect blood glucose levels by delaying gastric emptying and reducing absorption of glucose in the small intestine.
The liver plays an important role in glucose regulation. Insulin causes almost two-thirds of the excess glucose in the blood stream to be stored in the liver. Glucagon stimulates glycogenolysis, in which the liver releases glucose stored as glycogen, and gluconeogenesis, in which the liver produces glucose from fats and proteins.
Importance of Normal Blood Glucose Levels
Glucose is used by all cells in the body to produce energy and maintain life-sustaining processes. While the body’s cells can use fats and protein for energy production if necessary, the brain cells require glucose to continue functioning. Other cells, like those in the eye and gonads, are also dependent on sufficient levels of glucose in order to continue functioning as normal. On the other hand, high levels of glucose in the blood (hyperglycemia) and tissue spaces can hamper normal cell functioning and disrupt homeostasis. Various cells are damaged by high levels of glucose, and this is seen in diabetes mellitus (sugar diabetes), where long periods of elevated glucose levels damage the nerve cells and the lining of the blood vessels. This affects nerve functioning and can lead to a host of vascular diseases that can ultimately lead to death. Elevated blood glucose levels also affect the water and electrolyte balance of the body. Water is drawn out of the cells (osmosis) and high levels of glucose in the urine cause the excretion of electrolytes (osmotic diuresis). This leads to dehydration that can affect the functioning of various organs, particularly the cardiovascular system, and over time lead to death.
Range of Blood Glucose Levels
Normal Blood Glucose Levels
The body’s glucose regulating system maintains the fasting blood glucose levels within a range of 70 to 99 mg/dL (3.9 to 5.5 mmol/L). Approximately 1 hour after eating a meal, the blood glucose levels will rise to a maximum of 120 to 140 mg/dL (6.6 to 7.8 mmol/L), and this gradually settles by 2 hours. Normal blood glucose levels will not exceed 140 mg/dL after eating, even if you have eaten a large meal or sugar-laden foods.
Low Blood Glucose Levels
Blood glucose levels below 54 mg/dL (3 mmol/L) are indicative of hypoglycemia if they meet the criteria outlined in Whipple’s triad. Read more on What is Low Blood Sugar?
Diabetes Blood Glucose Levels
In pre-diabetes (impaired glucose tolerance) and diabetes mellitus (sugar diabetes), the fasting and post-prandial (after eating) blood glucose levels differ.
These higher-than-normal levels are not themselves normal; because glucose tolerance/regulation is impaired, the levels are simply not sustained within the normal range. The fasting blood glucose levels in pre-diabetes vary but do not exceed 125 mg/dL (6.9 mmol/L). After eating, the blood glucose levels rise but remain below 200 mg/dL (11.1 mmol/L). Levels above this value are indicative of diabetes mellitus. In diabetes mellitus the fasting blood glucose levels may vary between 70 and 140 mg/dL (3.9 to 7.8 mmol/L). Values rise significantly after eating and may exceed 200 mg/dL (11.1 mmol/L). Higher levels may be seen in poorly managed or uncontrolled diabetes mellitus.
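The ranges above are quoted in both mg/dL and mmol/L. As a rough illustration of how the two scales relate (this sketch is not part of the original article), the following short C program converts readings using the factor of roughly 18 mg/dL per mmol/L implied by the paired figures quoted above (for example, 99 mg/dL is about 5.5 mmol/L and 200 mg/dL is about 11.1 mmol/L):

#include <stdio.h>

/* Approximate conversion factor: about 18 mg/dL of glucose per 1 mmol/L,
   consistent with the paired values quoted in the text above. */
#define MG_DL_PER_MMOL_L 18.0

double mgdl_to_mmoll(double mgdl) { return mgdl / MG_DL_PER_MMOL_L; }
double mmoll_to_mgdl(double mmoll) { return mmoll * MG_DL_PER_MMOL_L; }

int main(void)
{
    /* Threshold values mentioned in the article, in mg/dL. */
    double readings_mgdl[] = { 70.0, 99.0, 125.0, 140.0, 200.0 };
    int n = sizeof readings_mgdl / sizeof readings_mgdl[0];

    for (int i = 0; i < n; i++) {
        printf("%6.1f mg/dL = %4.1f mmol/L\n",
               readings_mgdl[i], mgdl_to_mmoll(readings_mgdl[i]));
    }
    return 0;
}

Running it prints the familiar pairs (99 mg/dL as 5.5 mmol/L, 125 mg/dL as about 6.9 mmol/L, and so on), which is a quick way to check that a reading quoted in one unit matches the range given in the other.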
Acadian, Antebellum, Architecture, Ascension Parish, Cajun, Donaldsonville, Greek Revival, Historic Preservation, Louisiana, Louisiana history, Mississippi River, New Orleans, Plantations, The Cabin Restaurant, The Cajun Village
Louisiana has particularly interesting historical roots. The annals of our state’s history are chock full of exciting, historically rich tales that always seem to engage students of its heritage. Luckily, many early exploits were put down on paper and have survived into the 21st century. But documents only chronicle part of Louisiana’s past; the structures that have been erected throughout the state speak to the progress of Louisianans as much as the documentary evidence does. Ascension Parish is a prime example of that. The Mississippi River flows through the heart of Ascension Parish, and so too has Louisiana history, creating a legacy that has proven fascinating for generations of visitors.
The first Europeans to set foot in Ascension Parish, in the area now known as Donaldsonville, were led by Hernando de Soto in 1542. After the initial de Soto quest, Europeans were largely absent from Ascension Parish and the Louisiana territory until the French explorer, LaSalle, sailed down the Mississippi in 1682. He named the area Lafourche des Chetimaches, or “the fork in the river of the Chitimachas Indians”. Hence the name “Bayou LaFourche”. It was from this point forward that the built environment in Ascension Parish and Louisiana began to take shape.
The French originally colonized Louisiana in 1718, but eventually gave it to Spain in 1763. In 1766 the Spanish government issued land grants to Acadians from Nova Scotia and to Canary Islanders. This part of Ascension and St. James Parishes is what came to be known as the Acadian Coast, giving us our noted Acadian heritage. The Acadians, or “Cajuns”, were responsible for constructing most of the vernacular structures in our parish, simple cabins made of local materials. However, the grand plantation homes that are spread along the Mississippi River today weren’t constructed until the 19th century.
In the 1820s, Ascension Parish was known as the Gold Coast due to its prosperous farming operations. Ascension was home to some of the most productive sugar plantations in Louisiana, evidence of which still exists on River Road. The sugar barons erected mansions such as Ashland, Bocage, Hermitage (the state’s earliest-known Greek Revival mansion), Tezcuco and Houmas House. These were and are grand examples of Louisiana’s antebellum plantation homes.
But those aren’t the only buildings that tell the story of Ascension Parish. Numerous vernacular structures dot the landscape of Ascension, with a few large concentrations at the Cajun Village and The Cabin Restaurant. Donaldsonville also has quite a large collection of historic structures; its historic district is the second largest in Louisiana, behind New Orleans. In 1825, Donaldsonville was selected as the capital of Louisiana. However, it served as the seat of government for only one year, in 1830-31, before the capital was moved back to New Orleans on January 8, 1831. Donaldsonville is also home to the Civil War era Fort Butler, which stands as a distinct reminder of the end of the old antebellum days.
All of these buildings serve to tell the story of the people who inhabited them and who constructed them. In order to understand Louisiana, one must understand the role of architecture in its development.
If you are a student of history, a proponent of historic preservation, or just a curious person in general, come visit Ascension Parish’s plethora of historic buildings. It will be a boon to your knowledge, and a trip you’ll never forget!
The Revised Common Lectionary is a three-year cycle of Holy Scripture readings that follow the Sundays, festivals, and seasons of the Christian liturgical year. It was compiled in 1992 by the ecumenical Consultation on Common Texts to provide a balanced scriptural guide for weekly worship that ensures exposure to the main themes of Christian faith and worship. It is ecumenical in nature and overlaps heavily with the Roman Catholic lectionary. Four texts are assigned for each Sunday and festival:
- The first reading usually comes from the Old Testament but is replaced by a reading from Acts during the season of Easter
- A Psalm, typically read or sung in response to the first reading
- A reading from an Epistle or other New Testament writing
- A reading from a Gospel (Matthew, Mark, Luke, or John).
The three-year lectionary cycle (A, B, and C) focuses on different portions of the Gospels in each year:
- Matthew in Year A
- Mark in Year B
- Luke in Year C
- John featured at certain times in each year
What is Video Modeling?
Video modeling is a fun instructional technique where the child watches a video of himself or herself doing a desired behavior or skill. For example, let’s say Johnny always throws a fit on the way from the bus to his classroom at the beginning of each day. You could use video modeling to show Johnny a video of himself walking all the way from the bus to the classroom without having a meltdown and without any assistance. You may be asking, how is this possible? If I could take a video of Johnny doing this, I wouldn’t need the video! Well, the power of technology has allowed us to create these videos through the magic of editing, and I’m going to show you how! You take a video of a child doing a skill and you give him as much help as you need to so he can get through it. Then, you edit out all of the parts where you had to help him. The end result is a video that looks like he did it all by himself even if he didn’t.
Does Video Modeling Work?
The American Speech-Language-Hearing Association (ASHA) website has links to a systematic review where researchers looked at all of the research on video modeling for children with autism. Here’s what they found: “Overall, results of the review indicated positive gains in social-communicative skills, functional skill, perspective-taking skills, and problem behavior.” However, the authors cautioned that “A small pool of studies was reviewed, and treatment effects were not measured. Consequently, it is unclear at this time whether video modeling is more or less effective than other models of instruction for learners with autism, and too soon to make detailed recommendations for practitioners” (p. 41). Furthermore, video modeling works the same way that social stories work. (For more information on using social stories with children with language disorders, click here!) And social stories have been shown to be helpful for children with autism, learning disabilities, cognitive impairments, and language impairments.
What Can You Use Video Modeling For?
Video modeling is used to teach a child to do a new skill or to perform a current skill at a higher level or with fewer supports. This can be applied to behaviors that you would like the child to learn as well as behaviors that you would like the child to stop. Here are some examples of challenges that can be addressed using video modeling:
- How to help a child get through transitions without meltdowns
- How to help a child learn to use communication to get his needs met instead of inappropriate behaviors (like saying hi instead of hitting a kid)
- How to help a child participate in a routine activity independently
- How to help a child respond to social situations appropriately, such as responding to others
- How to help a child use language appropriately in social situations
How to Use Video Modeling
Step One: Choose a Target
The first thing you will need to do is choose a target behavior. What skill do you want the child to learn how to do? Map out exactly what you will want the child to do in steps.
Step Two: Video Tape the Target
Next, you will want to video tape the child doing that skill. Since the child cannot actually do the skill yet (or at least not independently), you will need to stage the situation so that he is most likely to do it on his own and then provide prompts, guidance, and support as needed. You may have to break the overall skill down into smaller steps and record each step separately.
Break it down into the smallest parts necessary to make it look like the child is doing it on his own. For example, if the target behavior is cleaning up the play room, you may set it up so that the child is holding a toy and the basket is right in front of him, then you video tape him putting that one toy into the basket. Then, you have to take another video of him putting a second toy in the basket. Don’t expect him to do the whole skill in one go. I recommend that you video tape the entire process because we will be editing it down later. I use my smart phone to video tape but you could use any device that you have to record video. Step Three: Edit the Video Down This is probably the intimidating part for you. Well don’t worry, it doesn’t need to be! First, upload the video to your computer. If you’re using your camera, you should have a cord that will connect to your computer and allow you to upload. If you used a smart phone, you should be able to email the video to yourself and then open it on a computer. If you used an iPhone or iPad to record the video, you can download the iMovie app from the app store and edit it right on your device, no need to transfer to your computer unless you really want to. Once you get it on the computer, open the video in whatever video editing software your computer already has. Most computers come with some basic video editing software. If you cannot find any such software on your computer, you can search the internet for “free video editing software”. Just make sure what you download is legitimate and not a scam. I use Camtasia Studio but that is a paid video editing software. Once you’ve put the video into your video editing software, you’ll want to edit out (delete) any parts where it looks like you’re helping the child. You should be left with a video that just includes the parts where the child did the skill independently. It may be a bit choppy but don’t worry, the child won’t mind. He’ll be too focused on the fact that he’s watching himself do something awesome! Here’s a video that will show you how to edit down the videos: Would you like the step-by-step instructions for how to edit down a video to use for video modeling? Click this button to download my free PDF guide to creating video modeling videos: How to Use Video Modeling in Instruction Once you get your video done, it’s time to show it to the child. Step One: Show the Video at a Different Time Show the child the video during a low-stress time that is not near the time that you want the target skill to occur. Show the video and then talk to the child about what happened in the video. Have the child watch the video as many times as he wants. Show the video to him several different times (different days) before attempting the actual routine with it. Step Two: Show the Video Right Before the Expected Behavior Once the child is familiar with the video, you’ll want to try to use it to make that actual routine or activity go smoothly. Show the child the video right before you are going to expect him to do it. For example, if the video is about getting the child off the bus, you will get on the bus with the video and show him the video before you unbuckle him. Step Three: Show the Video While the Child is Doing the Behavior As soon as you’ve shown the video to the child, tell him that it’s time to do what he saw in the video. Help him go through the steps just as they were presented in the video. 
If it’s at all possible, let him watch the video of him doing the skill while he’s actually doing it. In our example above, this would look like you holding the iPad with the video playing in front of him as he walks down the hallway toward his classroom. Step Four: Reinforce and Practice! Reinforce any success that the child had during the activity and then practice, practice, practice. You will want to have the child continue watching the video multiple times per day as well as right before and during the activity. Keep doing this until the child can complete the behavior successfully. There you have it! That’s all the steps to use video modeling. Just like any therapy technique, this isn’t guaranteed to work for every child and on every behavior, but it’s definitely a great tool to try, especially if you have a low functioning kiddo who isn’t responding well to more traditional instruction. If you’d like to download my PDF guide to creating and using Video Modeling videos, please click the button below: More Resources for Speech-Language Pathologists: Looking for more therapy ideas and resources to help you provide the BEST services to your clients? Join us in The SLP Solution, our membership program for speech-language professionals! Inside the membership, you’ll find: - Step-By-Step Guides for teaching a variety of speech/language/communication skills - Pre-Made Worksheets and Therapy Activities for hundreds of different topics - Training Videos for dealing with difficult disorders or problems - Answers to Your Questions in our exclusive SLP community - Tools and Resources to help you with your paperwork and admin tasks - Continuing Education through our monthly webinars and webinar recordings To join us in the full SLP Solution, or to snag a free membership, click on the button below!
To convert a value in knots to the corresponding value in feet per second, multiply the quantity in knots by 1.6878098570997 (the conversion factor).
Feet/Second = Knots x 1.6878098570997
The conversion factor from knots to feet per second is 1.6878098570997. To find out how many feet per second correspond to a given number of knots, multiply by the conversion factor or use the Knots to Feet/Second converter above.
The knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph. The ISO standard symbol for the knot is kn. The same symbol is preferred by the IEEE; kt is also common. The knot is a non-SI unit that is "accepted for use with the SI". Worldwide, the knot is used in meteorology, and in maritime and air navigation—for example, a vessel travelling at 1 knot along a meridian travels approximately one minute of geographic latitude in one hour. Etymologically, the term derives from counting the number of knots in the line that unspooled from the reel of a chip log in a specific time.
The foot per second (plural feet per second) is a unit of both speed (scalar) and velocity (vector quantity, which includes direction). It expresses the distance in feet (ft) traveled or displaced, divided by the time in seconds (s, or sec). The corresponding unit in the International System of Units (SI) is the metre per second. Abbreviations include ft/s, ft/sec and fps, and the rarely used scientific notation ft s⁻¹.
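As a small, self-contained illustration of the conversion described above (this example is mine, not part of the original text), the following C snippet applies the stated factor in both directions:

#include <stdio.h>

/* Feet per second per knot, as given above:
   1 knot = 1852 m per hour, and 1 ft = 0.3048 m. */
#define FTPS_PER_KNOT 1.6878098570997

double knots_to_ftps(double knots) { return knots * FTPS_PER_KNOT; }
double ftps_to_knots(double ftps)  { return ftps / FTPS_PER_KNOT; }

int main(void)
{
    double speeds_kn[] = { 1.0, 10.0, 25.0, 100.0 };
    int n = sizeof speeds_kn / sizeof speeds_kn[0];

    for (int i = 0; i < n; i++) {
        printf("%6.1f knots = %9.4f ft/s\n",
               speeds_kn[i], knots_to_ftps(speeds_kn[i]));
    }
    return 0;
}

For example, 10 knots works out to about 16.88 ft/s, and dividing a speed in ft/s by the same factor recovers the value in knots.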
Baby pictures are often boring to everyone but the parents who show them. But if a baby picture of the universe doesn’t inspire your imagination, what can? The European Space Agency recently released the first detailed all-sky images taken from its Planck satellite mission, the latest satellite to probe the “afterglow” of the Big Bang. This is the radiation coming toward us from all directions from a time when the universe was only 380,000 years old, just after it had cooled sufficiently so that the protons in the hot gas could capture electrons to form neutral hydrogen and the universe then became transparent, and the ambient thermal background of radiation could travel unimpeded to us today. In the intervening 13.7 billion years or so this radiation has cooled to close to 3 degrees above absolute zero and comes to us in the form of microwaves. In fact for those of us old enough to remember television before cable, when the TV stations went off the air and the screen filled with static, about 1 percent of the static visible on the screen was due to this radiation from the Big Bang. In spite of this, this signal actually remained hidden until it was accidentally found in 1965 by Robert Wilson and Arno Penzias, who later shared the Nobel Prize for its discovery, which confirmed the Big Bang origin of our universe. Because this radiation is so cold, it took almost 30 years before a satellite was launched into space by NASA — to get away from the warm background coming from the Earth and the absorption of radiation in our atmosphere — with a sensitivity great enough to actually image this signal. George Smoot, who along with John Mather was awarded a Nobel Prize for this work, exclaimed that looking at this image was like staring at the “face of God.” This hyperbole can perhaps be forgiven, given the excitement of discovery, but any structure Smoot may have claimed to see was not unlike searches for images of animals in the clouds. The sensitivity of the experiment at the time was barely enough to separate the signal from other random backgrounds in the detector. Another 20 years and now Planck has produced an exquisite picture whose fine-grained detail displays “hotspots” and “coldspots” in this background over the whole sky that represent variations in temperature of less than 1/10,000th of a degree from place to place. These miniscule fluctuations nevertheless reflect small excesses of matter that would later grow due to gravity to form all the structures we observe today — galaxies, stars, planets, and everything they house. Of more interest is the question of where these lumps of matter and energy came from. This is perhaps the most exciting question of all, because we currently have many reasons to think they hearken back to a time much earlier than 380,000 years after the Big Bang, and may have been created a millionth of a billionth of a billionth of a billionth of a second into the history of our universe. In this case they represent a signal from almost the very beginning of time itself! As long as humans have been human, we have been fascinated by cosmic questions. How did the universe begin? Where did we come from? Are we alone? Attempting to answer these questions may not produce a better toaster or a faster airplane, but it is nothing short of remarkable that modern science is revealing facets of our universe that are changing our perspectives on such foundational cosmic questions. 
Like great art, music, and literature, science changes the way we think about ourselves and our place in the cosmos. That, to me, is as important as the remarkable technological advances that science has brought about that make our modern world possible. It is too early to assess what new changes will result from the Planck data. When compared with earlier results they seem to suggest a slightly older universe, maybe 13.8 billion years old instead of 13.7 billion years old. And there are intriguing anomalies, as there usually are in data at the edge of our detection abilities. The northern hemisphere of the sky apparently has some larger hotspots and coldspots than the southern hemisphere, and there is one particularly large coldspot in the north that seems out of place. Is this significant, representing some new properties of the early universe, or is it just a cosmic accident? If history is any guide, most of these anomalies will disappear. It is an unfortunate facet of science reporting that it isn’t often made clear that most anomalies in experiments tend to go away, just as most theoretical ideas turn out to be wrong. Instead of attributing significance to potentially strange results, it is the business of science to try and prove them wrong before we blindly move forward. Skepticism is the business of the day, and it is wise to remember this next time you read an astounding discovery claimed in the press. Nevertheless, when a new window on the universe is opened I am usually surprised when I am not surprised. The imagination of nature far exceeds the human imagination, which is why, if we want to keep on learning more about nature and want to come closer to addressing those cosmic questions, we will keep on having the courage and fortitude to probe nature on ever finer and ever grander scales. I expect we will never achieve answers to all the fundamental questions we may have; each new result breeds new questions, after all. But to live in times such as these, when we are plausibly exploring realms of nature that previously may have been thought to be in the domain of philosophy or theology is for me unbelievably tantalizing. Looking out at the vast universe, it appears that our own existence in the cosmos may be more capricious and insignificant than we may have thought. But this should not be cause for despair, but rather a source for awe and wonder. Truth remains stranger than fiction. New baby pictures of the universe can remind us of this fact, and can also help prepare us to be amazed. Lawrence M. Krauss is director of the Origins Project at Arizona State University, and the author, most recently, of “A Universe from Nothing.”
As parents, we have the responsibility to teach our kids how to interact with others effectively. Social skills are key to navigating through life and can be an incredible asset for future success. The link below has 101 social skill activities!
FOOD FOR THOUGHT
The Real and Lasting Impacts of Social-Emotional Learning with At-Risk Students
Finding a way to reach at-risk students who are struggling in various ways can be difficult, but social-emotional learning can open doors. Copious research has shown that the impact of social-emotional learning (SEL) runs deep. SEL programs are shown to increase academic achievement and positive social interactions, and decrease negative outcomes later in life. SEL helps individuals develop competencies that last a lifetime. The five components of social-emotional learning are:
- self-awareness
- self-management
- social awareness
- relationship skills
- responsible decision-making
“When students are struggling and school performance is poor, they are more likely to find school and learning as a source of anxiety, manifesting in diminished self-efficacy, motivation, engagement, and connectedness with school,” says Dr. Christina Cipriano. Therefore, when it comes to our nation’s most at-risk students, receiving SEL training in the classroom can make a huge difference in preparing them for a healthy and successful life well beyond school.
One of the most extensive studies of the long-term impacts of SEL was completed by researchers from the Collaborative for Academic, Social, and Emotional Learning (CASEL), Loyola University, the University of Illinois at Chicago, and the University of British Columbia. Their work reviewed 213 studies on the impacts of SEL. According to CASEL, they found that students who were part of SEL programs showed 11 percentile-point gains in academic achievement over those who were not a part of such programs. Compared to students who did not participate in SEL programs, students participating in SEL programs also showed:
- Improved classroom behavior
- An increased ability to manage stress and depression
- Better attitudes about themselves, others, and school
These student perceptions coupled with developed emotional intelligence lead to long-term academic success. SEL has the ability to give at-risk students the tools they need to overcome obstacles and plug into their education for long-term achievement.
Positive life outcomes
A 2015 study published in the American Journal of Public Health looked at students 13 to 19 years after they received social skills training through the Fast Track Project. Fast Track, which was run in four communities: Durham, Nashville, rural Pennsylvania, and Seattle, describes its work as “based on the hypothesis that improving child competencies, parenting effectiveness, school context, and school-home communications will, over time, improve psychopathology from early childhood through adulthood.” The study also found that teaching social skills in kindergarten leads to students being less likely to live in public housing, receive public assistance, or to be involved in criminal activity. “At age 25, people who were assigned to the program are happier, have fewer psychiatric and substance abuse problems, are less likely to have risky sex, and are arrested less often for severe violence and drug-related crimes,” according to Child Trends. Early interventions of SEL show outcomes far into adulthood, reducing the life risks for impoverished and at-risk students.
Researchers have also found that SEL reduces aggressive behaviors in the classroom, freeing teachers and students to focus more on learning. Research shows that students who receive SEL training are 42% less likely to be involved in physical aggression in schools. Mindfulness practices, a staple of SEL, were shown to reduce reactive stress responses in students. One study examined breathing techniques as a means to calm students with behavioral and emotional difficulties. The study revealed that mindfulness exercises can have a noticeable and positive impact on reducing reactive behavior and aggression. Research also shows that children with a stronger social-emotional skill set were less likely to experience health problems, struggle with substance abuse, or engage in criminal activity as they got older (see “A gradient of childhood self-control predicts health, wealth, and public safety”). Additional research further illustrates how early education programs promote social mobility within and across generations, help prevent obesity, reduce health care expenditures, and lead to an overall higher quality of life.
Hans A. Krebs first proposed this series of reactions for the breakdown of pyruvate in the presence of oxygen; the cycle is therefore called the Krebs cycle. It is also called the citric acid cycle because of the formation of an important intermediate, citric acid. The Krebs cycle takes place in the matrix of mitochondria.
Entry of Pyruvate into Mitochondria
The pyruvate produced in the cytosol enters through the otherwise impermeable inner membrane of the mitochondria with the help of a translocator (the pyruvate translocator) that catalyzes the exchange of pyruvate and OH⁻ across the inner membrane.
Acetyl Coenzyme A (Co A) Formation
The pyruvate produced in glycolysis does not enter the Krebs cycle directly; rather, it is decarboxylated in the presence of oxygen in a coordinated series of reactions, catalyzed by a multicomponent complex of several enzymes, to a 2-C compound, acetyl Co A (CH3CO–CoA). The reaction requires the participation of several coenzymes including NAD+ and coenzyme A (Co A). The enzyme catalyzing the reaction is pyruvic acid dehydrogenase. NADH2 is generated during the reaction. The overall reaction can be summarized as follows:
Pyruvic acid + Co A + NAD+ → Acetyl Co A + CO2 + NADH2
Biochemistry of Krebs Cycle
The Krebs cycle involves the following steps:
- Formation of Citric Acid: The Krebs cycle starts with condensation of the 2-C acetyl group (CH3CO) in acetyl coenzyme A with oxaloacetic acid (a 4-C acid produced during the cycle) and water to yield citric acid (a 6-C tricarboxylic acid). The enzyme citrate synthase catalyzes this step.
- Isomerization of Citric Acid to Isocitrate: Citric acid undergoes isomerization to isocitric acid by the enzyme aconitase.
- Oxidative Decarboxylation: The next two steps involve successive oxidative decarboxylations, each of which produces a molecule of NADH2 and releases a molecule of CO2. Isocitric acid is converted to alpha-ketoglutaric acid by isocitrate dehydrogenase. Alpha-ketoglutaric acid produces succinyl Co A in the presence of Co A and alpha-ketoglutaric acid dehydrogenase.
- Regeneration of Oxaloacetic Acid: Up to this point three molecules of CO2 have been produced for each pyruvate, so complete oxidation of the pyruvate carbon skeleton has actually taken place. The remainder of the Krebs cycle involves conversion of succinyl Co A to oxaloacetic acid to allow continued operation of the cycle. The following steps are involved in the regeneration of oxaloacetic acid.
Steps Involved in Regeneration of Oxaloacetic Acid
- Succinyl Co A, under the catalytic activity of succinyl Co A synthetase, is converted to succinate. Coenzyme A is released and ATP is generated.
- The succinate is oxidized to fumarate (fumaric acid) by succinate dehydrogenase. The electrons removed from succinate reduce FAD to FADH2.
- Fumaric acid is hydrated to produce malate (malic acid) by fumarase.
- The malate is oxidized by malate dehydrogenase to oxaloacetic acid. A molecule of NADH2 is produced during this step.
The oxaloacetic acid produced is now able to react with another acetyl Co A and continue the cycle.
Products of Krebs Cycle
The stepwise oxidation of each pyruvate in the Krebs cycle gives rise to the following products:
- Three molecules of CO2.
- Four NADH2 and one FADH2 molecules, which store free energy released during these oxidations.
- One molecule of ATP produced through substrate-level phosphorylation.
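To keep the stoichiometry straight, here is a small bookkeeping sketch (my own addition, not part of the original notes) that takes the per-pyruvate tally listed above and scales it to one molecule of glucose, which glycolysis splits into two pyruvate:

#include <stdio.h>

/* Products per pyruvate, as listed under "Products of Krebs Cycle" above
   (includes the NADH2 from the pyruvate dehydrogenase step). */
struct krebs_products {
    int co2;
    int nadh2;
    int fadh2;
    int atp;   /* from substrate-level phosphorylation */
};

int main(void)
{
    struct krebs_products per_pyruvate = { 3, 4, 1, 1 };
    int pyruvate_per_glucose = 2;   /* glycolysis yields two pyruvate per glucose */

    printf("Per glucose: %d CO2, %d NADH2, %d FADH2, %d ATP (substrate level)\n",
           per_pyruvate.co2   * pyruvate_per_glucose,
           per_pyruvate.nadh2 * pyruvate_per_glucose,
           per_pyruvate.fadh2 * pyruvate_per_glucose,
           per_pyruvate.atp   * pyruvate_per_glucose);
    return 0;
}

This prints 6 CO2, 8 NADH2, 2 FADH2 and 2 ATP per glucose from the pyruvate-oxidation and Krebs cycle steps alone, i.e. not counting the ATP and NADH2 produced earlier in glycolysis.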
HTML: The Hyperlink Tag
The Hyperlink Tag
Click here to download the Lesson 1 HTML template, which you will need for this tutorial. This tutorial teaches the basics of the <a> tag. An <a> tag can surround text, images, or other inline elements (elements that can occur within a paragraph of text). Here is an <a> tag surrounding text:
<a href="index5.html" title="XoaX.net"> XoaX.net Video Tutorials. </a>
Here is an <a> tag surrounding an image:
<a href="http://xoax.net" title="XoaX.net"> <img src="xoaxdotnet.png" border="0"/> </a>
There are 3 attributes on an <a> tag that are important: href, title, and target.
The title attribute is not required, but it is good practice. If you have an <a> tag that surrounds an image, then the title value will appear as a tool tip on top of the image when you hover your mouse over it.
The target attribute is not required either, but it's helpful if you want to change how the browser loads the URL in the href value of your link. If you don't put the target attribute in your <a> tag, then web browsers will open the link in the same tab. If you use the target attribute and put "_blank" as its value, like this:
<a href="index5.html" target="_blank" title="XoaX.net"> XoaX.net Video Tutorials. </a>
Then web browsers will open the link in a new tab or window, depending on how you have the browser options set.
The href attribute is required, and its value can be either a relative or absolute URL. The href value tells the browser where to take the user when they click on the hyperlink.
URLs in an <a> Tag
We'll use this image as an example of our website. Let's assume that the red and blue folders and the redfile.html are all in the domain root, which is where your main website page (i.e., index.html) lives. The href attribute of the a tag can take either relative or absolute URLs. An absolute URL includes the http:// and the full domain name before the full path to the file. Relative URLs assume the http:// and the full domain name and start at the domain root folder.
For example, let's say we're in the yellow folder on the website. And let's say we have an a tag in the yellowfile.html that is linking to the whitefile.html in the same folder. We can put either of these URLs in the href value:
<a href="./whitefile.html">White File</a>
<a href="whitefile.html">White File</a>
The "./" tells the operating system (OS) of the machine to start in the current folder. The OS will assume it if we don't put it there, but it's best practice to always include it.
As another example, let's assume we have an a tag in the whitefile.html and we want to link to the redfile.html. For this, we have to go back up 2 folders from where we are. We write the href value like this:
<a href="./../../redfile.html">Red File</a>
This relative URL starts in the current folder, then the "../" tells the OS to go back up one folder, out of the yellow folder, and another "../" tells the OS to go back up one more folder, out of the orange folder. Now we are in the red folder and we can access the redfile.html.
For a third example, let's assume we're in the bluefile.html in the blue folder, and we want to link to the yellowfile.html in our a tag. So, we have to go back up out of the blue folder, and then into the red, orange, and yellow folders to get to our file. This is how we would write it:
<a href="./../red/orange/yellow/yellowfile.html">Yellow File</a>
R.CCR.9 — that's the ninth College/Career Readiness anchor standard within the Reading strand of the Common Core State Standards (CCSS) for ELA/Literacy — reads as follows: Analyze how two or more texts address similar themes or topics in order to build knowledge or to compare the approaches the authors take. There are two key teacherly tasks in preparing to teach this standard.
Choosing multiple related texts
For R.CCR.9, texts can be related topically or thematically. Let's look at a few examples of how this might look.
TOPICALLY LINKED, MULTI-GENRE
For example, when reading Chinua Achebe's Things Fall Apart, my ninth grade students and I also examine Rudyard Kipling's poem “The White Man's Burden,” and this coming school year I'm going to add open letters to and from King Leopold II during Belgium's imperial heyday (taken from The Human Record: Sources of Global History since 1500, by Alfred J. Andrea and James H. Overfield). Altogether, these texts will consist of various genres, and some of them will contain starkly contrasting themes, but they will all topically deal with colonization and imperialism around the turn of the 20th century.
TOPICALLY LINKED ARTICLES
All right, so maybe their slogan is a bit over the top, but this is still my #1 resource for finding R.CCR.9 articles. For finding multiple related articles about the news and issues of today, I haven't found a better resource than The Week. This news magazine is totally free and totally awesome. I've heard it described as the Reader's Digest version of the Wall Street Journal, and I can see why: every day, The Week posts articles that summarize the various takes on a given hot topic. Some examples from today's articles:
- 4 perspectives on the disappointing June jobs report for the USA (great for an economics class)
- Various speculations on whether “iGlasses” will beat out Google's computer specs (great for use in a technology class)
- Various takes on 50 Cent's offensive tweets (great for current events classes, American culture discussions, or simply for a quickie debate in an ELA class)
Basically, you could call R.CCR.9 the anchor standard proudly sponsored by TheWeek.com. (And no, they don't pay me to say that; they should, but they don't).
THEMATICALLY LINKED, MULTI-GENRE
But let's say you're interested in having students explore thematic relationships between texts. I did something like this last year when I taught John Knowles' A Separate Peace. Our guiding question for our reading of the book was, “Is Gene evil, or did he simply do an evil thing?” This got us thinking and talking about the theme I wanted students to explore: are humans basically evil, or are they basically good people who do evil things? Unfortunately, this unit fell right before winter break, so I did not have time to develop it as fully as I would have liked. However, if I had, here are some ideas of texts that could have linked thematically with our study of A Separate Peace:
- Looking at the writings of various Enlightenment thinkers on human nature
- Looking at an article or excerpts from Freud dealing with the id, ego, and superego
- Studying a current events case that relates to our thematic driving question, such as the George Zimmerman/Trayvon Martin tragedy
Choosing the purpose of your text comparisons
The other half of this standard deals with two purposes for analyzing multiple texts: either to build knowledge or to compare the approaches authors take.
TO BUILD KNOWLEDGE
In the colonialism/imperialism example above, my purpose for analyzing all of these related texts has nothing to do with the authors' approaches, but it has everything to do with helping my students build knowledge (remember, building knowledge is one of the six big “shifts” that the CCSS calls for). I love that the CCSS makes the vital connection between reading multiple sources and building knowledge. This is the only way my students can truly understand the complexities and the dark underbelly of colonialism; they need to get elbow-deep in the grime of multiple, conflicting texts.
TO COMPARE AUTHORS' APPROACHES
On the other hand, it's also valuable to compare how different authors approach a given topic. This looks like a job for — insert superhero music — TheWeek.com! Because journalists provide such a wide array of tacks toward the same topic, articles are a great method for achieving this purpose. I also fantasize about having my students read not only Ray Bradbury's Fahrenheit 451, but also Orwell's 1984. It would be amazing to have students compare each author's approach to satirizing the direction in which they saw their worlds moving.
Just get started
As I contemplate each anchor standard, I keep coming away with the strong impressions that, first, these are doable and valuable, and, second, they aren't difficult to start trying. That's the key — just start.
Pune (ISJ) - A group of Indian scientists has predicted what the Sun's atmosphere, the corona, will look like on 21 August, when it will be visible during the total solar eclipse occurring that day. The corona is still not fully understood. Solar storms originate in the corona and affect the space weather near the Earth, where they can be dangerous for satellites and telecommunication. Hence, it is important for us to understand the corona, and comparing this prediction with actual observations during the eclipse will tell us how well we understand it.
In what has come to be referred to as the Great American Eclipse, a total solar eclipse will sweep across the United States on 21 August 2017. From ancient times to the present, human beings have viewed such magnificent astronomical events with a mixture of awe and curiosity. Given the scientific and technological advances made in astronomical observations in modern times, this eclipse will offer unprecedented opportunities for studying the Sun's million-degree outer atmosphere, known as the corona. Scientists across the US have geared up to make a diversity of scientific observations of this eclipse. A team led by Indian scientist Dibyendu Nandi from the Center of Excellence in Space Sciences India (CESSI) at the Indian Institute of Science Education and Research (IISER), Kolkata has predicted the expected structure of the solar corona which will become visible during this eclipse. The team comprises graduate and undergraduate students of IISER Kolkata as well as a scientist from Durham University, United Kingdom.
Over many decades, scientists have been puzzled by the extremely high temperature of the Sun's corona, which can be hotter than a million degrees. It is now understood to be due to the presence of the Sun's magnetic fields, although the exact process of coronal heating is still hotly debated. Magnetic field structures in the Sun's corona also generate violent solar storms that create hazardous space weather. When these storms reach the Earth, they can threaten our satellites, telecommunication and GPS networks and can even bring down electric power grids. Hence it is very important for humanity to be able to prepare for adverse space weather through prior predictions of such solar storms from the corona.
Coronal magnetic fields are notoriously difficult to measure under normal circumstances since the corona is much fainter than the disk of the Sun. This necessitates the development and use of theoretical and computational models to understand the problem of coronal heating, and in turn, the origin of solar storms and severe space weather. This India-led study and modelling that aims to predict the Sun's coronal structure is important for the world in this respect. Their prediction of the appearance of the corona during the eclipse is one of only two such predictions, highlighting the difficulty of a challenge that many groups around the world are not yet ready for.
Solar eclipses offer a way around the problem of observing the corona. In a total eclipse, when the Moon blocks out the Sun's disk, the faint light from the corona becomes visible! This allows us to see coronal structures and understand the magnetic fields that produce them. Therefore the Great American Eclipse will allow testing of our theoretical models and lead to their refinement through an assessment of what went right and what went wrong with the predictions.
The India-led team has predicted an intricate structure of the corona that is a result of computing the evolution of the magnetic fields of sunspots on the solar surface over many years. Their prediction shows lotus petal-like shapes extending from the surface in some parts of the Sun's corona, while in other parts, magnetic field lines fan out like spokes of a wheel from the Sun and into interplanetary space. Scientists expect that comparison of eclipse observations with these simulations will help them understand our star and how it influences our space environment. Source: Inter University Centre for Astronomy and Astrophysics Image credit: Inter University Centre for Astronomy and Astrophysics
Keywords in C Note: All the keywords must be written in lower case. Constants: Any value that does not change during program execution is called a constant. There are mainly two types of constants in the C language. Numeric Constants: There are two types of numeric constants. (i) Integer constant (ii) Real or floating point constant (i) Integer constant: An integer constant is a signed or unsigned whole number. Example: -25, +25 (ii) Real or floating point constant: Any signed or unsigned number with a fractional part is called a real or floating point constant. A real constant can be written in decimal or exponential form.
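A minimal C sketch of these two kinds of numeric constants; the variable names and printed values are illustrative only and not part of the original notes:

```c
#include <stdio.h>

int main(void)
{
    /* Integer constants: signed or unsigned whole numbers */
    int debit  = -25;
    int credit = +25;

    /* Real (floating point) constants, in decimal and exponential form */
    double decimal_form     = 3.14159;
    double exponential_form = 2.5e-3;   /* 2.5 x 10^-3 = 0.0025 */

    printf("%d %d %f %f\n", debit, credit, decimal_form, exponential_form);
    return 0;
}
```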
Written by Debra Morrall Our children are the future of our world, but pollution could be a distinguishing factor in changing the health of a generation, forever. We don't yet fully know the effects of long term exposure to particulate matter, nitrogen dioxide, and complex chemicals such as VOCs, formaldehyde and benzene, to name but a few, but all the current indicators show that effects could include diminished breathing capacity, increased fatigue and other chronic symptoms relating to general health and wellbeing. The most lethal of all airborne pollutants fall into three categories: particulate matter, ground level ozone and nitrogen dioxide. They are described as follows: - Particulate matter (PMs). The most dangerous tiny particles of air pollution can penetrate deep into our lungs, and can even get into the bloodstream. Particulates worsen heart and lung disease. Fine particle air pollution is responsible for 29,000 early deaths a year in the UK. - Nitrogen dioxide (NO2). A toxic gas that you might sometimes notice as an orange haze over a city. High levels of NO2 can cause a flare-up of asthma or symptoms such as coughing and difficulty breathing. - Ground level ozone (O3). Ground level or "bad" ozone is created by chemical reactions between oxides of nitrogen (NOx) and volatile organic compounds (VOCs) in the presence of sunlight. It can irritate the eyes, nose and throat. We care about what happens to our children, and more and more we are noticing when our children are exposed to pollution: on the school run, playing in the playground at school or outside their home, running, walking, existing, all within a breath of airborne pollutants. Every year in the UK around 40,000 deaths are attributable to exposure to outdoor air pollution, which plays a role in many of the major health challenges of our day. These health challenges include cancer, asthma, strokes and heart failure, heart disease and diabetes, Vitamin D deficiency and altered immunity. In an adult with fully formed lungs, these illnesses are hard to fight; in a small child with developing organs, they could be the singular factor that changes the long term course of their health or even kills them. It is well documented that children who grow up in highly polluted areas have 10% less lung capacity than those with less exposure to pollution, a problem that exists all across Europe. So, what can we do to protect our children from air pollution? - Walk, run, jump, hop, skip, anything really; just leave the car at home. - Stay away from the road edge when walking; this is where the most polluted bit of the street lies. Avoid busy routes when walking or cycling. - Spend as much time as you can outdoors, in green, open spaces. - Remove unnecessary pollutants from your home environment, including toxic cleaning materials and ordinary household paint. - Educate your children about air quality so the cycle ends now. Greater knowledge leads to greater power, and orchestrating change is the only real way forward. As adults we need to think about the cars we buy, the purchasing choices we make when it comes to cleaning materials, beauty products, carpets and soft furnishings and the paint we use to decorate our homes. We need to instigate change and invest in our planet and the future health of all children.
A Sea Glider unmanned underwater vehicle with an underwater microphone began patrolling the coast of Hawaii in late October and will finish up its initial mission in mid-November. By then, it will have collected half a terabyte of data. By applying software that automatically picks out beaked whale sounds from the rest of the sounds of the ocean, the researchers hope to gain a deeper understanding of how these rare whales live. The Sea Glider is one of a host of new acoustic tracking tools that are helping scientists better understand the behavior of deep sea whales. These tools, which combine autonomous underwater gliders, hydrophones, and sophisticated algorithms, are key in the race to map where whales live. The beaked whale appears to be particularly sensitive to the powerful sonar used by the world's naval fleets. Over the past decade, dozens of these rare whales have died in a series of incidents that seem linked to naval exercises, even if it is hard to prove the connection. The carcasses that wash up on shore are consistent with the hypothesis that the whales respond to the sonar by surfacing too quickly, inducing the bends. Nitrogen, and other gases that had been dissolved into the liquids inside their bodies by the high pressure at depth, transform back into gas as the pressure is released. If they rise too fast, the amount of gas overwhelms the body's natural systems for expelling it, causing bubbles to form in the bloodstream and tissues. The U.S. Office of Naval Research, though, has been pouring money into learning more about the whales. This particular project received $1.5 million. The next step could be to integrate and calibrate their data with information from U.S. Navy hydrophone arrays.
The GIFTS activity involves the juxtaposition of denotative and connotative meanings of the same word in order to demonstrate how complex decoding the code and assigning the encoder's intended meaning to a word can be. Students are randomly put into groups. Students are given the word "dog" and each group uses a dictionary to provide its denotation. Then each group has to generate connotations, cultural or slang meanings for the word "dog." The class discusses the difference between the two types of meaning, and what impact, if any, this difference has on interpersonal communication. Then each group chooses a word, provides its denotative and connotative meanings, and determines if the meaning in the word is fixed or not. Then as a class we discuss the meanings of the words. Students realize the meaning words have is never fixed. Therefore, the encoder must consider the audience to reduce ambiguity during decoding. Keys, Truman Ryan, "Say What You Mean: Confronting Ambiguity in Language," Proceedings of the New York State Communication Association: Vol. 2010, Article 14. Available at: http://docs.rwu.edu/nyscaproceedings/vol2010/iss1/14
Dictionary.com Word FAQs What is the difference between an outbreak, epidemic, and a pandemic? An epidemic is a disease that affects many people at the same time, such as the flu. The US Centers for Disease Control and Prevention's official definition of epidemic is: 'The occurrence of more cases of disease than expected in a given area or among a specific group of people over a particular period of time'. A pandemic is a very extensive epidemic, like a plague, that is prevalent in a country, continent, or the world. There is also the word endemic, which describes a disease native to a particular people or region, where it is regularly or constantly found. The term outbreak describes the sudden rise in the incidence of a disease, especially a harmful one. An outbreak is characterized by a disease's bypassing of measures to control it. Often, the difference between these terms is determined by the percentage of deaths caused by the disease.
Active Volcanoes of Our Solar System Activity Occurs on Earth and on the Moons of a Few Planets Article by: Hobart M. King, Ph.D., RPG Volcanoes Are Not Confined to Earth Evidence of past volcanic activity has been found on most planets in our solar system and on many of their moons. Our own moon has vast areas covered with ancient lava flows. Mars has Olympus Mons and Tharsis Rise, the largest volcanic features in our solar system. The surface of Venus is covered with igneous rocks and hundreds of volcanic features. Most of the volcanic features discovered within our solar system formed millions of years ago - when our solar system was younger and the planets and moons had much higher internal temperatures. Geologically recent volcanic activity is not as widespread. Based upon observations from Earth and from space vehicles, only four bodies in the solar system have confirmed volcanic activity. These are 1) Earth; 2) Io, a moon of Jupiter; 3) Triton, a moon of Neptune; and 4) Enceladus, a moon of Saturn. Evidence for possible volcanic activity on Mars, Venus, and Europa has been observed, but no direct eruption observations have been made. What is an Active Volcano? The term "active volcano" is used mainly in reference to Earth's volcanoes. Active volcanoes are ones that are currently erupting or that have erupted at some time in human history. This definition works fairly well for volcanoes on Earth because we can observe some of them easily - but many are located in remote areas where small eruptions could go unnoticed, or below remote parts of the oceans where even large eruptions might not be detected. Beyond Earth, our ability to detect volcanic eruptions began only with the invention of powerful telescopes, and it made a great leap when space vehicles were able to carry telescopes and other sensing devices close to other planets and their moons. Today a number of telescopes are available to detect these eruptions - if they are large enough and facing in the proper direction. However, small eruptions might not be noticed because there are not enough telescopes to watch all areas of the solar system where volcanic activity might occur. Although only a few extraterrestrial eruptions have been detected, much has been learned about them. Perhaps the most interesting discovery has been the cryovolcanoes in the outer region of the solar system. What is a Cryovolcano? Most people define the word "volcano" as an opening in Earth's surface through which molten rock material, gases, and ash escape. This definition works well for Earth; however, some bodies in our solar system have a significant amount of gas in their composition. Planets near the sun are rocky and produce silicate rock magmas similar to those seen on Earth. However, planets beyond Mars and their moons contain significant quantities of gas in addition to silicate rocks. The volcanoes in this part of our solar system are usually cryovolcanoes. Instead of erupting molten rock, they erupt cold or frozen gases such as water, ammonia, or methane. Jupiter's Moon Io - The Most Active Io is the most volcanically active body in our solar system. This surprises most people because Io's great distance from the sun and its icy surface make it seem like a very cold place. However, Io is a very tiny moon that is enormously influenced by the gravity of the giant planet Jupiter. The gravitational attraction of Jupiter and its other moons exerts such strong "pulls" on Io that it deforms continuously from strong internal tides.
These tides produce a tremendous amount of internal friction. This friction heats the moon and enables the intense volcanic activity. Io has hundreds of volcanic vents, some of which blast jets of frozen vapor and "volcanic snow" hundreds of miles high into its atmosphere. These gases could be the sole product of these eruptions, or there could be some associated silicate rock or molten sulfur present. The areas around these vents show evidence that they have been "resurfaced" with a flat layer of new material. These resurfaced areas are the dominant surface feature of the moon. The very small number of impact craters compared to other bodies in the solar system is evidence of Io's continuous volcanic activity and resurfacing. "Curtains of Fire" on Io On August 4, 2014 NASA published images of volcanic eruptions that occurred on Jupiter's moon Io between August 15 and August 29 of 2013. During that two-week period, eruptions powerful enough to launch material hundreds of miles above the surface of the moon are believed to have occurred. Other than Earth, Io is the only body in the solar system that is capable of erupting extremely hot lava. Because of the moon's low gravity and the magma's explosivity, large eruptions are believed to launch tens of cubic miles of lava high above the moon and resurface large areas over a period of just a few days. The accompanying infrared image shows the August 29, 2013 eruption and was acquired by Katherine de Kleer of the University of California at Berkeley using the Gemini North Telescope, with support from the National Science Foundation. It is one of the most spectacular images of volcanic activity ever taken. At the time of this image, large fissures in Io's surface are believed to have been erupting "curtains of fire" up to several miles long. Triton - The First Discovered Triton was the first location in the solar system where cryovolcanoes were observed. The Voyager 2 probe observed plumes of nitrogen gas and dust up to five miles high during its 1989 flyby. These eruptions are responsible for Triton's smooth surface because the gases condense and fall back to the surface, forming a thick blanket similar to snow. Some researchers believe that solar radiation penetrates the surface ice of Triton and heats a dark layer below. The entrapped heat vaporizes subsurface nitrogen, which expands and eventually erupts through the ice layer above. This would be the only known location of energy from outside of a body causing an eruption - the energy usually comes from within. Enceladus - The Best Documented Cryovolcanoes on Enceladus were documented by the Cassini spacecraft in 2005. The spacecraft imaged jets of icy particles venting from the south polar region. This made Enceladus the fourth body in the solar system with confirmed volcanic activity. The spacecraft actually flew through a cryovolcanic plume and documented its composition to be mainly water vapor with minor amounts of nitrogen, methane, and carbon dioxide. One theory for the mechanism behind the cryovolcanism is that subsurface pockets of pressurized water exist a short distance (perhaps as little as a few tens of meters) beneath the moon's surface. This water is kept in the liquid state by the tidal heating of the moon's interior. Occasionally these pressurized waters vent to the surface, producing a plume of water vapor and ice particles. 
Evidence for Activity The most direct evidence that can be obtained to document volcanic activity on extraterrestrial bodies is to see or image the eruption taking place. Another type of evidence is a change in the body's surface. An eruption can produce a ground cover of debris or a resurfacing. Volcanic activity on Io is frequent enough and the surface is visible enough that these types of changes can be observed. Without such direct observations, it can be difficult from Earth to know if the volcanism is recent or ancient. Will More Activity be Discovered? Cryovolcanoes on Enceladus were not discovered until 2005, and an exhaustive search has not been done across the solar system for this type of activity. In fact, some believe that volcanic activity on our close neighbor Venus still occurs but is hidden beneath the dense cloud cover. A few features on Mars suggest possible recent activity there. It is also very likely, perhaps probable, that active volcanoes or cryovolcanoes will be discovered on moons of icy planets in the outer portions of our solar system such as Europa, Titan, Dione, Ganymede, and Miranda. This is an exciting time to watch space exploration!
The cell is the basic unit of life. Plant cells (unlike animal cells) are surrounded by a thick, rigid cell wall. The following is a glossary of plant cell anatomy terms. amyloplast - an organelle in some plant cells that stores starch. Amyloplasts are found in starchy plants like tubers and fruits. ATP - ATP is short for adenosine triphosphate; it is a high-energy molecule used for energy storage by organisms. In plant cells, ATP is produced in the cristae of mitochondria and chloroplasts. cell membrane - the thin layer of protein and fat that surrounds the cell, but is inside the cell wall. The cell membrane is semipermeable, allowing some substances to pass into the cell and blocking others. cell wall - a thick, rigid membrane that surrounds a plant cell. This layer of cellulose fiber gives the cell most of its support and structure. The cell wall also bonds with other cell walls to form the structure of the plant. centrosome - (also called the "microtubule organizing center") a small body located near the nucleus - it has a dense center and radiating tubules. The centrosome is where microtubules are made. During cell division (mitosis), the centrosome divides and the two parts move to opposite sides of the dividing cell. Unlike the centrosomes in animal cells, plant cell centrosomes do not have centrioles. chlorophyll - chlorophyll is a molecule that can use light energy from sunlight to turn water and carbon dioxide gas into sugar and oxygen (this process is called photosynthesis). Chlorophyll is magnesium based and is usually green. chloroplast - an elongated or disc-shaped organelle containing chlorophyll. Photosynthesis (in which energy from sunlight is converted into chemical energy - food) takes place in the chloroplasts. cristae - (singular crista) the multiply-folded inner membrane of a cell's mitochondrion, which forms finger-like projections. The walls of the cristae are the site of the cell's energy production (it is where ATP is generated). cytoplasm - the jellylike material outside the cell nucleus in which the organelles are located. Golgi body - (also called the golgi apparatus or golgi complex) a flattened, layered, sac-like organelle that looks like a stack of pancakes and is located near the nucleus. The golgi body packages proteins and carbohydrates into membrane-bound vesicles for "export" from the cell. granum - (plural grana) A stack of thylakoid disks within the chloroplast is called a granum. mitochondrion - spherical to rod-shaped organelles with a double membrane. The inner membrane is infolded many times, forming a series of projections (called cristae). The mitochondrion converts the energy stored in glucose into ATP (adenosine triphosphate) for the cell. nuclear membrane - the membrane that surrounds the nucleus. nucleolus - an organelle within the nucleus - it is where ribosomal RNA is produced. nucleus - spherical body containing many organelles, including the nucleolus. The nucleus controls many of the functions of the cell (by controlling protein synthesis) and contains DNA (in chromosomes). The nucleus is surrounded by the nuclear membrane. photosynthesis - a process in which plants convert sunlight, water, and carbon dioxide into food energy (sugars and starches), oxygen and water.
Chlorophyll or closely-related pigments (substances that color the plant) are essential to the photosynthetic process. ribosome - small organelles composed of RNA-rich cytoplasmic granules that are sites of protein synthesis. rough endoplasmic reticulum - (rough ER) a vast system of interconnected, membranous, infolded and convoluted sacs that are located in the cell's cytoplasm (the ER is continuous with the outer nuclear membrane). Rough ER is covered with ribosomes that give it a rough appearance. Rough ER transports materials through the cell and produces proteins in sacs called cisternae (which are sent to the Golgi body, or inserted into the cell membrane). smooth endoplasmic reticulum - (smooth ER) a vast system of interconnected, membranous, infolded and convoluted tubes that are located in the cell's cytoplasm (the ER is continuous with the outer nuclear membrane). The space within the ER is called the ER lumen. Smooth ER transports materials through the cell. It contains enzymes and produces and digests lipids (fats) and membrane proteins; smooth ER buds off from rough ER, moving the newly-made proteins and lipids to the Golgi body and membranes. stroma - part of the chloroplasts in plant cells, located within the inner membrane of chloroplasts, between the grana. thylakoid disk - thylakoid disks are disk-shaped membrane structures in chloroplasts that contain chlorophyll. Chloroplasts are made up of stacks of thylakoid disks; a stack of thylakoid disks is called a granum. Photosynthesis (the production of ATP molecules from sunlight) takes place on thylakoid disks. vacuole - a large, membrane-bound space within a plant cell that is filled with fluid. Most plant cells have a single vacuole that takes up much of the cell. It helps maintain the shape of the cell.
Fainting (syncope) is a temporary loss of consciousness ("passing out"). It occurs when blood flow to the brain is reduced. Your doctor believes that your episode was due to a common vagal reaction. A vagal reaction is a reflex response that causes the pulse to slow down. If the pulse is low enough, the blood pressure falls and causes fainting or near-fainting. Lying down usually stops the reaction within 60 seconds. This reflex response can occur during sudden fear, severe pain, emotional stress, overexertion or suddenly standing up after sitting or lying for a long time. 1) Rest today and resume your normal activities as soon as you are feeling back to normal. 2) If you become light-headed or dizzy, lie down immediately or sit with your head lowered between your knees. 3) Follow up with your doctor as instructed. Get Prompt Medical Attention if any of the following occur: -- Another fainting spell occurs, which is not explained by the common causes listed above -- Chest, arm, neck, jaw, back or abdominal pain -- Shortness of breath -- Weakness, tingling or numbness in one side of the face, one arm or leg -- Slurred speech, confusion, difficulty walking or seeing -- Blood in vomit, stools (black or red color) -- (In women) unexpected vaginal bleeding
Thanks to specialised microscopes, we have long been able to see the beauty of single atoms. But strange though it might seem, imaging larger molecules at the same level of detail has not been possible – atoms are robust enough to withstand existing tools, but the structures of molecules are not. Now researchers at IBM have come up with a way to do it. The earliest pictures of individual atoms were captured in the 1970s by blasting a target – typically a chunk of metal – with a beam of electrons, a technique known as transmission electron microscopy (TEM). Later refinements of this technique, such as the TEAM project at the Lawrence Berkeley National Laboratory in California, achieved resolutions of less than the radius of a single hydrogen atom. But while this method works for atoms in a lattice or thin layer, the electron bombardment destroys the arrangement of atoms in molecules. Other techniques use a tiny stylus-like scanning probe to explore the atom-scale world. One method uses such a probe to measure the charge density associated with individual atoms – a technique called scanning tunnelling microscopy (STM). Another, called atomic force microscopy (AFM), measures the attractive force between atoms in the probe and the target. The image is created by bumping the probe over the atoms of the molecule – much in the way we might feel our way around in a dark bedroom. Both methods build up a picture of a target's surface and should be suitable for imaging individual molecules. But they have not been able to approach the detail of TEM. Leo Gross and his colleagues at IBM in Zurich, Switzerland, modified the AFM technique to make the most detailed image yet of pentacene, an organic molecule consisting of five benzene rings. The molecule is very fragile, but the researchers were able to capture the details of the hexagonal carbon rings and deduce the positions of the surrounding hydrogen atoms. One key breakthrough was finding a way to stop the microscope's tip from sticking to the fragile pentacene molecule because of attraction due to electrostatic and van der Waals forces – van der Waals is a weak force that operates only at an intermolecular level. The team achieved this by fixing a single carbon monoxide molecule to the end of the probe so that only one atom of relatively inactive oxygen came into contact with the pentacene. Although van der Waals force attracted the tip to its target, a quantum-mechanical effect called the Pauli exclusion principle pushed back. This happens because electrons in the same quantum state cannot approach each other too closely. As the electrons around the pentacene and carbon monoxide molecules are in the same state, a small repulsive force operates between them. The researchers measured the repulsive force the probe encountered at each point, and from this they could construct a "force map" of the molecule. The level of detail available depends on the size of the probe: the smaller the tip, the better the picture. The image is "astonishing", says Oscar Custance of Japan's National Institute for Materials Science in Tsukuba. In 2007, his team used AFM to distinguish individual atoms on a silicon surface, but he acknowledges that the IBM team has surpassed this achievement. "This is the highest resolution I have ever seen," he says. The IBM researchers believe their technique may open the door to super-powerful computers whose components are built with precisely positioned atoms and molecules.
The work may also provide insights into the actions of catalysts in reactions, allowing researchers to understand what is happening at the atomic level, says Gross. Journal reference: Science, DOI: 10.1126/science.1176210
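As a rough, generic illustration of the "force map" idea described in the article above (raster-scanning a probe over a grid of points and recording the repulsive force at each one), here is a minimal sketch. It is not the IBM team's code: the grid size, the measure_force() readout, and the force model are all hypothetical placeholders.

```c
#include <stdio.h>
#include <math.h>

#define NX 16
#define NY 16

/* Hypothetical stand-in for the instrument readout: returns a repulsive
 * force (arbitrary units) that falls off with distance from a single
 * "atom" placed at the centre of the scan area. */
static double measure_force(int ix, int iy)
{
    double dx = ix - NX / 2.0;
    double dy = iy - NY / 2.0;
    return exp(-(dx * dx + dy * dy) / 8.0);
}

int main(void)
{
    double force_map[NY][NX];

    /* Raster scan: visit every grid point and record the measured force. */
    for (int iy = 0; iy < NY; iy++)
        for (int ix = 0; ix < NX; ix++)
            force_map[iy][ix] = measure_force(ix, iy);

    /* Print the map as a crude text image: '#' where the force is strongest. */
    for (int iy = 0; iy < NY; iy++) {
        for (int ix = 0; ix < NX; ix++)
            putchar(force_map[iy][ix] > 0.5 ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```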
Optical Mirrors Information Optical mirrors have a smooth, highly-polished, plane or curved surface for reflecting light. Usually, the reflecting surface is a thin coating of silver, or aluminum on glass. Product specifications for optical mirrors include diameter, radius of curvature, thickness, focal length, and surface quality. The diameter or height of an optical mirror is measured straight on. If the optical mirror’s curvature were extrapolated into a sphere, then the radius of that sphere is the radius of curvature for the mirror. There are two thickness measurements for optical mirrors: center thickness and edge thickness. Units of measure include inches, feet, and yards; nanometers, centimeters, and millimeters; and miles and kilometers. With optical mirrors, focal length is the distance from the mirror at which light converges. Surface quality describes digs and scratches. A dig is a defect on a polished optical surface that is nearly equal in terms of length and width. A scratch is a defect whose length is many times its width. Optical mirrors are made from many different materials, each of which influences the mirror’s reflectivity characteristics. Choices for materials include borosilicate glass, copper, fused silica, nickel, and optic crown glass. Borosilicate glass is also known as BK7 and boro-crown glass. Copper is used in high-power applications because of its high thermal conductivity. Fused silica has a very low coefficient of thermal expansion and is suitable for use with moderately-powered lasers or changing environmental conditions. Ultraviolet (UV) grade optical mirrors are also commonly available. Nickel is used in applications which require resistance to both thermal and physical damage. Proprietary materials for optical mirrors include Pyrex (Corning Inc.) and Zerodur (Schott Glaswerke). Optical mirrors are sometimes coated to enhance their reflectivity. Choices include bare, enhanced, and protected aluminum; silver, bare gold and protected gold; and coatings made from rhodium and dielectric materials. Enhanced aluminum coatings are used to improve reflectance in the visible and ultraviolet regions. Protected aluminum coatings provide abrasion resistance while protecting the surface of the aluminum, an excellent reflector in the upper UV, visible and near-infrared (IR) regions. Optical mirrors with bare gold and protected gold coatings are used in the near-IR to far-IR regions. Silver coatings provide better reflectance than aluminum; however, silver’s tendency to oxidize and tarnish requires thorough sealing from the atmosphere. Rhodium coatings have a reflectivity of approximately 80% of the visible spectrum.
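One relationship worth noting, offered here as a hedged aside since it is standard first-order optics rather than something stated in the text above: for a spherical mirror, the focal length is approximately half the radius of curvature, so those two specifications are directly linked. A tiny sketch (the 200 mm radius is just an example value):

```c
#include <stdio.h>

/* Paraxial approximation for a spherical mirror: f is roughly R / 2 */
static double focal_length_mm(double radius_of_curvature_mm)
{
    return radius_of_curvature_mm / 2.0;
}

int main(void)
{
    double radius_mm = 200.0;  /* example radius of curvature, 200 mm */
    printf("Radius of curvature: %.1f mm -> focal length: %.1f mm\n",
           radius_mm, focal_length_mm(radius_mm));
    return 0;
}
```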
Red huckleberry is a shrub that can grow up to four meters tall, and has bright green branches and small (up to three centimetres long) oval leaves. RANGE & HABITAT Red huckleberry is found growing in coastal coniferous forests, and often grows out of rotting logs and stumps. In B.C., this plant grows in the Coast and Mountains and Georgia Lowlands ecoprovinces. Red huckleberry will sometimes keep its leaves over winter. Its greenish-yellow bell-shaped flowers turn into bright-red, round berries. Many different birds and animals eat the red berries. Red huckleberry is often found growing on rotting logs and stumps where birds, having eaten the berries, have stopped to perch and, while doing so, spread the berry seeds through their droppings. TRADITIONAL FIRST NATIONS USES Red huckleberries were eaten by many coastal First Nations peoples. Sometimes wooden combs were used to rake the berries off the branches. The berries were eaten fresh, or mashed and then dried. A few groups, such as the Kwakwaka’wakw, boiled the berries with salmon spawn in cedar boxes and then sealed the tops of the boxes with eulachon (a type of fish) and skunk cabbage (another plant) leaves. This way the berries could be kept for many months. Many people today pick the berries to freeze, can or make into jam. COSEWIC: Not at Risk
Aristotle defined the five canons of rhetoric: 1. Invention: finding ways to persuade. 2. Arrangement: putting together the structure of a coherent argument. 3. Style: presenting the argument to stir the emotions. 4. Memory: speaking without having to prepare or memorize a speech. 5. Delivery: making effective use of voice and gesture. One could make a strong case for the first three being all you need to know about producing anything written. The fourth, “Memory,” is important in order to be able to summon up appropriate examples and analogies at a moment's notice without having to stop and look something up. Stopping to look something up can interrupt an otherwise easy flow of words, thoughts and ideas. [via mbs]
Possession, in the context of linguistics, is an asymmetric relationship between two constituents, the referent of one of which (the possessor) in some sense possesses (owns, has as a part, rules over, etc.) the referent of the other (the possessed). Possession may be marked in many ways, such as simple juxtaposition of nouns, possessive case, possessed case, construct state (as in Arabic and Nêlêmwa), or adpositions (possessive suffixes, possessive adjectives). For example, English uses a possessive clitic, 's; a preposition, of; and adjectives, my, your, his, her, etc. There are many types of possession, but a common distinction is alienable and inalienable possession. Alienability refers to the ability to dissociate something from its parent; in this case, a quality from its owner. When something is inalienably possessed, it is usually an attribute. For example, John's big nose is inalienably possessed because it cannot (without surgery) be removed from John; it is simply a quality that he has. In contrast, 'John's briefcase' is alienably possessed because it can be separated from John. Many languages make the distinction as part of their grammar, typically by using different affixes for alienable and inalienable possession. For example, in Mikasuki (a Muskogean language of Florida), ac-akni (inalienable) means 'my body', but am-akni (alienable) means 'my meat'. English does not have any way of making such distinctions (the example from Mikasuki is clear to English-speakers only because there happen to be two different words in English that translate -akni in the two senses: both Mikasuki words could be translated as 'my flesh', and the distinction would then disappear in English). Possessive pronouns in Polynesian languages such as Hawaiian and Māori are associated with nouns distinguishing between o-class, a-class and neutral pronouns, according to the relationship of possessor and possessed. The o-class possessive pronouns are used if the possessive relationship cannot be begun or ended by the possessor. Obligatory possession is sometimes called inalienable possession, but the two are not quite the same: inalienable possession is a semantic notion, and so it largely depends on how a culture structures the world, whereas obligatory possession is a property of morphemes. In general, nouns with the property of requiring obligatory possession are notionally inalienably possessed, but the fit is rarely, if ever, perfect. Another distinction, which is similar to alienable and inalienable possession, is inherent and non-inherent possession. In languages that mark the distinction, inherently-possessed nouns, such as parts of wholes, cannot be mentioned without indicating their dependent status. Yagem of Papua New Guinea, for instance, distinguishes alienable from inalienable possession when the possessor is human, but it distinguishes inherent from non-inherent possession when the possessor is not human. Inherently-possessed nouns are marked with the prefix ŋa-, as in (ka) ŋalaka '(tree) branch', (lôm) ŋatau '(men's house) owner' and (talec) ŋalatu '(hen's) chick'. Adjectives that are derived from nouns (as inherent attributes of other entities) are also so marked, as in ŋadani 'thick, dense' (from dani 'thicket') or ŋalemoŋ 'muddy, soft' (from lemoŋ 'mud'). Many languages, such as Maasai, distinguish between the possessable and the unpossessable. Possessable things include farm animals, tools, houses, family members and money, but wild animals, landscape features and weather phenomena are examples of what cannot be possessed.
That means basically that in such languages, saying my sister is grammatically correct but not my land. Instead, one would have to use a circumlocution such as the land that I own. Many languages have verbs that can be used to form clauses denoting possession. For example, English uses the verb have for that purpose, French uses avoir, etc. There are often alternative ways of expressing such relationships (for example, the verbs possess and belong, among others, can be used in English in appropriate contexts: see also have got). In Georgian, for example, the verb used depends on whether the possessed noun is animate or inanimate: since a dog is animate and a computer is not, different verbs are used for possessing them. However, some nouns in Georgian, such as car, are treated as animate even though they appear to refer to an inanimate object. In some languages, possession relationships are indicated by existential clauses. For example, in Russian, "I have a friend" can be expressed by the sentence у меня есть друг u menya yest drug, which literally means "at me there is a friend". Languages including Latvian, Irish and Turkish, as well as Uralic languages such as Hungarian and Finnish, use an existential clause to express possession, since they do not use a verb like to have for that function. For more examples, see Existential clause § Indicating possession.
Overview of autism There is a great deal of diversity in views of what autism is: a complex neurobiological condition, an illness, a gift, a personality type and more. Autism is a hard condition to describe as it covers such a wide range, and the terminology of autism can also be bewildering. Diagnosis and terminology have changed over the years. Today Autism, Asperger syndrome and Pervasive Developmental Disorder-Not Otherwise Specified (PDD-NOS) together make up the autism spectrum disorders (ASD). Today the diagnosis of autism is given in South Africa only by a medical professional such as a GP or psychiatrist, or a psychologist (clinical, counselling or educational). This means that many professionals who work with children, such as teachers, occupational therapists, speech therapists and others, may not give a diagnosis, although their input may be invaluable. Autism is not uncommon: it is estimated that nearly 1 in 68 births results in some form of autism (statistic from CDC, Atlanta, 2014). Autism is also 4 times more likely to be diagnosed in boys than girls, although it is not yet clear if this is due to differences in diagnosis or other factors. As it covers a wide spectrum of traits and behaviours, it is difficult to describe to the layperson. Individuals with autism range from individuals with severe impairments, who need extensive support for daily living, through to those who may not be obviously “autistic” to anyone except those close to them. The picture widens when autistic savants such as “Rainman” are considered (savants, by the way, are rare!). Every individual with autism is just that: an individual, with unique characteristics.
The flashcards below were created by user on FreezingBlue Flashcards. What is microbiology? study of microorganisms, including bacteria, viruses, and fungi What are bacteria? prokaryotic organisms; prokaryotes lack a true nucleus and membrane-bound organelles. What are viruses? are not considered living organisms, since they cannot carry out metabolism outside of a host cell. What are bacteriophages? are viruses that infect bacteria What are fungi? lack chlorophyll but are eukaryotic organisms and therefore have membrane-bound organelles. what are the two major groups of prokaryotes? bacteria and archaea What does archaea include? - methanogens (prok. that produce methane), extreme halophiles (prok. that live at very high concentrations of NaCl), - extreme thermophiles (prok. that live at very high temp.) - *on test day, the term prok. should make you think of bacteria* What are the basic structure/components of prokaryotes? - simple single-celled organisms - have plasma membrane but lack organelles and cytoskeleton - have cell wall, cytoplasm, ribosomes (differ slightly from euk. ribo.) - genome, also referred to as the bacterial chromosome, is found in the nucleoid region. - lack of organelles means that the interior of prok. is one continuous compartment, the cytosol. - may contain plasmids, which are small circular extrachromosomal segments of DNA that replicate independently of the bacterial chromosome - sometimes have flagella that are used for locomotion. what is the purpose of the cell wall on prok.? serves to maintain the cell's shape and provide protection and rigidity to the cell. Bacteria can be divided into two major groups based on the structure of their cell wall, what are they? gram-positive and gram-negative What are gram + bacteria cell walls like? Gram + bact. have a thick cell wall composed of peptidoglycan. What are gram - bacteria cell walls like? thin layer of peptidoglycan sandwiched between layers of periplasm and coated w/ a layer of lipopolysaccharide. Bacteria may also be classified by their _____. What are they? - morphology (shape) - 1) cocci: round or spherical - 2) bacilli: rod-shaped bacteria - 3) spirilla: spiral-shaped bacteria Another means of classifying bacteria is by their _____ _____. What are they? - oxygen requirements; - 1) obligate anaerobes: cannot survive in the presence of oxygen - 2) facultative aerobes: can survive w/ or w/o oxygen - 3) obligate aerobes: require oxygen to survive. Bacteria can also be classified according to their mode of _________. What are they? - nutrition: photoautotrophs, chemoautotrophs, photoheterotrophs, and chemoheterotrophs What are photoautotrophs? photosynthetic; use light energy to produce their own nutrient molecules. Photosynthetic bacteria use the plasma membrane as the site of photosynthesis. What are chemoautotrophs? use energy derived from inorganic molecules such as ammonia (NH3) or hydrogen sulfide (H2S) to drive nutrient production. What are photoheterotrophs? can use light to generate energy but must obtain their carbon in organic form (ex glucose) What are chemoheterotrophs? must consume organic molecules both as an energy source and a source of carbon. How do prok. reproduce? binary fission: the cell replicates its DNA and divides in two. -circular chromosome replicates and a new plasma membrane and cell wall grow inward along the midline of the cell, dividing it into two equal daughter cells, each containing a duplicate of the parental chromosome. what is transcription? The transfer of information from DNA to RNA what is translation?
The transfer of information from an RNA molecule into a polypeptide. What is polycistronic? a single mRNA often contains more than one coding region; where do transcription and translation occur in prok.? in the cytosol (since there is no separate membrane-bound nucleus) Where do transcription and translation take place in euk.? - transcription: nucleus - post-transcriptional modifications: include splicing of introns (non-coding sequences in the mRNA) and take place before the mature mRNA leaves the nucleus - translation: takes place outside of the nucleus in the cytosol. Which of the following may be found in a prokaryotic cell? A) polycistronic mRNA B) multiple chromosomes Answer: A) polycistronic mRNA when does genetic transformation take place and what are the three methods by which prokaryotes transfer genetic material? it takes place when DNA is incorporated (contained) into a recipient cell; the three methods are transformation, transduction, and conjugation. what happens in transformation? DNA is taken up from the environment and integrated into the bacterial genome. what happens in transduction? bacterial genes are transferred from one bacterial cell to another by a virus. what happens in conjugation? genetic information is directly transferred from one bacterial cell to another via a temporary connection known as a conjugation bridge.
Dr Meyer, from the Centre for Australian Weather and Climate Research, said the 2003 bushfires burnt an area of 1.1 million hectares and the 2006 fires 1.3 million hectares. This compares with approximately 450,000 hectares burnt during the February 2009 fires in Victoria. From December 2006 to February 2007, bushfires in the Great Divide burned for 69 days. On several occasions, thick smoke haze was transported to the Melbourne CBD and particulate matter concentrations at several Environmental Protection Agency Victoria air quality monitoring sites peaked at four times the National Environment Protection Measure 24-hour standard. Analysis of the measurements showed: - High concentrations of fine particles between 0.1 and 0.5 µm in diameter, largely composed of non-volatile organic material. Particles of this size are easily respired and can cause significant health impacts. - High concentrations of carbon monoxide and nitrogen dioxide. - Elevated concentrations of ozone. - The fingerprint is distinctly different from industrial and vehicular pollution sources. Dr Meyer said that under a changing climate the frequency of bushfires, the duration of the bushfire season and the severity of bushfires are expected to change. Current projections for south-eastern Australia suggest an increase in the frequency of very high and extreme fire days and that periods suitable for prescribed burning would move towards winter. In addition to the health impacts of increased fire intensity, duration and frequency, biomass burning also results in the emission of significant quantities of trace gases and aerosols to the atmosphere, and these subsequently can influence cloud processes. “They also reduce visibility, influence atmospheric photochemistry and can be inhaled into the deepest parts of the lungs, impacting on human health,” Dr Meyer said. Savanna forest burning in northern Australia accounts for the majority of carbon emissions from burning, and 90-95 per cent of this is from wildfires. These savanna fires contribute about eight per cent of global carbon emissions from vegetation fires.
ADESINA, A.O. ... et al., 2014. Touch arithmetic: a process-based computer-aided assessment approach for capture of problem solving steps in the context of elementary mathematics. Computers and Education, 78, pp. 333-343. Technology today offers many new opportunities for innovation in educational assessment and feedback through rich assessment tasks, efficient scoring and reporting. However, many Computer-Aided Assessment (CAA) environments focus on grading and providing feedback on the final product of assessment tasks rather than the process of problem solving. Focusing on steps and problem-solving processes can help teachers to diagnose strengths and weaknesses, discover strategies, and provide appropriate feedback. This study explores a method that uses trace links on an interactive touch-based computing tool for the capture and analysis of solution steps in elementary mathematics. The tool was evaluated in an observational study among 8 and 9 year old primary school children (N=39). The approach yielded similar performance scores as compared to paper-and-pencil tests while providing more explicit information on the problem-solving process. The output data was useful for scoring intermediate and final answers as well as feedback information on types and time efficiencies of strategies used. An implication of this study for teachers and researchers is that they can more accurately assess students’ understanding of important concepts, and be in a better position to provide rich and detailed feedback while motivating students with interactive tasks.
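To make the idea of capturing and scoring intermediate problem-solving steps more concrete, here is a minimal, hypothetical sketch of how such a trace might be represented; it is not the authors' implementation, and the field names, the example trace, and the simple scoring rule are invented for illustration only.

```c
#include <stdio.h>

/* One recorded step in a pupil's solution trace (hypothetical layout). */
struct SolutionStep {
    double timestamp_s;     /* seconds since the task started            */
    char   action[32];      /* e.g. "partition", "add-tens", "combine"   */
    int    intermediate;    /* intermediate numeric answer, if any       */
};

int main(void)
{
    /* A made-up trace for 27 + 35 solved by partitioning. */
    struct SolutionStep trace[] = {
        {  2.1, "partition",  0 },   /* 27+35 -> (20+30) + (7+5) */
        {  6.8, "add-tens",  50 },
        {  9.5, "add-ones",  12 },
        { 13.2, "combine",   62 },
    };
    int n = sizeof trace / sizeof trace[0];

    /* Score the final answer and report how long each step took. */
    int final_answer = trace[n - 1].intermediate;
    printf("final answer: %d (%s)\n", final_answer,
           final_answer == 62 ? "correct" : "incorrect");
    for (int i = 0; i < n; i++) {
        double dt = trace[i].timestamp_s - (i ? trace[i - 1].timestamp_s : 0.0);
        printf("step %d: %-10s took %.1f s\n", i + 1, trace[i].action, dt);
    }
    return 0;
}
```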
Click for "Microbes After Hours" videos Archaeans are single-celled creatures that join bacteria to make up a category of life called the Prokaryotes (pro-carry-oats). Prokaryotes' genetic material, or DNA, is not enclosed in a central cellular compartment called the nucleus. Bacteria and archaea are the only prokaryotes. All other life forms are Eukaryotes (you-carry-oats), creatures whose cells have nuclei. (Note: viruses are not considered true cells, so they don't fit into either of these categories.) However, while archaeans resemble bacteria and have some genes that are similar to bacterial genes, they also contain other genes that are more like what you'd find in eukaryotes. Furthermore, they have some genes that aren't like any found in anything else. For more details about the differences between bacteria and archaea, see this website. (Note: this site contains some pretty technical information and language.)
The earthquake and tsunami that rocked Chile in 2010 unleashed substantial and surprising changes on ecosystems there, yielding insights on how these natural disasters can affect life and how sea level rise might affect the world, researchers say. The magnitude 8.8 earthquake that hit Chile struck off an area of the coast where 80 percent of the population lives. The massive quake triggered a tsunami reaching about 30 feet (10 meters) high that wreaked havoc on coastal communities: It killed more than 500 people, injured about 12,000 more, and damaged or destroyed at least 370,000 houses. It makes sense that such earthshaking catastrophes would have drastic consequences on ecosystems in the affected areas. However, if researchers lack enough data about the environment before a disaster strikes, as is usually the case, it can be difficult to decipher these effects. With the 2010 Chile quake, scientists were able to conduct an unprecedented report of its ecological implications based on data collected on coastal ecosystems shortly before and up to 10 months after the event. The sandy beaches of Chile apparently experienced significant and lasting changes because of the earthquake and tsunami. The responses of ecosystems there depended strongly on the amount of land level change, how mobile life there was, the type of shoreline, and the degree of human alteration of the coast. For instance, in places where the beaches sank and did not have manmade sea walls and other artificial "coastal armoring" to keep water out, intertidal animal populations — ones living in the part of the seashore that is covered at high tide and uncovered at low tide — all dropped, presumably because their habitats were submerged. The most unexpected results came from uplifted sandy beaches. Previously, intertidal species had been kept from these beaches due to coastal armoring. After the earthquake, these species rapidly colonized the new stretch of beach the quake raised up in front of the sea walls. "This is the first time this has been seen before," said researcher Eduardo Jaramillo, a coastal ecologist at the Southern University of Chile. "Plants are coming back in places where there haven't been plants, as far as we know, for a very long time," said researcher Jenny Dugan, a biologist at the University of California, Santa Barbara. "This is not the initial ecological response you might expect from a major earthquake and tsunami." These findings could help inform future human alterations to coastlines. For instance, as sea levels rise globally, it might be wise to consider how beach habitats in front of sea walls might get changed. "Around the Pacific coast, there may be another earthquake tomorrow, the day past tomorrow, we don't know," Jaramillo told OurAmazingPlanet. "With this kind of research, hopefully we can learn something from them." The scientists detailed their findings online May 2 in the journal PLoS ONE.
Real Connected Cursive Fonts! You can use any of our Schoolhouse Fonts sets to design your own handwriting worksheets. The worksheets can be used not only as a handwriting regimen, but also to provide lessons in vocabulary, geography, history, literature and more! Which Schoolhouse Fonts set should I use? You should choose the set that matches what is taught at your child’s school. When children are first learning to read and write, they cannot easily recognize more than one style of writing, so you should avoid teaching a different method than the school, as it may confuse and frustrate your child. As adults, we can recognize variations of a character as the same character, but when children are first learning to read, each character variation is a completely different shape that is unrelated to the other shapes. This graphic example of the lowercase a and g shows different variations in writing styles; as adults we know that these are a’s and g’s, but to a child, each shape is a different character that must be learned separately. Here are some basic guidelines for creating and using handwriting worksheets for children. Children with Autism may not have the motor skills to handle writing instruments very well. Doing exercises to improve motor skills will help them develop strength and dexterity in their hands. Capital letters seem to be easier for them to write, compared to lowercase letters. They also tend to start letter shapes from the bottom, instead of from the top. The important thing for them to learn is to write letter shapes that are recognizable and legible.
Learning to use the IPA to transcribe speech can be very challenging, for many reasons. One reason we’ve already talked about is the challenge of ignoring what we know about how a word is spelled to pay attention to how the word is spoken. Another challenge is simply remembering which symbols correspond to which sounds. The tables in Units 2.4 and 3.2 may seem quite daunting, but the more you practice, the better you’ll get at remembering the IPA symbols. A challenge that many beginner linguists face is deciding exactly how much detail to include in their IPA transcriptions. For example, if you know that Canadian English speakers tend to diphthongize the mid-tense vowels [e] and [o] in words like say and show, should you transcribe them as the diphthongs [eɪ] and [oʊ]? And the segment [p] in the word apple doesn’t sound quite like the [p] in pear; how should one indicate that? Does the word manager really begin with the same syllable that the word human ends with? Part of learning to transcribe involves making a decision about exactly how much detail to include in your transcription. If your transcription includes enough information to identify the place and manner of articulation of consonants, the voicing of stops and fricatives, and the tongue and lip position for vowels, this is usually enough information for someone reading your transcription to be able to recognize the words you’ve transcribed. A transcription at this level is called a broad transcription. But it’s possible to include a great deal more detail in your transcription, to more accurately represent the particulars of accent and dialect and the variations in certain segments. A transcription that includes a lot of phonetic detail is called a narrow transcription. The rest of this chapter discusses the most salient details that would be included in a narrow transcription of the most widespread variety of Canadian English.
Sea level change Students calculate sea level change due to melting ice caps, thermal expansion and changes in ridge spreading rates. Used in a Marine Geology course for marine science majors. Physical geology is the prerequisite for the course. Skills and concepts that students must have mastered: isostasy, spreading rates, graph reading. How the activity is situated in the course: This is part of the lessons on sea level change. The exercise is started in class, to help students develop a strategy to solve the problems. It is completed as homework. Content/concepts goals for this activity: To extend student understanding of sea level change to geological time scales. To practice making simplifications so that a problem is solvable. To practice making and using a sketch to solve problems. To practice reading graphs. Description and Teaching Materials: Sea level change (Microsoft Word 129kB Jun20 13)
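As a hedged illustration of the kind of simplification the activity asks students to make, here is a back-of-the-envelope estimate of sea level rise from thermal expansion alone, assuming only a fixed-depth surface layer warms and that it warms uniformly; the numbers (expansion coefficient, layer depth, warming) are illustrative assumptions, not values taken from the activity handout.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions (not from the activity handout):            */
    double alpha   = 2.0e-4;  /* thermal expansion coefficient of seawater, per deg C */
    double depth_m = 700.0;   /* depth of the warmed surface layer, metres             */
    double dT      = 1.0;     /* assumed uniform warming of that layer, deg C          */

    /* Simplification: only the surface layer warms, and it warms uniformly,
     * so the water column thickens by roughly alpha * dT * depth.           */
    double sea_level_rise_m = alpha * dT * depth_m;

    printf("Estimated sea level rise from thermal expansion: %.2f m\n",
           sea_level_rise_m);
    return 0;
}
```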
Low Density Inflationary Universes How Much Matter Does the Universe Contain? An important question today in cosmology is how much mass is contained in the universe. If there were no matter filling the universe, the universe would expand forever and the recession velocity of objects at rest with respect to the expansion of the universe would not change as the universe expands. We know, of course, that the universe is not empty but filled with matter, and ordinary matter through gravity attracts other matter, causing the expansion of the universe to slow down. If the density of the universe exceeds a certain threshold known as the critical density, this gravitational attraction is strong enough to stop and later reverse the expansion of the universe, causing it eventually to recollapse in what is known as the “Big Crunch.” On the other hand, if the average density of the Universe falls short of the critical density, the universe expands forever, and after a certain point the expansion proceeds much as if the universe were empty. A critical universe lies precariously balanced between these two possibilities. Why a Universe of Critical Density? For quite some time it has been known that the mean density of our universe agrees with the critical density to within better than a factor of ten. Even with such a large margin of error this agreement is remarkable. Establishing initial conditions so that the mean density remains close to the critical density for more than a fleeting moment is much like trying to balance a pencil on its point. A universe initially with slightly subcritical density rapidly becomes increasingly subcritical and soon virtually indistinguishable from an empty universe. Similarly, an ever so slightly supercritical universe rapidly collapses into a Big Crunch, never reaching the old age of our universe—somewhere around twelve billion years. To obtain a universe like ours seems to require fine tuning of the initial density to agree with the critical density to an accuracy around one part in 10^60! For a long time it was regarded simplest and aesthetically most pleasing to postulate that our universe is now of exactly critical density. The versions of inflation developed in the early 1980s provided a mechanism for setting the density of the universe near the critical density with nearly unlimited precision. For many years an exactly critical universe was touted as one of the few firm predictions of inflation. Geometry and the Density of the Universe In Einstein’s General Theory of Relativity, formulated in 1915, gravity is understood in terms of geometry rather than as just another ordinary force. Matter tells spacetime how to curve and the resulting spacetime curvature tells bodies how to move. For the special case of an expanding universe, idealized as filled with a uniform density of matter, a good approximation on large scales, General Relativity establishes an intimate connection between the density of the universe in comparison with the critical density and its geometry. A universe of critical density (at constant cosmic time) has the familiar Euclidean geometry so well known to us from everyday experience and from classical perspective as taught in art class. However, a universe of subcritical or supercritical density has a non-Euclidean geometry—hyperbolic if the density is subcritical, or spherical if the density is supercritical. On small scales these different geometries are much alike.
An ant on the surface of an apple might view its immediate surroundings as quite flat and might experience difficulty in figuring out that the apple is round. Likewise, if the curvature of the universe became apparent only on scales beyond several billion light years, we might be deceived into believing that its geometry is Euclidean. Only on large scales—larger than the so-called curvature scale—do the differences between the geometries become large effects. The following three plates illustrate the difference in perspective between the three possible geometries: a hyperbolic geometry, a Euclidean geometry, and a spherical geometry. In all three cases, space is divided into identical cells, whose edges are indicated by the rods. The balls within the cells are of identical size, and increasing distance is indicated by reddening. In the Euclidean geometry space is divided into cubes and one experiences the ordinary, familiar perspective: the apparent angular size of objects is proportional to the inverse of their distance. The hyperbolic space shown here is tiled with regular dodecahedra. In Euclidean space such a regular tiling is impossible. The size of the cells is of the same order as the curvature scale. Although perspective for nearby objects in hyperbolic space is very nearly identical to that in Euclidean space, the apparent angular size of distant objects falls off much more rapidly, in fact exponentially, as can be seen in the figure. The spherical space shown here is tiled with regular dodecahedra. The geometry of spherical space resembles the surface of the earth, except that here a three-dimensional rather than a two-dimensional sphere is being considered. Perspective in spherical space is peculiar. Increasingly distant objects first become smaller (as in Euclidean space), reach a minimum size, and finally become larger with increasing distance. This behavior is due to the focusing nature of the spherical geometry. [The three figures above were prepared by Stuart Levy of the University of Illinois, Urbana-Champaign and by Tamara Munzer of Stanford University for Scientific American. Copyrighted and reprinted with permission.] What is the Geometry of Our Universe? During the 1980s observations remained sufficiently crude that a universe of critical density was quite plausible. But more recent observations have made it increasingly difficult to reconcile a critical universe with the observations. It is known that in addition to the luminous matter seen in the form of stars, the universe contains a large amount of “dark” matter, in particular in the halos around galaxies. The presence of this dark matter is inferred from its gravitational pull on the surrounding matter. Since the dark matter is distributed in a less clustered manner than the luminous matter, the apparent average density seems to increase as larger and larger scales are probed. For a long time it was hoped that probing sufficiently large scales would uncover a critical density of dark matter. Today it seems unlikely that this hope will ever be realized. It is now possible to probe the average density of the universe on scales large enough to comprise a fair sample of the universe. We present the so-called “cluster baryon fraction” as one illustrative example of the strong evidence in favor of a universe of subcritical density. Rich clusters of galaxies are the largest gravitationally bound systems in the universe.
Although rare, these systems are excellent laboratories for studying the composition of the matter filling the universe. Using nuclear physics one can determine the baryon density of the universe. With the density of baryonic matter known, the total density can be determined by measuring the baryon fraction. The baryonic mass of a cluster can be determined by adding the masses of the constituent galaxies, inferred from their light, to the mass of the hot intracluster gas, which can be determined from X-ray observations of emission from the gas. The total mass can be determined by a variety of methods. The motions of the constituent galaxies allow one to determine the depth of the potential well and hence the total mass of the cluster. X-ray observations allow the same to be done with the gas, and gravitational lensing of background objects by the gravitational field of the cluster, resulting in the distortion in appearance of background galaxies, provides a completely independent check of the total mass. These techniques, along with a number of independent methods, suggest a universe with approximately one third of the critical density. Although a universe of critical density cannot yet be ruled out definitively, the possibility of a critical universe now appears to be quite a long shot. Reconciling a Low Density Universe with Inflation If the universe is in fact of subcritical density, does this require abandoning inflation? If a flat universe really is a “prediction” of inflation, as once claimed, one would have to give up inflation. There exists, however, an escape from this dilemma. Inflation within a single bubble can create a smooth universe with a hyperbolic geometry, just as is required for a universe of subcritical density. Single bubble open inflation, based on ideas of S. Coleman and F. de Luccia and of J.R. Gott, III, in the early 1980s, was further developed in the mid-1990s by M. Bucher, A.S. Goldhaber, and N. Turok and later by M. Sasaki, T. Tanaka, and K. Yamamoto. Inflation smooths the universe by postulating an early epoch of extremely rapid expansion during which whatever irregularities may have existed prior to inflation are virtually erased. In ordinary inflation, as developed by Guth, Linde, Albrecht, and Steinhardt, this smoothing flattens the universe as well, yielding a universe of critical density. In ordinary inflation, a critical universe could in principle be avoided by shortening the amount of inflation, but in that case the smoothness on large scales remains a mystery, causing inflation to lose most of its appeal. The Creation of a Single Bubble Open Universe. The vertical direction indicates time and the horizontal directions are spatial. The value of the inflaton field is constant on the various slices and the colors indicate the cooling down of the universe as one passes into the bubble interior. The bubble is expanding into the surrounding inflating spacetime, which is stuck in the false vacuum. We live inside the bubble interior. In single bubble open inflation there are two epochs of inflation. In inflation the rate of expansion is controlled by a scalar field, known as the inflaton field. The inflaton field tends to roll down the potential hill to the bottom, and as the field descends the rate of expansion of the universe decreases, eventually ending the epoch of inflationary expansion. In open inflation the inflaton field at first remains stuck in a local minimum of the potential.
While the field is stuck there, a first epoch of inflationary expansion takes place during which the universe is smoothed. In fact, during this epoch the symmetry of the spacetime is so large that no particular time direction is preferred over any other. According to classical physics, once stuck in the local minimum the inflaton field never escapes. However, quantum mechanics allows the field to tunnel through the barrier. This tunneling occurs through the nucleation of a bubble that subsequently expands, somewhat like a bubble expanding in a pot of boiling water. The bubble then expands at the speed of light. It cannot have any velocity other than the speed of light, for otherwise a preferred time direction would have to exist. The surfaces on the bubble interior on which the scalar field is constant have a hyperbolic spatial geometry, and these are the surfaces that we inside the bubble later perceive as surfaces of constant cosmic time. As one passes inside the bubble, the interior continues to inflate, creating a universe with a large curvature radius. Further inside the bubble the energy of the inflaton field is converted into ordinary matter and radiation, and the hyperbolic universe continues to expand and cool down. How Can We Test Open Inflation? Microwave Anisotropy as a Function of Angle. Plotted is the level of anisotropy as a function of angle and various measurements thereof. The curves indicate theoretical predictions for various models. The solid curve indicates a universe of critical density whereas the dot-dash-dot-dash curve indicates a low density universe. Note how the position of the first peak shifts to the right to smaller angular scales in the low density universe. The best hope for testing open inflation derives from measuring the geometry of the universe, which can be determined through observing the ripples in the cosmic microwave background radiation. The 3K cosmic microwave background radiation emanates from an epoch approximately three hundred thousand years after the Big Bang, when the universe was approximately one thousandth its present size. At this time the electrons, because of the cooling of the universe, combined with protons and other nuclei to form neutral hydrogen and other elements. Because of this change in composition from a highly ionized plasma to a neutral gas, the formerly opaque universe becomes virtually transparent. The non-uniformities in the microwave background provide a snapshot of the ripples at that time, which later developed into galaxies and the structure that we observe today. Inflation in general, and open inflation on scales much shorter than the curvature scale, imprints essentially scale-free fluctuations on the matter filling the universe. At recombination, however, the physics at that time, believed to be well understood, introduces a preferred scale of known length on which the first acoustic oscillations of the plasma occur. This scale is of known physical size, and from the angle it subtends in the sky today, we can determine the geometry of the universe. More General Open Inflation The above models for open inflation provide a counter-example to the standard lore on inflation, but they rely upon the presence of a local minimum in the potential energy of the inflaton field. At our present level of understanding, we simply cannot tell whether this is what is predicted by a more fundamental theory such as M-theory or supergravity.
But in the model theories for which we can calculate the inflaton potential energy, such local minima do not usually appear. Hawking-Turok Instanton. A bubble universe emanates from a Hawking-Turok instanton. The vertical direction indicates time and the horizontal directions are spatial. E indicates the Euclidean region, where time becomes spacelike, and I is the bubble interior. The heavy line to the left indicates the mild singularity occurring in these solutions. Last year, Hawking and Turok realised that open inflation was in fact much more general, and could even occur in a theory where there is no local minimum in the inflaton potential energy. In fact, they showed that for essentially any potential energy function allowing inflation, an open universe similar to that obtained in the expanding bubble described above could be formed. Hawking and Turok’s calculation was performed in the framework of a proposal for the initial conditions made in 1983 by Hawking and James Hartle. They proposed that the initial condition for the universe should be that it possessed no initial boundary. One can picture the spacetime of an expanding universe as the surface of a cone, placed vertically with its sharp tip down. Time runs up the cone: space runs around it. Time and space end at the sharp tip. The tip is `singular’ in mathematical terms and if this were a model of the universe we would find all our equations break down there. Instead, Hartle and Hawking proposed that the tip be rounded off. This rounding off is only possible if the nature of spacetime changes in the vicinity of the tip. In effect, all directions must become `horizontal’ near the tip, which is to say that all directions are spacelike. This is just what we need to explain how time began. In effect the distinction between space and time is blurred and space is then rounded off. The region where time becomes spacelike is technically termed the instanton region. Instantons are solutions to the equations of general relativity and matter (here, the inflaton field) which have four spacelike directions. Hawking and Turok showed that for essentially any theory which allows inflation, there is a family of instanton solutions each one of which describes the formation of an inflating open universe. The Hawking-Turok instantons do actually possess a singularity, but only at a single point. Unlike the singularity in the standard hot big bang, which is so severe that we cannot predict anything that happened in its presence, the singularity in the Hawking-Turok instantons is so mild that, as for the singularity in the electric field at the centre of a hydrogen atom, it does not affect our ability to make predictions. The beauty of the instanton solutions is that they not only enable one to compute the probability of formation of open universes from first principles, but one can also compute the spectrum of quantum fluctuations present in the open universes, predicted by the no boundary proposal. Turok and DAMTP students Steven Gratton and Thomas Hertog have recently completed these calculations. The calculations have revealed a potential observational signature in the cosmic microwave sky that will, if the universe has less than critical density, enable one to check which form of open inflation (i.e., with or without a local minimum of the potential) was actually involved. S.W. Hawking, A Brief History of Time, New York: Bantam, 1998. M. Bucher and D. Spergel, “Inflation in a Low Density Universe,” Scientific American, January 1999. N. 
Turok, “Before the Big Bang,” in The Daily Telegraph, Saturday, March 14, 1998.
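The relation described above between the mean density, the critical density and the geometry of space can be made concrete with a few lines of code. This is a minimal sketch, not taken from the article; the Hubble constant of 70 km/s/Mpc is an assumed round value, and the "one third of critical" density simply echoes the cluster-baryon-fraction estimate quoted earlier.

import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22           # one megaparsec in metres

def critical_density(h0_km_s_mpc):
    """Critical density rho_c = 3 H0^2 / (8 pi G), in kg/m^3."""
    h0 = h0_km_s_mpc * 1000.0 / MPC_IN_M    # convert H0 to 1/s
    return 3.0 * h0 ** 2 / (8.0 * math.pi * G)

def geometry(omega):
    """Spatial geometry implied by the density parameter Omega = rho / rho_c (matter only, no cosmological constant)."""
    if omega > 1.0:
        return "spherical (supercritical)"
    if omega < 1.0:
        return "hyperbolic (subcritical)"
    return "Euclidean (critical)"

rho_c = critical_density(70.0)          # assumed H0
omega = (rho_c / 3.0) / rho_c           # "about one third of the critical density"
print("critical density ~ %.1e kg/m^3" % rho_c)
print("Omega ~ %.2f ->" % omega, geometry(omega))

With these assumptions the critical density comes out near 9e-27 kg/m^3 (roughly a few hydrogen atoms per cubic metre), and an Omega of about one third implies the hyperbolic geometry discussed above.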
Seeing beneath the soil There is much more to archaeology than just digging! We have already seen some of the methods archaeologists use to interpret sites without excavation - for example, we looked at the role of aerial photography. Archaeological geophysical survey can play a vital role in research. It has the tremendous advantage of being non-intrusive: it does not dig into sites and does not damage or destroy buried evidence. Geophysical survey comes in several forms, each of which has its own advantages and disadvantages. A wise and well-resourced archaeologist will use a combination of methods to get the best results. We often like to refer to these methods as ‘Seeing beneath the soil’, but this is somewhat misleading. The results have to be carefully processed to reveal anomalies (elements that deviate from the broad background readings) and these in turn have to be interpreted. Computer power, appropriate software and the expertise of the archaeological geophysicist are all therefore necessary to be able to turn the raw data into something that can be used for research purposes – and even when all those are available, the impression that emerges can sometimes prove very misleading. The three most common methods used in the study of Roman frontier landscapes are: 1. Resistivity survey Newcastle MA student using RM 15 at Maryport © J. Alan Biggins Timescape Surveys A resistivity meter detects resistance to the flow of an electric current through the soil between two or more probes. Some archaeological features will be marked by stronger or weaker resistance than the earth around them. Thus, for example, a stone wall might be characterised by strong resistance, and an old waterlogged ditch by low resistance. 2. Ground Penetrating Radar (GPR) Newcastle MA student using Ground Penetrating Radar at Maryport © J. Alan Biggins Timescape Surveys GPR uses electromagnetic waves to record buried features. Radar data can be particularly useful when a 3D image of conditions beneath the ground is required. But radar operators can also process their data to provide ‘time slices’, flat, plan-like images of anomalies found at specified depths (the depth is calibrated from the time it takes the electromagnetic wave to travel to a feature and back). 3. Magnetometry The Bartington 601-2 Magnetometer at work in the Maryport Playing Fields © J. Alan Biggins Timescape Surveys A magnetometer can detect small changes in the magnetic field beneath its sensors. Magnetometry can be particularly good for locating burnt or fired features, but the disturbance of the soil involved in the creation of pits, ditches and other similar features can also leave detectable variations in the local magnetic field. In the next two steps we would like you to have a look at an example of a magnetometry survey. The one we are working with is taken from work by Drs Alan Biggins and David Taylor of Timescape Surveys for the Senhouse Museum Trust. © Newcastle University
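The depth calibration mentioned for ground penetrating radar can be illustrated with a short calculation: the instrument records the two-way travel time of the pulse, and depth follows from an assumed wave velocity in the ground. The sketch below is illustrative and not part of the survey described above; the relative permittivity of 9 is an assumed value for damp soil, and real surveys calibrate velocity against targets of known depth.

C_M_PER_NS = 0.2998           # speed of light in vacuum, metres per nanosecond

def gpr_depth(two_way_time_ns, relative_permittivity=9.0):
    """Approximate reflector depth (m) from two-way travel time (ns), assuming velocity v = c / sqrt(epsilon_r)."""
    velocity = C_M_PER_NS / relative_permittivity ** 0.5   # metres per nanosecond
    return velocity * two_way_time_ns / 2.0                # halve it: the pulse travels down and back

for t in (10.0, 20.0, 40.0):
    print("%5.1f ns  ->  ~%.2f m" % (t, gpr_depth(t)))

Under these assumptions a reflection arriving 20 nanoseconds after the pulse corresponds to a feature roughly one metre down; choosing a different permittivity shifts every ‘time slice’ to a different depth, which is why the calibration matters.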
MAC and IP Address A MAC (Media Access Control) address is a hardware address for a networking device. For example, an Ethernet port in a laptop would have its own MAC address, and the wireless card in the same laptop will have its own MAC address as well. The format of a MAC address is a sequence of six two-digit hexadecimal numbers separated by colons; for example, 00:2E:71:CF:A1:98 The format of an IP address is a 32-bit numeric address written as four numbers separated by periods. Each number can be zero to 255. For example, 18.104.22.168 could be an IP address. The same laptop mentioned in the example above would have a single local IP address that is assigned by your router. The router assigns each local networked device an IP address in one of the private ranges: - 10.0.0.0 through 10.255.255.255 with subnet mask 255.0.0.0 - 172.16.0.0 through 172.31.255.255 with subnet mask 255.240.0.0 - 192.168.0.0 through 192.168.255.255 with subnet mask 255.255.0.0 The last one on the list (192.168.0.0 through 192.168.255.255) is the most common. The protocol used to assign these numbers is called Dynamic Host Configuration Protocol (DHCP). No two devices can have the same IP address or a conflict will occur when the router tries to send out data. Furthermore, your cable or DSL modem will assign a global IP address to your router. Therefore, any computer out on the internet will see one of the computers located on your LAN as the router’s global IP address and not the local IP address. A good analogy for MAC and IP addresses is to think of an apartment building’s street address as the IP address and the individual apartment numbers as MAC addresses. In 1997, the Institute of Electrical and Electronics Engineers (IEEE) created the first WLAN (Wireless Local Area Network) standard. They called it 802.11. At a maximum bandwidth of 2 Mb/s, the original 802.11 protocol was pretty slow. The following formats emerged as technology developed: - 802.11n – the latest IEEE wireless networking standard. Although the 802.11n standard is not finalized, it is in the second draft stage and no major changes are expected. 802.11n increases transmission speeds to a maximum of 248 Mb/s and also increases the range to a maximum of 230 feet. The increased transmission speeds and range are attributed to the additional antennas and the use of MIMO (Multiple Input Multiple Output) technology. - 802.11g – is designed to take the best qualities of 802.11a (bandwidth) and 802.11b (range) and put them together. Therefore, 802.11g has a maximum bandwidth of 54 Mb/s (without extra manufacturer’s technology such as Air Plus Extreme), and operates in the 2.4 GHz frequency range. Also, 802.11g is backwards compatible with 802.11b. - 802.11b – has a maximum bandwidth of 11 Mb/s (without extra manufacturer’s technology such as Air Plus Extreme), and operates in the 2.4 GHz frequency range. Since this is an unregulated frequency range, interference can come from other appliances such as cordless phones and microwaves. However, 802.11b does have a long distance range. - 802.11a – has a maximum bandwidth of 54 Mb/s (without extra manufacturer’s technology such as Air Plus Extreme), and operates in the 5 GHz frequency range. The higher frequency allows more data throughput, but makes the signal more susceptible to loss from interference with walls and other objects. Therefore, the distance range of 802.11a is less than that of 802.11b. - Bluetooth – is an alternative wireless technology to 802.11, but is mostly used for peripherals such as PDAs or wireless keyboards.
It has a short range of only 30 feet and low bandwidth of 2 Mb/s.
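The private address ranges described above are easy to check programmatically. The short sketch below uses Python's standard ipaddress module, which knows the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16); the sample addresses are arbitrary examples.

import ipaddress

def describe(addr):
    """Say whether an address belongs to a private (LAN) range or is globally routable."""
    ip = ipaddress.ip_address(addr)
    kind = "private (local LAN)" if ip.is_private else "public (internet)"
    return "%s is %s" % (ip, kind)

for a in ("192.168.1.42", "10.0.0.7", "172.20.5.1", "8.8.8.8"):
    print(describe(a))

The first three addresses fall inside the ranges a home router hands out via DHCP, while 8.8.8.8 is a public address of the kind your modem's global IP would be.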
An "invasive plant" is usually defined as a non-native (or alien) species whose introduction causes or is likely to cause economic or environmental harm or harm to human and wildlife health. The worst invasives are often called "noxious" weeds or pests. Invasive plants are introduced by many pathways. They can be deliberately planted in gardens or along roadsides and spread into surrounding habitats. They can be contaminants in crop seeds imported from other countries. Aquatic plant pests are often introduced into wetlands when people dump the contents of an aquarium into a pond. In several cases, seeds have been accidentally introduced from abroad when plant parts have been used as packing materials. What can you do about invasive plants? Start by using native plants in your garden. Monitor exotic plants to be sure that they don't spread beyond your yard. Avoid non-native plants that produce fruits that are eaten, and thus widely dispersed, by wildlife. Check the Invasive Plant Finder to get a list of invasives for your state, and avoid introducing such plants into your local area.
‘Deafness could be reversed using stem cells’ Treatment that can allow people to hear again may be ready in 10 years. Scientists believe they are ‘on the brink’ of a cure for hereditary deafness using stem cells. Researchers have grown new human ear hair cells, which can be used to replace faulty ones in sufferers of genetic deafness. They hope a treatment will be available within ten years. The new research was published in the journal Stem Cell Reports. Stem cells are a basic type of cell that can change into another type of more specialised cell through a process known as differentiation. Stem cells have been the focus of much medical research in recent decades because they can be used to grow almost any type of cell. Human inner ear hair cells are found in the cochlea – the spiral part of the inner ear – and form a vital component of our ability to hear sound. If these ‘cochlea cells’ are genetically mutated, patients can be born with severe loss of hearing. Those born this way are currently treated with an artificial cochlear implant or ‘hearing aid’, which helps transfer sound to the patient’s hearing nerves. But now a team has engineered and grown stem cells that don’t carry any deafness mutation. They hope to one day place these stem cells surgically into the ear, where they will develop and function as normal, non-faulty ear hair cells. The work, which is being carried out at Juntendo University in Tokyo, Japan, aims to correct a mutation in a gene called Gap Junction Beta 2. This gene accounts for deafness or hearing loss in one in every thousand children. In some parts of the world, mutations of the cochlea cells are responsible for as many as half the cases of genetic deafness. Stem cells are a basic type of cell that can change into another type of specialised cell – such as a muscle cell or a cochlea cell – through a process known as differentiation. Many scientists believe stem cells could offer a new solution to genetic deafness by restoring the normal function of the ear hair cells and, as a result, the patient’s hearing. Humans are born with about 11,000 hair cells in each ear that are vital to transmitting sound from the ear to the brain. As the body ages, it experiences a slow progression of hearing loss due to the death of these cells from excessive noise, exposure to certain drugs, and ageing. Currently, there are no cures for most types of hearing loss. The new research follows work from Dr Sarah Boddy of the University of Sheffield, who has been investigating the potential of human bone marrow stem cells as a way to reverse hearing loss. Boddy’s team has shown that human bone marrow stem cells can be converted into ear-like cells after exposure to a cocktail of natural chemicals produced by foetal cochlear cells. About a third of 65-year-olds say they are hearing impaired, a number that rises to half by age 75.
Peace and Conflict The United Nations (the UN), founded in 1945 after WW2, has 193 member states, not including Vatican City. Why is it important for world peace? 1. imposing sanctions on countries threatening world peace. 2. authorising the use of force by member states to stop an aggressor. 3. sending UN peacekeepers to: prevent the outbreak of conflict or its spill-over across borders; stabilise a conflict situation after a ceasefire & create an environment for the parties to reach a lasting peace agreement; assist in putting peace agreements into practice; lead states to stable government based on democratic principles & economic development after they have been in conflict. How world peace is promoted by the UN & religious organisations -> War crimes: the UN keeps peace through laws that must be upheld and prosecutes those who break them. War crimes include murder/ill-treatment/deportation of civilians/hostages, wanton destruction of villages/towns/cities, etc. How the UN dealt with the Kosovo problem: the UN bombed Serbia to defeat the Serbian army, then sent forces to prevent the Serbian army going into Kosovo (and the reverse), sent troops to protect towns with Serbs in Kosovo (and the reverse), and sent in civilian officials to help run the government and create political parties. After the UN gave Kosovo political parties, an election was held in 2008. Religion & World Peace: even though religious people fight in wars, all religions believe in world peace. How they work towards peace: 1. organising public debates on the horrors of war & encouraging followers to support political parties opposed to war; could bring world peace since the more people learn about war, the more likely they are to vote for peace. 2. making public statements about war; could bring peace since public opinion may be changed & governments take notice of public opinion. 3. organising & attending inter-faith conferences to help religions work together to promote peace; many conflicts are caused by religious disputes, so if religions work together peace is easier to obtain. 4. working for economic justice and global recognition of human rights, which could lead to wars being stopped before they start; wars also stop because religions feel their members are treated right. Why wars occur: 1. Economics: a. some think Iraq was invaded because the West wanted to make sure it would have access to enormous oil reserves. b. economic problems in one country lead to conflict in another. 2. Religion: a. differences within a religion, where one religious group attacks another for having different beliefs. b. the majority of an area is one religion but the country as a whole is a different religion. c. one country might feel another country is treating the followers of its religion badly, so it invades to protect them. 3. Nationalism and ethnicity: a. can lead to genocide. b. leads to minority groups fighting civil wars to establish an independent state. c. tensions between ethnic groups often arise in countries which have been artificially created as a result of colonialism. d. one form of nationalism is the belief that each separate ethnic group should have its own country because of its different culture. e. another form of nationalism is to insist that minority ethnic groups should be removed from the country so that the nation contains only one ethnic group. Nature and importance of the theory of just war - developed…
This section is from the book "Cyclopedia Of Architecture, Carpentry, And Building", by James C. et al. Also available from Amazon: Cyclopedia Of Architecture, Carpentry And Building. Direct steam heating is used in all classes of buildings, both by itself and in combination with other systems. The first cost of installation is greater than for furnace heating but the amount of fuel required is less, as no outside air-supply is necessary. If used for warming hospitals, schoolhouses or other buildings where ventilation is desired, it must be supplemented by some other means for providing warm fresh air. A system of direct steam heating consists of a furnace and boiler for the combustion of fuel and the generation of steam; a system of pipes for conveying the steam to the radiators and for returning the water of condensation to the boiler; and radiators or coils placed in the rooms for diffusing the heat. Various types of boilers are used, depending upon the size and kind of building to be warmed. Some form of cast iron sectional boiler is commonly used for dwelling houses, while the tubular or water-tube boiler is more usually employed in larger buildings. Where the boiler is used for heating purposes only, a low steam pressure of from 2 to 10 pounds is carried and the condensation flows back by gravity to the boiler which is placed below the lowest radiator. When, for any reason, a higher pressure is required, the steam for the heating system is made to pass through a reducing valve and the condensation is returned to the boiler by means of a pump or return trap. The methods of making the pipe connections between the boiler and radiators vary for different conditions and in different systems of heating. These will be taken up later under the head of design. Direct radiating surface is made up in different ways: Fig.2 shows a common form of cast iron sectional radiator; these can be made up in any size depending upon the height and number of sections used. Fig. 3 is made up of vertical wrought iron pipes screwed into a cast iron base and is a very efficient form. Fig. 4 shows a type of cast iron wall radiator which is often used where it is desired to keep the floor free from obstruction. Fig. 5 is a special form of dining-room radiator provided with a warming closet. Wall and ceiling coils of wrought iron pipe are often used in school rooms, halls and shops or where the appearance is not objectionable.
In this section we clarify several concepts that we will come across throughout our text. Nominal and Real Values Nominal values, such as nominal prices, nominal earnings, nominal wages, nominal interest rates, and nominal Gross Domestic Product, refer to the actual dollar value of these variables. A person who earns $10 per hour in today’s dollars earns a nominal wage of $10. Real values are values in comparison, or relative, to price changes over time. You may earn $10 this year and you may earn $10 five years from now. Your nominal income remains the same, but $10 five years from now is not worth as much as $10 now. The real value of $10 five years from now is less than $10 in today’s dollars. We also distinguish between real and nominal when we discuss interest rates. Real interest rates are nominal rates adjusted for inflation. If you pay your bank 12% in nominal interest, you are only paying approximately 2% in real interest if prices are rising by 10%. Positive and Normative Economic Statements Positive economic statements are facts, or statements, which can be proven. Normative economic statements cannot be proven. They are opinions or value judgments. A positive statement does not have to be a true statement. The statement could be proven false, in which case, it is a false positive statement. Predictions, such as “The New York Mets will win the World Series next year,” or “Unemployment will fall below 5% next month,” are neither positive nor normative statements. They are predictions unrelated to facts or value judgments. Examples of positive economic statements are - The federal government experienced a budget surplus this past year (this is a false positive statement, but, by definition, a positive economic statement). - When the value of the dollar falls, Japanese products imported into the United States become more expensive (this is a true positive statement). - Legalizing drugs will lower the price of drugs and reduce the drug profits that illegal drug dealers make (this is a true positive statement). - The United States does not have a federally mandated minimum wage (this is a false positive statement). Examples of normative economic statements are - The government should raise taxes and lower government spending to reduce the budget deficit. - We need to try to lower the value of the dollar in order to discourage the imports of Japanese goods into this country. - Our government should legalize the use of drugs in this country. - The minimum wage should be at least $15.00. Ceteris Paribus This Latin term means “if no other things in the economy change.” For example, when college tuition increases, our chapter on supply and demand predicts that student enrollment (the number of course sign-ups) will decrease. Economists, indeed, predict this with the condition of “ceteris paribus,” or if no other things in the economy change. But if students’ (or their parents’ or guardians’) real incomes increase, then college enrollment may increase, despite the tuition increase. Tuition increases are still predicted to decrease college enrollment, but in this case, other things in the economy (incomes) did change, and the “ceteris paribus” condition was violated. The Fallacy of Composition You are subject to the fallacy of composition if you state that what is good for one is necessarily good for the entire group.
If a college has a shortage of parking spaces for its students, it may be beneficial for a number of students to arrive very early and secure a parking space. However, if everyone arrives very early, the parking problem remains an issue. The Broken Window Fallacy The economist Henry Hazlitt, in his book Economics in One Lesson, provides another good example of the fallacy of composition. In Chapter 2, the “Broken Window Fallacy,” he describes that when a person throws a brick through a baker’s window, it may seem that this stimulates the economy, because it provides a job for a glazier (window repair person). According to Hazlitt, the fallacy occurs when we do not take into account the additional expenditures due to the replacement of the window. This expense lowers the baker’s spending on other goods and services. If the baker would have bought a suit from the tailor without the expense of repairing his window, then the tailor loses a job compared to if the window had not been broken. So if the window is broken, the glazier gains a job, but the tailor loses one. Overall, there is no gain in employment if someone throws a brick through a window. Additionally, the baker loses, because he is without a suit compared to if the window had not been broken. Analogously, hurricanes, floods, and wartime activities do not provide a net gain in employment. They create jobs in one area of the economy, but take away jobs in another. Overall, they destroy wealth and are harmful to the economy. The following section, written by the late Bob Russell (pictured), former Journalist, Writer, and Professor of English at Howard Community College, explains the Broken Window Fallacy in more detail. “It is difficult to predict the impact of serious hurricanes on the U.S. economy, but there are a few things we can conclude. A lot of money and activity that might ordinarily travel to the hurricane affected areas will go to other areas of the country or the world. For instance, just consider the impact that these storms have had on the conference and meeting industry, vacations, sporting events, etc. Many of these expenses are being diverted to other locations. On the other hand, lots of government spending, insurance claim payments, and private construction money go to the hurricane-affected areas, mostly to cover reconstruction and rebuilding expenses. In 2005 all of our pocketbooks were affected by Katrina and Rita –– especially at the gas pumps. These increased costs slowed the economy a bit. Fuel, heating, and transportation costs all rose, causing a reduction in output. Of course, reconstruction of the devastated areas provided a bit of an uplift to the construction industry and supply lines of repair items, wood and other building supplies, furniture, etc. Dollars spent on the reconstruction effort is money that will have to be diverted from money which would have been spent in other areas and with other goals. This line of thinking provides us with an opportunity to talk a bit about the “Broken Window Fallacy,” a fascinating economic theory. It goes like this: If someone throws a stone through a shop window, the owner needs to fix it. The cost to do so is, hypothetically, $250, selected to fit with Hazlitt’s example below. The repair puts people to work and increases total output. Since this creates jobs, might we do well to break lots of windows and repair them? Most folks think this is nonsense since, although it would employ labor, there would be no benefit to the society at large. 
Yet there are many similar schemes, promoted by politicians and supported by the general public in the name of JOBS. Long ago, this fallacy was exposed by the French economist Frederic Bastiat in an essay entitled “What is seen and what is not seen.” Bastiat teaches us to understand the economic reality beneath the superficial appearance of everyday economic life. What is seen is the broken window repaired, the workers working and the money they spend. What is not seen is that these workers and resources would have been employed in something else if not for the broken window. What ultimately benefits society is not jobs, but goods. In this instance, the glass store gains, but the broken window store owner loses (she probably would have spent the money on something else) – and the person that owns the shop that sells what she would have bought has a loss. According to the late Henry Hazlitt in Economics in One Lesson, “Instead of [the shopkeeper] having a window and $250, he now has merely a window. Or, as he was planning to buy [a] suit that very afternoon, instead of having both a window and a suit he must be content with the window or the suit. If we think of him as a part of the community, the community has lost a new suit that might otherwise have come into being, and is just that much poorer.” The Broken Window Fallacy endures because of the difficulty of seeing what the shopkeeper would have done. We can see the gain that goes to the glass shop. We can see the new pane of glass in the front of the store. However, we cannot see what the shopkeeper would have done with the money if he had been allowed to keep it, precisely because he wasn’t allowed to keep it. We cannot see the new suit foregone. Since the winners are easily identifiable and the losers are not, it’s easy to conclude that there are only winners and the economy as a whole is better off. Overall, the economy will suffer due to the hurricanes, not benefit as some media pundits have suggested, although the intensity and duration of the suffering is up for grabs.” From one of Bob Russell’s newsletters (reprinted with permission). For a video explanation of the Broken Window Fallacy, please watch the following: What is Good for One Industry is not Necessarily Good for the Country Let’s look at the farming industry as an example of the fallacy of composition. Currently, the United States government (and governments of many other industrialized countries) supports farmers in the form of direct subsidies and other programs. These subsidies benefit most farmers and seem to be beneficial for the farming industry. Many people believe that what is good for the farming industry must automatically also be good for the entire country. It is certainly possible that this is the case. However, to automatically conclude this is to suffer from the fallacy of composition. Farm subsidies and other farm support programs costs the government money. This increases taxes and hurts citizens. Furthermore, some farm programs (price supports) increase the price of certain agricultural products to consumers. Some economists also claim that the subsidies to farmers do not even benefit farmers themselves because it makes them weaker and less competitive in the long run. The subsidies may help the farmers in the short run, but not in the long run. For more information about farm programs and their economic effects, see our Microeconomics text, Unit 6. Does a Demand Increase Stimulate the Economy? 
George Reisman, in his book Capitalism, discusses another example of the fallacy of composition. He states that an increase in the demand for one product causes a price increase for that product. Assuming the cost of making the product does not increase, the product’s profitability increases. Does this mean that if aggregate demand (demand for all products) increases, profitability of all products increases? Well, it depends. If a nation’s total nominal income is constant, it is actually not possible for demand of all products to increase. Demand for one product may increase, but then the demand for other products must, mathematically speaking, decrease. So prices of some products increase, but prices of others decrease. The only way for demand of all products to increase is if total nominal income increases. This is only possible if the nation’s total money in circulation increases. This is possible if the nation increases its money supply. But in this case, prices increase, and if profits increase, it means merely that nominal profits increase and not real profits. An important implication of this realization is that if the government decides to “stimulate” the economy by encouraging people to spend more on consumer goods (by printing more money, or by distributing money through social programs, creating public works jobs), it does not really increase total aggregate demand. The demand for one particular good or category of goods (those bought by the elderly, for example, in the case of higher Social Security paychecks for the elderly) may rise, but the demand for other goods will have to fall. Nominal (the monetary amount of) spending may increase, but real spending will not. The only way to increase real profits is to increase productivity. This lowers costs and decreases prices, which allows increases in real profits and real demand. The Fallacy of Cause and Effect Cause and Effect Fallacy Because A happens before B, A must necessarily be the cause of B. It is tempting to conclude that if one event occurs right before another, the first event must have caused the second event. Let’s say your basketball team wins its first three games while you are out with an injury. The fourth game, you are back, and your team loses. You conclude that it is your fault. Of course, your presence could have something to do with it, but you cannot automatically conclude this. Other variables may have played a role: the game conditions, the referees, the opponent, your other teammates’ performance that day, the coach’s performance (even though the coach is always right :), or bad luck. Similarly, in economics, people sometimes conclude that if one event follows another, the other must have caused the one. The period following World War II has seen a rising standard of living in industrialized countries around the world. This period has also been accompanied by much greater government involvement in these countries. Can we conclude that greater government involvement has caused higher standards of living? It may have contributed, but it would be a fallacy to automatically conclude this. We must also look at all other variables, such as technology changes and political and socio-economic changes.
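The nominal-versus-real distinction introduced at the start of this section can also be shown numerically. The figures below are made-up illustrations: a price index rising from 100 to about 116 over five years, and the 12% nominal / 10% inflation example from the text; the 2% real rate quoted there is the usual quick approximation, while the exact figure is slightly lower.

def in_todays_dollars(nominal_future, cpi_today, cpi_future):
    """Express a future nominal amount in today's purchasing power."""
    return nominal_future * cpi_today / cpi_future

def real_interest_rate(nominal_rate, inflation_rate):
    """Exact real rate: (1 + nominal) / (1 + inflation) - 1."""
    return (1.0 + nominal_rate) / (1.0 + inflation_rate) - 1.0

print("$10 five years from now is worth about $%.2f today" % in_todays_dollars(10.0, 100.0, 116.0))
print("12%% nominal with 10%% inflation -> real rate of %.2f%%" % (100 * real_interest_rate(0.12, 0.10)))

With these assumed numbers, $10 received in five years buys about what $8.62 buys today, and the exact real interest rate works out to about 1.82% rather than the rounded 2%.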
What are tsunamis? The word “tsunami” comprises the Japanese words “tsu” (meaning harbour) and “nami” (meaning wave). A tsunami is a series of enormous waves created by an underwater disturbance usually associated with earthquakes occurring below or near the ocean. Volcanic eruptions, submarine landslides, and coastal rock falls can also generate a tsunami, as can a large asteroid impacting the ocean. Tsunamis originate from a vertical movement of the sea floor with the consequent displacement of a water mass. In the deep ocean, waves travel at a speed of about 800 km/h and are only a few tens of centimetres high. In the ocean, waves are normally generated by wind and can be described through their amplitude, which is the height of the wave, and their wavelength, which is the distance from one wave crest to the next. The wavelength is a factor which distinguishes tsunamis from wind waves: a tsunami wavelength is considerably longer than a wind wave wavelength; it can be more than 200 km long. The wavelength is closely linked to the sea depth. As the sea depth decreases, the wavelength decreases. At the same time, the height of the wave increases. Near the shoreline the wave can assume the shape of a wall, up to tens of metres high, with a massive destructive power. The speed of a tsunami wave can be expressed simply by the formula v = √(g·h), where g is the acceleration of gravity (9.8 m/s²) and h is the depth of the sea expressed in metres. For example, at an ocean depth of 4000 m the waves travel at about 700 km/h, i.e. the speed of a plane. From the area where the tsunami originates, waves travel outward in all directions. As the wave approaches the shore it builds in height. The size of the wave is also influenced by the topography of the coastline. There may be more than one wave and the succeeding one may be larger than the one before. That is why a small tsunami at one beach can be a giant wave a few kilometres away. Earthquake-induced movement of the ocean floor most often generates tsunamis. If a major earthquake or landslide occurs close to shore, the first wave in a series could reach the beach in a few minutes, even before a warning is issued. Areas are at greater risk if they are less than 7 metres above sea level and within less than 2 km of the shoreline. Most deaths caused by tsunamis are due to drowning. Associated risks include flooding, contamination of drinking water, fires from ruptured tanks or gas lines, and damage to key infrastructure. What causes tsunamis? Different kinds of events can produce a tsunami. The least probable, but still possible, cause is the impact of an asteroid or a meteorite in the sea, while the more frequent causes are underwater disturbances: a volcanic collapse, a landslide or an earthquake. Earthquakes can be generated by movements along fault zones associated with plate boundaries. Lithospheric plates, which cover the entire surface of the earth and contain both the continents and the seafloor, move relative to each other at rates of up to ten cm/year. The region where two plates come in contact is a plate boundary, and the way in which one plate moves relative to another determines the type of boundary: spreading, where two plates move away from each other; subduction, where two plates move towards each other and one slides beneath the other; and transform, where two plates slide horizontally past each other.
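The shallow-water speed formula quoted above is easy to sanity-check. The short sketch below evaluates v = √(g·h) for a few depths and converts the result to km/h; the depths are example values.

import math

G = 9.8   # acceleration of gravity, m/s^2

def tsunami_speed_kmh(depth_m):
    """Tsunami propagation speed in km/h for a given sea depth in metres, using v = sqrt(g*h)."""
    return math.sqrt(G * depth_m) * 3.6   # convert m/s to km/h

for depth in (4000, 1000, 100, 10):
    print("depth %5d m  ->  ~%4.0f km/h" % (depth, tsunami_speed_kmh(depth)))

At 4000 m this gives roughly 710 km/h, matching the “about 700 km/h” figure in the text, and it also shows how sharply the wave slows (and therefore steepens) as the water shallows near the coast.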
Most strong earthquakes occur in subduction zones, where an ocean plate slides under a continental plate or another, younger ocean plate. Not all earthquakes generate tsunamis. To generate a tsunami, the fault where the earthquake occurs must be underneath or near the ocean, and must cause vertical movement of the seafloor (up to several metres) over a large area (up to a hundred thousand square kilometres). Shallow-focus earthquakes (depth less than 70 km) along subduction zones are responsible for the most destructive tsunamis. The amount of vertical and horizontal motion of the seafloor, the area over which it occurs, the simultaneous occurrence of slumping of underwater sediments due to the shaking, and the efficiency with which energy is transferred from the earth’s crust to the ocean water are all part of the tsunami generation mechanism. Rockfalls, icefalls, and underwater (submarine) landslides or slumps can also displace enough water to create a tsunami. More often than is commonly realised, submarine landslides are themselves triggered by earthquakes, large and small, thereby strengthening the force of an earthquake-induced tsunami. A notable example of a landslide-induced tsunami occurred in Southern France in 1979, when the movement of a significant amount of earth for the construction of an airport near Nice triggered an underwater landslide, which resulted in destructive tsunami waves hitting the harbour of Antibes. Tsunamis caused by extraterrestrial collisions (asteroids, meteors) are an extremely rare occurrence. Although no meteor- or asteroid-induced tsunami has been recorded in recent history, scientists realise that if these celestial bodies should strike the ocean, a large volume of water would undoubtedly be displaced, causing a tsunami. Scientists have calculated that if a moderately large asteroid, 5-6 km in diameter, should strike the middle of a large ocean basin such as the Atlantic Ocean, it would produce a tsunami that would travel all the way to the Appalachian Mountains in the upper two-thirds of the United States. On both sides of the Atlantic, coastal cities would be washed out by such a tsunami. An asteroid 5-6 kilometres in diameter impacting between the Hawaiian Islands and the West Coast of North America would produce a tsunami which would wash out the coastal cities on the west coasts of Canada, the U.S. and Mexico, and would cover most of the inhabited coastal areas of the Hawaiian Islands. Although relatively infrequent, violent volcanic eruptions also represent impulsive disturbances which can displace a great volume of water and generate extremely destructive tsunami waves in the immediate source area. Volcanic disturbances can generate waves by the sudden displacement of water caused by a volcanic explosion, by a volcano’s slope failure, or, more likely, by a phreatomagmatic explosion and collapse and/or engulfment of the volcanic magmatic chambers. The majority of tsunamis occur in the Pacific Ocean, around the “Ring of Fire”, the zone encircling the Pacific Basin. This periphery has been dubbed the “Ring of Fire” because of the extraordinarily high number of active volcanoes and the intense seismic activity located in the region. Since 1819, over 40 tsunamis have struck the Hawaiian Islands. One of the largest and most destructive tsunamis ever recorded was generated on August 26, 1883 after the explosion and collapse of the volcano Krakatoa (Krakatau), in Indonesia.
This explosion generated waves that reached 135 feet and destroyed coastal towns and villages along the Sunda Strait on both the islands of Java and Sumatra, killing 36,417 people.
Archaeology, anthropology, history, paleontology.... Foreign words to many young people? Perhaps, or maybe they are just words that they must add to their vocabulary in order to pass a test or a class. That is not the motivation or interpretation most educators would want for those words. It would be far more desirable if the words could come alive, if the learner could participate or become actively involved with the principles of these areas within a context that has meaning for them. Such an opportunity is available through the use of "Exploring Oregon's Past: A Teacher's Activity Guide for Fourth Through Seventh Grades." This activity guide provides background information and activities in Oregon prehistory, history, archaeology, anthropology, and paleontology. It provides an opportunity for young people to do some of the things that researchers do while gathering and interpreting data about Oregon's past. It allows students to appreciate some of the challenges earlier Oregonians had to overcome in order to survive and succeed. These skills and experiences can be transferred to new settings with new conditions for the learner. It is a hands-on, integrated, student-oriented resource for teachers. I would recommend the use of the activity guide for teachers in history, science, language arts, art, and mathematics classes. Sections of the guide can be extracted and used individually or as part of a larger curriculum. I hope the teachers that use the activity guide feel free to use it as it best fits their needs for their students. Have fun, learn, grow, and enjoy! Helen E. Woods, PhD, School of Education, Western Oregon State College
“I believe that before we evolved language, our communication was more musical than it is now,” says cognitive archaeologist Steven Mithen at the University of Reading in England, author of “The Singing Neanderthals: The Origins of Music, Language, Mind and Body.” Unlike Darwin, Dr. Mithen is convinced that music was crucial to human survival. “Using music to express emotion or build a sense of group belonging would have been essential to the function of human society, especially before language evolved prior to modern humans.” The discovery of the world’s oldest musical instrument—a 35,000-year-old flute made from a wing bone—highlights a prehistoric moment when the mind learned to soar on flights of melody and rhythm. Researchers announced last week in Nature that they had unearthed the flute from the Ice Age rubbish of cave bear bones, reindeer horn and stone tools discarded in a cavern called Hohle Fels near Ulm, Germany. No one knows the melodies that were played in this primordial concert hall, which sheltered the humans who first settled Europe. The delicate wind instrument, though, offers evidence of how music pervaded daily life eons before iTunes, satellite radio and Muzak. …the ability to create musical instruments reflects a profound mental awakening that gave these early humans a crucial edge over the more primitive Neanderthal people who lived in the same epoch. “The expansion of modern humans hinged in part on new ways of storing symbolic information that seemed to confer an advantage on these people in competition with Neanderthals,” Dr. Conard says. To Dr. Patel, music-making was a conscious innovation, like the invention of writing or the control of fire. “It is something that we humans invented that then transformed human life,” he says. “It has a profound impact on how individual humans experience the world, by connecting us through space and time to other minds.” If even something as central to our daily lives as music is, in fact the result of technological innovation over time and if technology can, as with music, change the way we think, communicate and build communities, I can’t help but wonder: What will our descendants think thousands of years from now as they look back on the rise of today’s web and social networking technologies? If nothing else, this sense of perspective should make us better appreciate how important the development of communications media really is to the future of the human species. Impossible as it is to predict how that staggeringly complex process will unfold—e.g., will Google make us smarter or stupider?—I’ll just humbly suggest that, rather than try to tinker with the future course of the species by trying to fine-tune public policy today to produce the “right” outcome, we would do better to follow the same principle that has guided the medical profession for 24 centuries: First, Do No Harm. In other words, if we don’t know what the effects of regulatory intervention in new media will be in the long-term, we’d be better off to leave well enough alone.
Leukemia is a cancer of the blood cells. Leukemia begins when normal blood cells change and grow uncontrollably. Blood cells are made in the bone marrow, the spongy tissue inside the larger bones in the body. There are different types of blood cells, including red blood cells that carry oxygen throughout the body, white cells that fight infection, and platelets that help the blood to clot. Types of leukemia are named after the specific blood cell that becomes cancerous, such as the lymphoid cells, which are white blood cells, or the myeloid cells, which are cells of the bone marrow that develop into cells that fight bacterial infections. There are four main types of leukemia in adults: - Acute lymphocytic leukemia (ALL) - Chronic lymphocytic leukemia (CLL) - Acute myeloid leukemia (AML) - Chronic myeloid leukemia (CML) About PLL and HCL There are other, less common types of leukemia, but they are generally subcategories of one of the four main categories. This section focuses on prolymphocytic leukemia (PLL) and hairy cell leukemia (HCL), both of which are chronic B-cell leukemias. B cells are a specific type of lymphocyte that normally make antibodies for the immune system. In PLL, many immature lymphocytes, or prolymphocytes, are found in the blood. This type of leukemia may occur by itself, together with CLL, or CLL may turn into PLL. PLL tends to worsen more quickly than CLL. HCL is a slow-growing form of leukemia. It is called “hairy cell” because the abnormal lymphocytes have projections that look like hair when seen under a microscope. As these cells multiply, they build up in the bone marrow, blood, and spleen. Because these lymphocytes are abnormal, they do not work normally to fight disease and infection, and eventually may crowd out the normal cells. Treatment is usually very effective for HCL.
Tomato Pinworm - Biology and Control Strategies for Greenhouse Tomato Crops Table of Contents The tomato pinworm (TPW) [Keiferia lycopersicella (Walsingham)] is primarily a pest in tropical and subtropical areas of the world. In the United States, it has been reported in several southern states such as California and Florida. Outside of the continental U.S., it has also been reported in places including Hawaii, Haiti, Mexico, Peru, Cuba, and the West Indies. Its first incidence in Canada was reported in 1946. Its most recent occurrence in greenhouse tomatoes in Ontario was during 1991, and it has been observed in a few operations every year since that time. Solanaceous crops such as tomato, potato, and eggplant are the preferred host plants of TPW. Other solanaceous species such as pepper and tobacco do not favour development of TPW. Solanaceous weeds, such as horsenettle (Solanum carolinense L.) may serve as secondary hosts. The tomato pinworm attacks both the leaves and fruits of tomato. Tunnelling or mining by larvae in the leaves is the most common type of injury. Initially, the mine is long and narrow (Figure 1), but it later widens to become blotch-shaped. Older larvae may fold the leaf over itself, or knit 2 leaves together, between which they continue to feed, causing large blotches (Figure 2). In severe infestations, all leaves on a plant are attacked giving the crop a burnt appearance (Figure 3). More direct damage is caused to the crop when the older larvae may penetrate nearby fruits by burrowing under the calyx into the fruit. Very small pinholes are left at the points of entry, which are often marked by the presence of a small amount frass or droppings (Figure 4). Points of entry under the calyx are inconspicuous and can easily be overlooked during packing. Larvae may also bore into the sides of tomato fruits in heavily-infested crops. Figure 1. Larval mines. The TPW passes through 4 stages (egg, larva, pupa and adult), and completes its life cycle in 26 days at 24-26°C, and 100 days at 10-13°C. The TPW is reported to be unable to survive the winter outdoors in Canada. However, they may be able to survive in crop debris left in the field or in other protected locations. The newly-hatched larva moves about on the surface of the leaf for a short while until it finds a suitable place to enter the leaf. Then it eats its way into the leaf between the upper and lower leaf surfaces. The larva feeds within the mine until about half-grown, then emerges and may fold the leaf onto itself, or join 2 leaves by means of threads produced from its mouth. Here, the larva continues to feed between the leaf surfaces and forms large blotch mines. When fruits are present, larvae may enter fruits instead of folding the leaves. The number of larvae that enter fruits increases as the population density increases. The fully-grown larvae usually lower themselves by a thread to the ground to pupate, but pupation can also occur in leaf folds and fruits. The adults emerge from the pupal cells, mate, lay eggs and repeat the cycle. Mating occurs within 24-48 hours after emergence and most eggs are laid within a few days after emergence. The adults hide during the day and may fly erratically for short distances when leaves closest to the ground are disturbed. Flight and egg-laying usually begin at twilight and continue through the night if the temperature is above 16°C. Figure 2. Blotches caused by the tomato pinworm. Figure 3. Infestations give crop burnt appearance. 
Eggs are laid scattered, or in small groups of 3-7, mainly on the upper leaves, and on both upper and lower leaf surfaces (Figure 5). The egg is oval in shape and very tiny (approximately 0.4 mm long). Its colour is pearly white at first, and then becomes pale yellow before hatching. The egg stage lasts from four to eight days at 22-24°C. Figure 4. Small pinholes. Figure 5. Eggs. The larva molts 4 times. The newly-hatched larva is tiny (about 0.7 mm long), with a black or dark brown head capsule, and a cream-colour body. The fully-grown larva is 6-8 mm long, and has brownish to purplish markings along the body (Figure 6). Tomato pinworm larvae are characteristically very active and wriggle when touched. The larval stage lasts 10 days at 24-26°C. Figure 6. Larval stage. Pupation takes place within a loosely-spun cocoon in several possible locations including under debris on the ground, just under the soil surface, within the folds of leaves, on strings supporting tomato plants, or, rarely, in the fruits. The pupa is spindle-shaped; greenish at first, but soon changes to a dark chestnut brown colour (Figure 7). The pupal stage lasts 8-20 days depending on temperature. Figure 7. Pupal stage. The adult resembles a clothes moth in size and colour. It is greyish-brown in colour and is 6-8 mm long (Figure 8). Adults live for about 7-9 days at 24-26°C, and for about 23 days at 13°C. Use of a combination of techniques is the best approach for managing TPW. Such techniques are as follows: Figure 8. Adult. Figure 9. Traps. Figure 10. Trichogramma species. Figure 11. Light traps. The tomato pinworm (TPW) [Keiferia lycopersicella (Walsingham)] is primarily a pest in tropical and subtropical areas of the world.Its first incidence in Canada was reported in 1946. Its most recent occurrence in greenhouse tomatoes in Ontario was during 1991, and it has been observed in a few operations every year since that time. Diseases and Pests of Vegetable Crops in Canada. Eds. R.J. Howard, J. A. Garland and W. L. Seaman. 1994. The Canadian Phytopathological Society. Evaluation Of Commercially-Produced Trichogramma Spp. (Hymenoptera: Trichogrammatidae) For Control Of Tomato Pinworm, Keiferia Lycopersicella (Walsingham) (Lepidoptera: Gelechiidae) On Greenhouse Tomatoes. Shipp, J. L., K Wang, & G. Ferguson. 1998. Can. Ent.: (submitted) Incidence Of Tomato Pinworm, Keiferia Lycopersicella (Walsingham) (Lepidoptera: Gelechiidae) On Greenhouse Tomato In Southern Ontario And Its Control Using Mating Disruption. Wang, K, G. Ferguson, & J. L. Shipp. 1998. Proc. Ent. Soc. Ont.: (In Press) Life History And Control Of The Tomato Pinworm. Elmore, J. C. & A. F. Howland. 1943. U. S. Dept. Agr. Tech. Bull. 841. The Tomato Pinworm. Neiswander, R. B. 1950. Ohio Agr. Exp. Sta., Wooster, Ohio Research Bull. 702. The Tomato Pinworm. Thomas, C. A. 1936. (Gnorimoschema lycopersicella Busck). Pa. Ag. Expt. Sta. Bull. 337. For more information: Toll Free: 1-877-424-1300
Pressure also has a big impact in the deep ocean. The bottom of the ocean is an inhospitable place. It is dark, cold, and under extreme pressures. Imagine you are swimming in a pool that is ten feet deep, or just about three meters. You dive to the bottom to pick up a pool toy or a penny and your ears pop. You feel the pressure on your body and on your head. Now imagine increasing that pressure over 100 times. With every ten meters of seawater, add one atmosphere of pressure. Now imagine traveling to 1000m, or 100x10m. Since we already exist at 1bar of pressure, the total pressure would be about 101bar. Jason can dive to depths of 6000 meters. The ROV is specially reinforced to deal with the extreme pressures at depth so that it doesn’t implode, or compress explosively. The areas we work in the Lau Basin are at depths of 1700 to 2600m, but the pressures are no less damaging. |Styrofoam cups and heads before their descent to our deep-sea research sites.| To demonstrate the pressures at depth, we sent down several Styrofoam heads that Allie, Joy, Francis and Piper illustrated, and cups that middle school teachers attending a science (STEM) workshop at the University of Nevada illustrated. The cups and heads dropped to depths between 1900m and 2400m. Simply dropping to the bottom of the ocean allowed the pressures of nearly 200bars, or around 2500 pounds per square inch, to do its work. The cups, once a full 10oz and over 3 inches tall, shrank down to around 2 to 3oz and around 2 inches tall. We stuffed wads of paper inside to allow them to keep their shape, but that did not necessarily ensure they were perfectly round at the end. Instead, they crumpled as the pressure increased, crushing them on one side or another depending on which way they were oriented. |The extreme left scale bar was 1 inch. This illustrates nicely how much the cup shrunk!| Styrofoam is made of foam called polystyrene, which is full of air pockets. When put under pressure the air is squeezed out, but the foam retains its shape. This is why Styrofoam makes such a good example of pressure at depth – it does not warp the material past recognition, but it shrinks to a size that seems unrealistic to our 1bar atmosphere. This extreme pressure at deep-sea vents also allows the very hot hydrothermal fluid to remain in a liquid state in most cases, except when the temperature is so high the fluid will boil as we saw in the Mariner blog post. Many of the deep-sea invertebrates; snails, mussels, shrimp, crabs, do not have gas-filled organelles, such as humans or other land-dwellers, which would be compressed under pressure, and so these creatures are not nearly as affected by pressure as we are. |the normal head| |the shrunken head| The bacteria we bring up to our lab are not necessarily barophiles (piezophiles), or pressure-loving, but they do thrive under those conditions nonetheless. Even the fluid samples we bring up show evidence of pressure – there are no bubbles emerging from the vents, but when we remove the fluid from its pressure-tight sampler (the IGT see "hot water" post), it may erupt with bubbles as the gases escape the solution. Much of the deep ocean is as yet unexplored. And how pressure affects life in this deep ocean biosphere will no doubt lead to a new understanding of the extent of life on Earth. Contributed Morgan Haldeman, Nick Rhoades and John Kelley.
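As a quick illustration of the arithmetic above, the sketch below applies the passage's rule of thumb (roughly one additional bar of pressure for every ten metres of seawater, on top of the one bar we already live under) to a few of the depths mentioned. It is a rough approximation that ignores variations in seawater density, and the conversion factor to pounds per square inch is a standard constant, not a value taken from the text.

```python
def total_pressure_bar(depth_m):
    """Rule of thumb from the passage: ~1 bar per 10 m of seawater,
    plus the ~1 bar of atmosphere already present at the surface."""
    return 1.0 + depth_m / 10.0

def bar_to_psi(pressure_bar):
    # 1 bar is roughly 14.5 pounds per square inch (standard conversion)
    return pressure_bar * 14.5

# Depths from the passage: the 1000 m example, the cup drops, and Jason's 6000 m limit
for depth in (1000, 1900, 2400, 6000):
    p = total_pressure_bar(depth)
    print(f"{depth:5d} m : about {p:6.0f} bar ({bar_to_psi(p):8.0f} psi)")
```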
ANN ARBOR—Physicists at the University of Michigan say they have devised a more elegant way to fine-tune the behavior of topological insulators—peculiar, two-faced materials whose electrical properties differ markedly between their surface and their interior. Topological insulators could enable advanced electronics that harness more than the charge of electrons, but also their spin and magnetic properties. The outer layer of these exotic substances behaves like a sheet of metal—a conductor that channels electricity extremely efficiently. But the interior acts like a chunk of wood—an insulator that blocks the flow of current. This occurs despite that the material's composition stays the same throughout. Scientists have known about topological properties in materials for some 30 years and discovered topological insulators in 2007. But they're still working on precisely measuring and utilizing their special surface capabilities. By using a technique known as doping, which is common in the semiconductor industry, the U-M researchers developed a new, more controlled way to create topological insulators. Doping involves adding small amounts of impurities to a material in order to change its electrical conductivity. In this case, the physicists doped bismuth telluride, a mirror-like substance known for its ability to convert heat to electricity, with thallium, a poisonous metal. "This is a more elegant approach to making a topological insulator," said Ctirad Uher, the C. Wilbur Peters Collegiate Professor of Physics in the College of Literature, Science and the Arts. "By doping with thallium, we can add electrons to the system and take it across the full spectrum of carrier densities." The "carriers" Uher is referring to are electrons and holes. Holes are places where electrons could be, but aren't. Essentially, they act as positively charged particles and help keep electrons moving in a current. Before a chunk of bismuth telluride is doped, it can conduct electricity relatively efficiently throughout. "By creating an insulating state in the bulk, we can reveal the unique properties of the surface states," Uher said. Researchers have another way to turn the bulk of bismuth telluride into an insulator, but it involves working with hard-to-control vapors, said Hang Chi, a doctoral student in physics who is first author of a paper on the findings recently published online in Physical Review B. With their new approach, the researchers can tweak the properties of the bulk of their material from a so-called "p-type" semiconductor, which conducts electricity because of a deficiency of electrons, to an "n-type," which conducts because of an excess of electrons. And they can lock the material in the middle, where it behaves as an insulator. The researchers analyzed the microstructure of their material with a transmission electron microscope. They examined its chemical composition using a technique called "energy dispersive spectrometry," which shows how the elements emit X-ray light when they are exposed to a beam of high energetic electrons. And they measured the electrical resistivity of their material, which is a measure of how efficiently current flows, at temperatures ranging from -456 to 80 degrees Fahrenheit. They found that at thallium concentrations up to 1 percent, the bulk of the material was conductive. 
At between 1.6 and 2.4 percent thallium, the interior transformed into an insulator, whose unique resistivity profile seemed to indicate that the surface state dominates the electrical conduction at temperatures below -369 degrees Fahrenheit. The surface state is detectable at higher temperatures, and Uher believes it is likely present at room temperature too. But it's difficult to measure as the temperature rises because of interference from the vibration of the molecules in the solid. "Topological insulators are one of the most exciting fields right now in condensed matter physics," Chi said. "Because they have a bulk that is insulating, and a surface that is conducting, they can be used for quite a lot of applications—perhaps quantum computation, for example. They could also house so-called Majorana fermions." Majorana fermions are elusive matter particles that are their own antiparticle. Force carrier particles such as photons of light serve as their own antiparticle, but such behavior isn't typical of matter. The recently published paper is titled "Low-temperature transport properties of Tl-doped Bi2Te3 single crystals." The research is supported by the U-M Center for Solar and Thermal Energy Conversion, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award # DE-SC-0000957. - Abstract of paper: http://prb.aps.org/abstract/PRB/v88/i4/e045202 - Ctirad Uher: http://www.lsa.umich.edu/physics/directory/faculty/ci.uherctirad_ci.detail
Flashcards in Approaches to health geography Deck (23): What are the 5 approaches to health geography? - Social interactionist - Post- structuralist What are the characteristics of the positivist approach? - scientific approach- often uses natural science methods - relies on accurate measurement - searches for statistical irregularities - tests and explains hypothesis using quantitative/ statistical methods - Views the body from a biomedical perspective- like a machine In a positivist approach is space or place more important? - Space is important- (i.e. location and spatial arrangement) - Place less so - Positivist approaches can be used to map disease data and describe / explain spatial distribution What was the aim of the NYC childhood asthma study? To describe and explain spatial variation in the rates of hospitalisation of children due to asthma. What was the methodology of the NYC childhood asthma study? - Used a map of rates for 2016 neighbourhoods in NYC- and looked for clusters - Explained by using statistical methods to relate the incidence of childhood asthma to housing conditions and household income What is epidemiology? the study of disease incidence among specified populations What is an ecological study? - A study in which individual data are grouped or aggregated to a set of areal units What is ecological fallacy? - making unwarranted influences about individuals on the basis of aggregate or ecological data - Risk in positivist studies if area data is used What are the downsides of the positivist approach? - sees people as rational, abstract objects - people don't follow models - ignores social or political processes - [feminist critique] masculine view of the world- orderly, rational, quantifiable, predictable, abstract, and theoretical What are the characteristics of social interactionist approach? - an emphasis on individual meaning- subjective experience of illness/ health - focus on human beliefs, values, meanings and intentions- researcher describes, interprets and understands these - people are seen as having agency - "lay beliefs" matter as much as those of a health professional - qualitative methods- interviews, focus groups etc Is space or place more important in the social interactionist approach? - experience of place is more important than fixed areal units Describe the smoking Glasgow study? - Aimed to understand how smoking varied between places in Glasgow - Used focus groups to understand the reasons why people smoke - Themes from data included economic insecurity, isolation and stress, community (sharing cigarettes) - gives a voice to lay people What are the downsides of the social interactionist approach? - results are subjective and hard to verify - usually smaller study numbers What is the structuralist approach to health geography? - focus on economic structures and power relations which underpin all areas of human activity, including health - draws impetus from Marxism but there are other perspectives including patriarchy - for some, the legacy of colonialism serves to explain on-going health problems - healthcare is preventative not curative because it's a commodity - poverty caused by capitalism- causes ill health What are some examples of structuralist caused poor health? 
(x3) - In Haiti HIV maps reflect maps of American neocolonialism and structural adjustment programs- poorer people get HIV more - turn to sex work more - disabled women more likely to be unemployed and have low income -> diabetes and depression - Women doing dangerous work in sweatshops and have little say in unions Describe the First nations study? - The aim was describe and explain health outcomes of First Nations peoples. - Findings: Socio-economic inequalities ‘can determine the health of populations’...’there are many varied and interlaced determinants, most of which are entrenched in unequal power relations and a history of colonization’. What is the structurationist approach? - understands the duality of structure and agency- they shape each other - e.g. parents may find it hard to vaccinate their kids between 9-5, changing opening hours would increase vaccinations and improve health - humans make their own health but not in conditions of their choosing What is the post structuralist approach to health geog? - emphasis on otherness and difference - origins in foucault and the enlightenment tradition- search for truth and order - discusses how knowledge and experience are constructed in the context of power relations - self governance through laws (e.g. seat belt/ smoking) and pressure from society (e.g. to be slim) What is governmentality? Foucault – the ‘governing’ of public health using various assemblages of human and non-human actors (survey, vaccination, clinics, hospitals, health professionals), all coming together to form a ‘network’. Discuss the HIV/AIDS positivist study? - Malawi has very high rates of HIV/ AIDS understanding spatial variation is very important for targeting support - Study shows spatial variation of HIV drivers - Method- very quantitative- used data about HIV in pregnant women from 19 HIV clinics, plotted it spatially and found probabilities - compared HIV to other socioeconomic factors - data displayed in graphs, charts and maps - highlights hotspots, coldspots and clusters - HIV worse in the south, and urban areas - HIV worse away from public transport and where education is poor Discuss the mental health social interactionist study? - interview 48 gay men in USA/ Canada for 45-75 mins - Focus on the experience of being gay, migration and mental health problems - Found that migration gives more freedom but also new stresses of newer communities - provide evidence for the minority stress model - intersectionality increases risk of mhp - issues: focused on only 1 type of migration, big range in migration years, misses root causes due to focus on migration Discuss the HIV/AIDS structuralist study? - "texture of dire affliction is better felt in gritty details of biography" - suffering is not well conveyed in stats of graphs - allows addition of historical aspects - Can weight different forces of suffereing - helps to understand structural violence and the structural causes of HIV/AIDS
The USA entered World War II in response to the Japanese bombing of the American naval base at Pearl Harbor, Hawaii. Prior to the attack, the country had maintained an isolationist policy, although government leaders considered involvement inevitable and had been providing the Allies with arms and other supplies. The US had long-standing friendly relations with Britain and the Japanese assault tipped the country against the Axis powers. Until 1941, the US kept out of the war. In general, Americans considered it a European affair and preferred to follow the traditional American policy of isolationism. However, the country was sympathetic to the Allied cause. The US had fought against Germany during WWI. Communists, leftists and Jews who heard about the oppression occurring in Germany pressed and lobbied the government to intervene. Many people believed that the totalitarian tendencies of European fascism would eventually threaten the US. In addition, the US shared a common heritage and friendly relations with Britain, whose very existence was threatened by the Germans. For these reasons, the US sided with the Allies from the beginning and furnished them with much needed war supplies. The Roosevelt administration anticipated involvement and prepared the nation's arms manufacturing. When the Japanese bombed Pearl Harbor, it was the final component necessary to remove any doubt about going to war.
Certain microbes in the intestine secrete protein that stimulates the proliferation of beta cells in the pancreas during development. After birth, we are colonized by microbes that quickly outnumber all of the other cells in the human body. There is intense interest in how these microbes, which are collectively known as the microbiota, can promote health or contribute to disease, but comparatively little attention has been paid to their role in development. The importance of the microbiota in the development of the gut was first recognized more than 40 years ago, once it became possible to raise gnotobiotic animals (animals for which only certain known microbes are present) under germ-free conditions (Thompson and Trexler, 1971). Could the development of organs that are not in direct contact with the microbiota also be under its control? Until recently, our understanding of the microbiota of vertebrates was limited to microbes that could be cultured in the lab. However, advances in microbial taxonomy and DNA sequencing technology have revealed enormous, and previously underappreciated, diversity in the composition and function of the microbiota (Shokralla et al., 2012). Emerging data suggests that the presence or absence of rare microbes could have profound physiological consequences: some strains can reprogram the metabolism of the entire microbial community (McNulty et al., 2011), whereas others produce factors that act directly on the host (Semova et al., 2012). Zebrafish develop outside of the mother, which makes them an ideal vertebrate system in which to manipulate the composition of the microbiota in order to study its role in development. In addition, the optical transparency of zebrafish and the availability of many tissue-specific transgenic markers provide a window into organ development at cellular resolution. Early studies of germ-free zebrafish highlighted that the microbiota had a conserved role in the growth and maturation of the intestine (Bates et al., 2006; Rawls et al., 2004). Now, in eLife, Karen Guillemin of the University of Oregon and colleagues – Jennifer Hampton Hill (Oregon), Eric Franzosa and Curtis Huttenhower, both of Harvard and the Broad Institute – have used these tools to reveal that microbiota-host interactions could play a role in the development of beta (β) cells in the pancreas (Hill et al., 2016). Small clusters of hormone-producing cells within the pancreas keep blood sugar levels within a narrow range. The β cells, which produce insulin, are of particular importance because they are dysfunctional in patients with diabetes (or have been destroyed). Hill, Franzosa, Huttenhower and Guillemin (who is also at the Canadian Institute for Advanced Research) noticed that a rapid expansion in the number of pancreatic β cells coincided with the newly hatched zebrafish larvae being colonized by microbes. Using the germ-free system, they discovered that the microbiota is required for this expansion, which is driven by proliferation of existing β cells as well as the differentiation of new β cells (Hesselson et al., 2009). Hill et al. then took a collection of bacterial strains that they had previously isolated from the zebrafish gut (Stephens et al., 2015), and determined which of these strains promotes β cell expansion in germ-free zebrafish. Only a subset of the strains, including several Aeromonas isolates, restored normal β cell numbers. This suggested that a strain-specific factor was involved. 
The Aeromonas activity was tracked to a mixture of proteins that were secreted by the strain in culture, setting off a hunt for the factor (or factors) responsible. Using genomic and proteomic filters Hill et al. identified a single candidate protein, which they named β cell expansion factor A (BefA). When the BefA gene was deleted from the Aeromonas genome, the mutant strain still colonized germ-free zebrafish but it did not restore β cell numbers. In contrast, the addition of purified BefA protein to germ-free zebrafish fully rescued β cell development. These elegant experiments showed that BefA is necessary and sufficient to promote β cell expansion, and that it exerts its effect directly on the host. Additional experiments showed that BefA specifically promotes the proliferation of existing β cells and/or progenitor cells that are already committed to the β cell fate. This discovery in zebrafish could have implications for human health. Homologs of BefA exist in the genomes of bacteria that colonize the human gut. Intriguingly, BefA proteins from the human microbiota also stimulated β cell expansion in zebrafish, suggesting that they share an evolutionarily conserved target. Unfortunately, the sequence of the protein does not provide many clues to its biological function. The intestines and β cells communicate extensively with each other (via hormonal and neuronal signals) to help match food intake with insulin output, and early β cell expansion could be regulated by similar inter-organ signals. Whether BefA acts directly on β cells/progenitors (Figure 1A) or indirectly by rescuing intestinal development or function in germ-free animals (Figure 1B) remains unknown. However, the gnotobiotic zebrafish system is poised to deliver fundamental insights into BefA targets and function. By two years of age, the proliferation of β cells in humans slows markedly (Gregg et al., 2012), which suggests that the proliferation of perinatal β cells may be important for establishing a reserve of β cells to promote lifelong metabolic health (Berger et al., 2015). Susceptible individuals with a suboptimal number of β cells may struggle to meet the increased demand for insulin associated with pregnancy or obesity. Determining whether BefA can act at later stages of development to allow β cell numbers to 'catch up' could lead to novel therapeutic approaches for the prevention of diabetes. Formation of a human β-cell population within pancreatic islets is set early in lifeJournal of Clinical Endocrinology & Metabolism 97:3197–3206.https://doi.org/10.1210/jc.2012-1206 The impact of a consortium of fermented milk strains on the gut microbiome of gnotobiotic mice and monozygotic twinsScience Translational Medicine 3:106ra106.https://doi.org/10.1126/scitranslmed.3002701 Next-generation sequencing technologies for environmental DNA researchMolecular Ecology 21:1794–1805.https://doi.org/10.1111/j.1365-294X.2012.05538.x
For the observation of cold matter in the interstellar medium, astronomers need instruments for the detection of terahertz radiation. Specific high-resolution instruments are based on terahertz quantum-cascade lasers, but operate only at cryogenic temperatures. Physicists of the Paul Drude Institute (PDI) in Berlin, Germany, have now developed a terahertz quantum-cascade laser, which operates at significantly higher temperatures than previously achieved. The new development allows for the use of more compact cooling systems -- also reducing the obstacles for many other applications. The wavelengths of terahertz radiation lie between the microwave and infrared range. It penetrates many materials such as plastics and clothes. At the same time, terahertz radiation is -- due to its small energy -- non-ionizing and not dangerous for people. Applications of terahertz radiation include non-destructive material testing and safety checks at airports. For astronomers, terahertz radiation provides new insights in the investigation of so-called cold matter. This kind of matter does not emit visible light such as the stars, but electromagnetic radiation in the infrared to microwave range. The German Aerospace Center (DLR) measures such emission lines with high precision within the US-German SOFIA project. Due to the Doppler shift of the detected frequencies, the researchers can determine the velocity of the motion of cold matter through the galaxy. To reduce the absorption by water in Earth atmosphere, the measurements are carried out from an airplane. One key element of the detector system is a quantum-cascade laser developed at the PDI. In a joint project funded by the Investitionsbank Berlin, the researchers have developed a compact quantum-cascade laser system. The partners in this project were in addition to the PDI the Ferdinand Braun Institute in Berlin, the Humboldt University in Berlin, and the company eagleyard Photonics located also in Berlin. "One problem of the lasers are the low operating temperatures, which are typically even below the temperature of liquid nitrogen of 77 Kelvin or -196 °C for continuous-wave operation," explains Martin Wienold from the PDI. "We achieved a new record: our lasers operate up to 129 Kelvin (-144 °C) improving the previous record by more than 10 degrees." This is still rather cold, "but, in combination with a significantly reduced power dissipation of the new lasers, it allows for the use of much smaller mechanical coolers. Thereby, we will be able to reduce the size of systems based on terahertz quantum-cascade lasers in the future -- an important point for flight missions such as SOFIA," Wienold emphasizes. The physicists at the PDI achieved the high operating temperatures by developing a semiconductor heterostructure, which requires only a very low driving power. The laser ridge is only about 10-15 microns high and 15 microns wide, while the emission wavelength is about 100 microns. The active region is confined by two metal layers, which are almost perfect mirrors in the terahertz range. This combination results in very low power dissipation and operation at low current densities and voltages. "However, there has been an additional problem," explains Martin Wienold: "We achieved relatively high operating temperatures, but the strong spatial confinement of the light in the laser resulted in an extremely divergent beam profile." The physicists solved the problem by applying a concept from the early days of radio broadcasting. 
A grating on top of the laser ridge -- a so-called third-order grating -- acts as a directive antenna, which collimates the laser emission. "We are currently working on achieving even higher operating temperatures," says Wienold. "However, room temperature operation will become difficult to achieve because of some physical limits." Quantum-cascade lasers differ from common diode lasers in their structure and the physical processes involved. Typical diode lasers emit light when electrons from the conduction band recombine with holes from the valence band. Upon recombination, a photon is emitted with an energy of approximately the semiconductor's energy gap. Since the energy gap is determined by the semiconductor material used, the wavelength of a diode laser is essentially determined by the material. In a quantum-cascade laser, the electron remains in the conduction band, and the laser transition takes place between two confined subband states within the conduction band. This is achieved by alternating extremely thin semiconductor layers, resulting in so-called potential wells in the conduction band. When an electric field is applied, the electrons move from an energetically higher-lying potential well to an energetically lower-lying potential well via the quantum mechanical tunneling effect. The electrons tumble down from one potential well to the next, much like falling down a staircase. - M. Wienold, B. Röben, L. Schrottke, R. Sharma, A. Tahraoui, K. Biermann, H. T. Grahn. High-temperature, continuous-wave operation of terahertz quantum-cascade lasers with metal-metal waveguides and third-order distributed feedback. Optics Express, 2014; 22 (3): 3334. DOI: 10.1364/OE.22.003334
What do connectionist teachers do? The connectionist approach is well described in Effective Teachers of Numeracy in Primary Schools: Teachers' Beliefs, Practices and Pupils' Learning. In essence, connectionist teachers: - have a conscious awareness of connections and relationships, and use mental mathematics to develop agility with this - believe that students of all levels of attainment need to be challenged in mathematics, and have high levels of expectation for all students - maintain a high degree of teacher–class, teacher–group, teacher–individual and student–student focussed discussion - believe students learn computational skills through modelling, problem-solving and investigations - plan their teaching around connections between ideas. There are different levels of connectedness, which are explained in the article Connected Understanding on the AAMT website.
Signal Processing/Digital Filters Digital filters are in essence sampled systems. The input and output signals are represented by samples with equal time spacing. Finite Impulse Response (FIR) filters are characterized by a time response depending only on a given number of the last samples of the input signal. In other terms: once the input signal has fallen to zero, the filter output will do the same after a given number of sampling periods. The output is given by a linear combination of the last input samples. The coefficients give the weight for the combination. They also correspond to the coefficients of the numerator of the z-domain filter transfer function. The following figure shows an FIR filter of order : For linear phase filters, the coefficient values are symmetric around the middle one and the delay line can be folded back around this middle point in order to reduce the number of multiplications. The transfer function of FIR filters possesses only a numerator. This corresponds to an all-zero filter. FIR filters typically require high orders, on the order of several hundred. Thus this kind of filter requires a large amount of hardware or CPU time. Despite this, one reason to choose an FIR filter implementation is the ability to achieve a linear phase response, which can be a requirement in some cases. Nevertheless, the filter designer has the possibility to choose IIR filters with good phase linearity in the passband, such as Bessel filters, or to design an allpass filter to correct the phase response of a standard IIR filter. Moving Average Filters (MA) Moving Average (MA) models are process models in the form: MA processes are an alternate representation of FIR filters. A filter calculating the average of the last samples of a signal is the simplest form of an FIR filter, with all coefficients being equal. The transfer function of an average filter is given by: The transfer function of an average filter has equally spaced zeroes along the frequency axis. However, the zero at DC is masked by the pole of the filter. Hence, there is a larger lobe at DC which accounts for the filter passband. Cascaded Integrator-Comb (CIC) Filters A cascaded integrator-comb (CIC) filter is a special technique for implementing average filters placed in series. The series placement of the average filters enhances the first lobe at DC compared to all other lobes. A CIC filter implements the transfer function of average filters, each calculating the average of samples. Its transfer function is thus given by: CIC filters are used for decimating the number of samples of a signal by a factor of or, in other terms, to resample a signal at a lower frequency, throwing away samples out of . The factor indicates how much of the first lobe is used by the signal. The number of average filter stages, , indicates how well other frequency bands are damped, at the expense of a less flat transfer function around DC. The CIC structure allows the whole system to be implemented with only adders and registers, without any multipliers, which are costly in terms of hardware. Downsampling by a factor of allows the signal resolution to be increased by bits. Canonical filters implement a filter transfer function with a number of delay elements equal to the filter order, one multiplier per numerator coefficient, one multiplier per denominator coefficient and a series of adders.
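Since the section's equations did not survive extraction, here is a minimal sketch of the simplest FIR filter it describes, the moving-average filter, in which every coefficient is equal. The tap count and test signal are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

def moving_average_fir(x, n_taps):
    """FIR filter whose coefficients are all equal to 1/n_taps, so each
    output sample is the average of the last n_taps input samples."""
    b = np.full(n_taps, 1.0 / n_taps)   # numerator (all-zero filter) coefficients
    return np.convolve(x, b)[: len(x)]  # keep the causal part, same length as the input

# Example: smooth a noisy step with an 8-tap average filter
x = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * np.random.default_rng(1).standard_normal(40)
print(np.round(moving_average_fir(x, 8)[-5:], 3))
```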
Similarly to the canonical structures of active filters, this kind of circuit has shown itself to be very sensitive to element values: a small change in a coefficient has a large effect on the transfer function. Here too, the design of active filters has shifted from canonical filters to other structures such as chains of second order sections or leapfrog filters. Chain of Second Order Sections A second order section, often referred to as a biquad, implements a second order transfer function. The transfer function of a filter can be split into a product of transfer functions, each associated with a pair of poles and possibly a pair of zeroes. If the transfer function's order is odd, then a first order section has to be added to the chain. This section is associated with the real pole and with the real zero if there is one. The best-known biquad structures are - direct-form 1 - direct-form 2 - direct-form 1 transposed - direct-form 2 transposed The direct-form 2 transposed structure of the following figure is especially interesting in terms of required hardware as well as signal and coefficient quantization. Digital Leapfrog Filters Digital leapfrog filters are based on the simulation of analog active leapfrog filters. The incentive for this choice is to inherit the excellent passband sensitivity properties of the original ladder circuit. The following 4th order all-pole lowpass leapfrog filter can be implemented as a digital circuit by replacing the analog integrators with accumulators. Replacing the analog integrators with accumulators corresponds to simplifying the Z-transform to , which are the first two terms of the Taylor series of . This approximation is good enough for filters where the sampling frequency is much higher than the signal bandwidth. The state space representation of the preceding filter can be written as: From this equation set, one can write the A, B, C, D matrices as: In the digital leapfrog filter, the relative values of the coefficients set the shape of the transfer function (Butterworth, Chebyshev, …), whereas their amplitudes set the cutoff frequency. Dividing all coefficients by a factor of two shifts the cutoff frequency down by one octave (also a factor of two). A special case is the Butterworth 3rd order filter, which has time constants with relative values of 1, 1/2 and 1. Due to that, this filter can be implemented in hardware without any multiplier, using shifts instead. Autoregressive Filters (AR) Autoregressive (AR) models are process models in the form: Where u(n) is the output of the model, x(n) is the input of the model, and u(n - m) are previous samples of the model output value. These filters are called "autoregressive" because output values are calculated based on regressions of the previous output values. AR processes can be represented by an all-pole filter. Autoregressive Moving-Average (ARMA) filters are combinations of AR and MA filters. The output of the filter is given as a linear combination of both the weighted input and weighted output samples: ARMA processes can be considered as a digital IIR filter, with both poles and zeros. AR filters are preferred in many instances because they can be analyzed using the Yule-Walker equations. MA and ARMA processes, on the other hand, can be analyzed by complicated nonlinear equations which are difficult to study and model. If we have an AR process with tap-weight coefficients a (a vector of a(n), a(n - 1), ...), an input of x(n), and an output of y(n), we can use the Yule-Walker equations.
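A minimal sketch of the direct-form 2 transposed biquad singled out above, written as a straightforward per-sample loop. The coefficients in the example are arbitrary placeholders chosen only to show the call, not a designed filter.

```python
def biquad_df2t(x, b, a):
    """One second-order section in direct-form 2 transposed.
    b = (b0, b1, b2) numerator, a = (1, a1, a2) denominator coefficients."""
    b0, b1, b2 = b
    _, a1, a2 = a
    s1 = s2 = 0.0                      # the two delay (state) registers
    y = []
    for xn in x:
        yn = b0 * xn + s1              # output needs only one state read
        s1 = b1 * xn - a1 * yn + s2    # update the first register
        s2 = b2 * xn - a2 * yn         # update the second register
        y.append(yn)
    return y

# Impulse response of an illustrative section (placeholder coefficients)
print([round(v, 4) for v in biquad_df2t([1, 0, 0, 0, 0], (0.2, 0.4, 0.2), (1.0, -0.3, 0.1))])
```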
We say that σx2 is the variance of the input signal. We treat the input data signal as a random signal, even if it is a deterministic signal, because we do not know what the value will be until we receive it. We can express the Yule-Walker equations as: Where R is the cross-correlation matrix of the process output And r is the autocorrelation matrix of the process output: We can show that: We can express the input signal variance as: Or, expanding and substituting in for r(0), we can relate the output variance of the process to the input variance:
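The equations themselves were lost in extraction, so the sketch below uses the standard textbook form of the Yule-Walker normal equations (R a = r, with the driving-noise variance recovered from r(0)). The sign convention, symbol names and helper function are my own assumptions and may differ from the forms the original figures used.

```python
import numpy as np

def yule_walker_ar(y, order):
    """Estimate AR(order) coefficients from the process output y using the
    standard Yule-Walker normal equations R a = r, where R is the Toeplitz
    matrix built from the autocorrelation lags r(0)..r(order-1)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    r = np.array([y[: n - k] @ y[k:] / n for k in range(order + 1)])   # biased autocorrelation estimate
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - a @ r[1:]          # estimated variance of the driving (input) noise
    return a, sigma2

# Example: recover the coefficients of a known AR(2) process
rng = np.random.default_rng(0)
y = np.zeros(5000)
for n in range(2, len(y)):
    y[n] = 0.6 * y[n - 1] - 0.2 * y[n - 2] + rng.standard_normal()
print(yule_walker_ar(y, 2))   # expect roughly ([0.6, -0.2], ~1.0)
```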
Edward Thorndike (1874-1949), an early twentieth-century psychologist, was the first to formulate what he referred to as the Law of Effect. The Law of Effect, simply stated, proposed that any behavior that resulted in pleasant consequences would tend to be repeated, while any behavior that resulted in unpleasant consequences would not. Thorndike later discovered what he termed Spread of Effect. Not only would any behavior that resulted in pleasant consequences be repeated, but also any responses surrounding the reinforced one (Hergenhahn & Olson, 2005, 69). Building on the earlier work of Thorndike, B.F. Skinner (1904-1990) began to elaborate and extend Thorndike’s ideas on learned behavior. Skinner differentiated between what he termed respondent (or reflexive) behavior, and learned (or operant) behavior. Operant behavior could be characterized by “the observable effects it has on the environment. Operant conditioning, therefore, is learning in which the probability of a response is changed by a change in its environment (PM, n.d.).” Reinforcement and Punishment Two concepts important to an understanding of operant conditioning are reinforcement and punishment. Reinforcers and punishments are specific types of consequences. Reinforcement encourages a behavior, while punishment discourages a behavior. Reinforcement is any consequence of behavior that increases the chances of a behavior being repeated. Reinforcement may be either positive or negative. Positive reinforcement occurs when a stimulus is presented after a response, thus encouraging the response to be repeated. Negative reinforcement occurs when a stimulus is removed after a response, encouraging the response to be repeated. In this context, the terms positive and negative do not refer to good or bad, but rather to the addition or removal of a stimulus. Punishment is the opposite of reinforcement and is any consequence that decreases the chances of a behavior being repeated. The...
Causes of overpressure Overpressure in stratigraphic layers is fundamentally caused by the inability of connate pore fluids to escape as the surrounding mineral matrix compacts under the lithostatic pressure caused by overlying layers. Fluid escape may be impeded by sealing of the compacting rock by surrounding impermeable layers (such as evaporites and cemented sandstones). Alternatively, the rate of burial of the stratigraphic layer may be so great that the efflux of fluid is not sufficiently rapid to maintain hydrostatic pressure. A common type of situation where overpressure may occur is in a buried river channel filled with coarse sand that is sealed on all sides by impermeable shales. It is extremely important to be able to diagnose overpressured units when drilling through them, as the drilling mud weight (density) must be adjusted to compensate. If it is not, there is a risk that the pressure difference down-well will cause a dramatic decompression of the overpressured layer and result in a blowout at the well-head with possibly disastrous consequences. Because overpressured sediments tend to exhibit better porosity than would be predicted from their depth, they often make attractive hydrocarbon reservoirs and are therefore of important economic interest.

| Overpressure | Effects |
| --- | --- |
| 10 psi | Reinforced concrete buildings severely damaged; severe heart and lung damage; limbs can be blown off |
| 4 psi | Most buildings collapse except concrete buildings; injuries universal; fatalities occur |
| 2 psi | Residential structures collapse; brick walls destroyed; injuries common; fatalities may occur |
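To make the mud-weight adjustment mentioned in the drilling discussion above concrete, here is a small sketch using the common oilfield relation P (psi) ≈ 0.052 × mud weight (ppg) × true vertical depth (ft). The 0.052 factor is a standard unit-conversion constant; the overbalance margin and the example numbers are illustrative assumptions, not values from the text.

```python
def mud_weight_ppg(pore_pressure_psi, depth_ft, overbalance_psi=200):
    """Minimum drilling-mud density (pounds per gallon) needed to hold back an
    overpressured zone, from P(psi) = 0.052 * mud_weight(ppg) * depth(ft).
    The 200 psi overbalance is an illustrative safety margin, not a fixed rule."""
    return (pore_pressure_psi + overbalance_psi) / (0.052 * depth_ft)

# Example: a zone at 10,000 ft with 6,500 psi pore pressure
# (normal hydrostatic pressure at that depth would be roughly 4,650 psi)
print(round(mud_weight_ppg(6500, 10_000), 2), "ppg")
```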
Getting children to love reading can be challenging. Here are some tips to help you foster a love of reading in your child. - Start early: Babies may not understand the words, but they will love the sound of your voice, and it helps them develop their language skills - Make reading a part of your routine - Let your child choose their books - Create a reading-friendly environment: Create a cosy reading corner in your home, filled with books, pillows, and blankets - Make reading fun: Encourage your child to act out the story or make funny voices for the characters - Being a reading role model At Shaping Little Minds, we understand the importance of reading and strive to foster a love of reading in every child who attends. From reading stories to singing nursery rhymes, we create a language-rich environment that promotes a love of reading in young children.
(Latin: a suffix used to form names of zoological groups, classes, and orders) More about Arthropoda Along with the insects, crustaceans, centipedes, and millipedes; spiders are members of that group of animals without backbones referred to as the Arthropoda, literally the "jointed-limbed" animals. Clearly they lack a backbone and instead have an external skeleton, called an "exoskeleton", which has some similarities to a suit of armor; it is tough and fairly rigid and the muscles are attached to it internally. Like the vertebrate skeleton, that of the arthopods is designed as a compromise between rigidity, to provide support and protection for the soft, delicate internal organs, and flexibility, to allow for ease of movement. The color of the body can be yellowish-tan to dark-brown, with the paired claws often a contrasting color. They have two very long pedipalps, or pincers, which strongly resemble the scorpion's claws, but the pseudoscorpion's abdomen is short and rounded at the rear, rather than extending into a segmented tail and sting. The movable part of the pincer contains a venom gland and duct; the poison is used to capture and immobilize their tiny prey. They do not bite. To digest prey, they pour a mildly corrosive fluid over the prey, then ingest the liquefied remains. They spin silk from a gland in their jaws to make disk-shaped cocoons for mating, molting, or enduring cold weather.
Machining is a precise process using rotary cutters to remove material by advancing a cutter into a workpiece. It involves controlling cutter direction, speed, and pressure on multiple axes. This versatile method spans various operations and machines, from crafting small individual parts to managing extensive gang machining tasks. Commonly used for crafting custom parts with exact tolerances, machining employs a range of machine tools. The primary tool is the milling machine, often called a mill. The advent of computer numerical control (CNC) technology transformed these machines into machining centers, equipped with automatic tool changers, CNC capabilities, coolant systems, and enclosures. These centers are categorized as vertical machining centers (VMCs) or horizontal machining centers (HMCs). The integration of machining and turning created multitasking machines (MTMs), tailored for both operations within a single workspace. Choose the appropriate material, often metal or plastic, based on the part's requirements. Secure the workpiece in the machine's work holding device, ensuring it's stable and properly aligned. Select the appropriate cutting tool (e.g., end mill, drill bit) based on the specific machining operation. Install and secure the chosen cutting tool in the machine's spindle. Create a machining program that specifies toolpaths, speeds, feeds, and other parameters. This can be done manually or using computer numerical control (CNC) for precision. - Cutting: The machine advances the cutting tool into the workpiece, removing material to achieve the desired shape. - Drilling: For creating holes or openings. - Turning: Rotating the workpiece against a fixed cutting tool (typically in a lathe) to create cylindrical shapes. - Milling: Moving the cutting tool in various directions to remove material and shape the workpiece.
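The programming step above specifies spindle speeds and feeds; the sketch below shows the conventional formulas machinists use to derive them (surface cutting speed to RPM, then RPM and chip load to table feed). The cutting-speed and chip-load numbers in the example are illustrative assumptions for a small end mill, not recommendations from the text.

```python
import math

def spindle_rpm(cutting_speed_m_min, tool_diameter_mm):
    """Standard milling relation: RPM = (Vc * 1000) / (pi * D)."""
    return (cutting_speed_m_min * 1000.0) / (math.pi * tool_diameter_mm)

def feed_rate_mm_min(rpm, teeth, chip_load_mm):
    """Table feed = spindle speed * number of flutes * chip load per tooth."""
    return rpm * teeth * chip_load_mm

# Example: 10 mm end mill in aluminium (illustrative values only)
rpm = spindle_rpm(cutting_speed_m_min=200, tool_diameter_mm=10)
print(round(rpm), "RPM,", round(feed_rate_mm_min(rpm, teeth=3, chip_load_mm=0.05)), "mm/min")
```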
Mental health is a person’s emotional, psychological, and social wellbeing. It plays a large role in the thoughts we have and our behaviors. Mental health determines how we handle stress, relate to other people, and the choices we make. Mental Health and Mental Illness – what’s the difference? It’s important to distinguish the difference between mental health and mental illness. People can experience periods of poor mental health without having a mental illness. People with mental illness can have good mental health. Mental illness affects an individual’s thoughts, feelings, moods, and behaviors. Examples include depression, schizophrenia, bi-polar disorders, and anxiety. A person’s mental health can change with time and depends on their ability to cope. Major life changes like divorce, job loss, or the death of a loved one can change a person’s mental health. Managing mental health is of primary importance to ensure a person’s wellness. Depression and other mental health issues can lead to health issues, like stroke, diabetes, and heart disease. Unfortunately, mental illness is common throughout our country. The CDC’s statistics illustrate how common mental illness is in our country - More than 50% of people will be diagnosed with some type of mental illness or disorder during their lifetime. - 1 in 5 Americans will experience mental illness in a given year. - 1 in 25 Americans lives with a serious mental illness such as schizophrenia, bipolar disorder, or major depression. There has been tremendous progress in understanding the origins of poor mental health. There are great methods of treatment that prove effective. Pathway Healthcare’s mental health care providers use proven practices to help patients. It’s important for your wellness to take care of your physical and mental health.
VEX IQ is a snap-together robotics system designed from the ground up to provide this opportunity to future engineers of all skill levels. By packaging advanced concepts into an accessible package, the system also naturally encourages inventing, teamwork, and problem solving. Using VEX IQ to teach kids engineering and coding has proven to be fun and effective for students looking beyond the LEGO® 's and wanting more complexity, flexibility, and the ability to use tools. VEX robots are ideal for Grades 5-8. We will follow VEX's educational A-L Units curriculum syllabus. We are continuing to work with VEX to provide feedback for continous improvement of our course materials and their curriculum and robots.Intermediate VEX Robotics:Unit A: It’s Your Future – Learn about STEM, engineering, and roboticsUnit B: Let’s Get Started – Learn about VEX IQ, the Controller, and the Robot BrainUnit C: Your First Robot – Build and test Clawbot IQUnit D: Simple Machines & Motion – Explore Levers, Pulleys, Pendulums, & moreUnit E: Chain Reaction Challenge – Design fun devices using Simple MachinesUnit F: Key Concepts – Explore and apply science and math that engineers useAdvance VEX Robotics:Unit G: Mechanisms – Motors, Gear Ratio, Drivetrains, Object Manipulation & moreUnit H: Highrise Challenge – Build a challenge-ready teleoperated robotUnit I: Smart Machines – Learn how sensors work and the basics of programmingUnit J: Chain Reaction Programming Challenge – Apply sensor and programming knowledge to automate fun devicesUnit K: Smarter Machines – Expand your knowledge of sensors and programmingUnit L: Highrise Programming Challenge – Build a challenge-ready autonomous robot *Robotics Team CompetitionsAI Robotics:Students will build on the knowledge from the Advance robotics class and apply advance software techniques to VEX IQ robots. Machine learning via Artificial Intelligence is explored. Students will explore advance calculations on MATLAB and model adaptive controls. They will take these models and integrate this into their robots to demonstrate the learning capabilities.
Radio Architectures, Pt 3: Intermodulation and Intercept Points The mixer generates intermediate frequency (IF) signals that result from the sum and difference of the LO and RF signals combined in the mixer: These sum and difference signals at the IF port are of equal amplitude, but generally only the difference signal is desired for processing and demodulation, so the sum frequency (also known as the image signal: see Fig. 8-11) must be removed, typically by means of IF bandpass or lowpass filtering. A secondary IF signal, which can be called f*IF, is also produced at the IF port as a result of the sum frequency reflecting back into the mixer and combining with the second harmonic of the LO signal. Mathematically, this secondary signal appears as: This secondary IF signal is at the same frequency as the primary IF signal. Unfortunately, differences in phase between the two signals typically result in uneven mixer conversion-loss response. But flat IF response can be achieved by maintaining constant impedance between the IF port and the following component load (IF filter and amplifier) so that the sum frequency signals are prevented from re-entering the mixer. In terms of discrete components, some manufacturers offer constant-impedance IF bandpass filters that serve to minimize the disruptive reflection of these secondary IF signals. Such filters attenuate the unwanted sum frequency signals by absorption. Essentially, the return loss of the filter determines the level of the sum frequency signal that is reflected back into the mixer. If a mixer’s IF port is terminated with a conventional IF filter, such as a bandpass or lowpass type, the sum frequency signal will re-enter the mixer and generate intermodulation distortion. One of the main intermodulation products of concern is the two-tone, third-order product, which is separated from the IF by the same frequency spacing as the RF signal. These intermodulation frequencies are a result of the mixing of spurious and harmonic responses from the LO and the input RF signals: But by careful impedance matching of the IF filter to the mixer’s IF port, the effects of the sum frequency products and their intermodulation distortion can be minimized. EXAMPLE: Intermodulation and Intercept Points To get a better understanding of intermodulation products, let’s consider the simple case of two frequencies, say f1 and f2. To define the products, we add the harmonic multiplying constants of the two frequencies. For example, the second order intermodulation products are (f1 + f2); the third order are (2f1 − f2); the fourth order are (2f1 + f2); the fifth order are (3f1 − 2f2); etc. If f1 and f2 are two frequencies of 100 kHz and 101 kHz (that is, 1 kHz apart) then we get the intermodulation products as shown in Table 8-1. From the table it becomes apparent that only the odd order intermodulation products are close to the two fundamental frequencies of f1 and f2. Note that one third order product (2f1 − f2) is only 1 kHz lower in frequency than f1 and another (2f2 − f1) is only 1 kHz above f2. The fifth order product is also closer to the fundamentals than corresponding even order products. These odd order intermodulation products are of interest in the first mixer stage of a superheterodyne receiver. As we have seen earlier, the very function of a mixer stage—namely, forming an intermediate lower frequency from the sum/difference of the input signal and a local oscillator—results in the production of nonlinearity.
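Table 8-1 itself is not reproduced here, so the following sketch simply generates the low-order products for the 100 kHz / 101 kHz pair discussed above, listing the products named in the passage plus their mirror-image counterparts. The helper name is arbitrary.

```python
def im_products(f1, f2):
    """Low-order two-tone intermodulation products (same units as f1 and f2);
    the order is the sum of the harmonic multiplying constants."""
    return {
        "2nd order (f1+f2)":   f1 + f2,
        "3rd order (2f1-f2)":  2 * f1 - f2,
        "3rd order (2f2-f1)":  2 * f2 - f1,
        "4th order (2f1+f2)":  2 * f1 + f2,
        "5th order (3f1-2f2)": 3 * f1 - 2 * f2,
        "5th order (3f2-2f1)": 3 * f2 - 2 * f1,
    }

for name, f in im_products(100, 101).items():   # the 100 kHz / 101 kHz example
    print(f"{name:22s}: {f} kHz")
# Only the odd-order differences (99, 102, 98 and 103 kHz) crowd the fundamentals.
```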
Not surprisingly, the mixer stage is a primary source of unwanted intermodulation products. Consider this example: A receiver is tuned to a signal on 1000 kHz, but there are also two strong signals, f1 on 1020 kHz and f2 on 1040 kHz. The closest signal is only 20 kHz away. Our IF stage filter is sharp with a 2.5-kHz bandwidth, which is quite capable of rejecting the unwanted 1020-kHz signal. However, the RF stages before the mixer are not so selective and the two signals f1 and f2 are seen at the mixer input. As such, intermodulation components are readily produced, including a third order intermodulation component (2f1 − f2) at (2 × 1020 − 1040) = 1000 kHz. This intermodulation product lies right on our input signal frequency! Such intermodulation components or out-of-band signals can easily cause interference within the working band of the receiver. In terms of physical measurements, the two-tone, third-order intermodulation is the easiest to measure of the intermodulation interferences in an RF system. All that is needed is to have two carriers of equal power levels that are near the same frequency. The result of this measurement is used to determine the third-order intermodulation intercept point (IIP3), a theoretical level used to calculate third-order intermodulation levels at any total power level significantly lower than the intercept point.
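As a rough illustration of how IIP3 is used, the sketch below applies the common rule of thumb that each two-tone, third-order product rises 3 dB for every 1 dB of input power, so its input-referred level is approximately 3·Pin − 2·IIP3. The intercept point and input levels are invented for illustration; they are not measurements from the text.

```python
# Hedged sketch of the usual IIP3 rule of thumb: each two-tone, third-order product
# sits 2*(IIP3 - Pin) dB below the tones, i.e. P_IM3 = 3*Pin - 2*IIP3 (all in dBm).
def im3_level_dbm(p_in_dbm: float, iip3_dbm: float) -> float:
    """Approximate input-referred level of a two-tone, third-order product."""
    return 3 * p_in_dbm - 2 * iip3_dbm

IIP3 = 10.0  # dBm, assumed intercept point of the stage (illustrative value)
for p_in in (-30.0, -20.0, -10.0):
    print(f"Pin = {p_in:6.1f} dBm  ->  IM3 level ~ {im3_level_dbm(p_in, IIP3):6.1f} dBm")
```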
What is trachoma and why is it important? Trachoma is the most common cause of infectious blindness (six million people worldwide are blind from trachoma). Additionally, it is estimated that at any given time there are 40 million children and adults who have active trachoma infection that could lead to blindness. Trachoma is caused by the organism Chlamydia trachomatis. The infection spreads from eye to eye through eye-seeking flies, which are attracted to, and feed on, ocular and nasal secretions. Female Musca sorbens flies lay their eggs in waste and faeces, preferring human faeces, but they also breed in faeces from other animals. Trachoma is mainly found in very poor rural communities which lack sanitation and sufficient water for personal hygiene; because of this, it tends to be most common in Africa, the Middle East, and parts of Asia. Public health interventions to address trachoma focus on the “SAFE” strategy, which includes four components: i) surgery to reduce risk of blindness, ii) antibiotic treatment for those with current infection, iii) facial cleanliness and iv) environmental changes such as sanitation. A lot is still unknown about the epidemiology of trachoma, and because it is a disease of poverty and water scarcity it has not always attracted much international attention. The World Health Organisation and its partners are aiming to eliminate trachoma as a public health threat by 2020 and, as such, funding and research into trachoma have increased in recent years.
Our key contributions and research in this area: Trachoma control has largely been centred around the “S” and “A” components of the “SAFE” strategy, but in recent years more attention has been paid to the importance of face washing and environmental change in stopping transmission of trachoma. Researchers within the environmental health group have previously reviewed the impact of “F” and “E” interventions and our behaviour change model has been applied to trachoma.
Our current projects in this area: We have recently partnered with the Fred Hollows Foundation to conduct formative research into the “F” and “E” components of trachoma control in Oromia, Ethiopia. This research sought to explore, in a real-world, high-burden setting, what some of the sub-optimal hygiene practices contributing to trachoma transmission may be. Key research questions included:
- What are the current practices pertaining to water collection / priorities for use; face washing and wiping, handwashing and bathing; defecation and stool disposal; animal husbandry and faeces disposal; garbage disposal; fly control; sleeping arrangements; and laundry?
- Who carries out these behaviours, where, and using what?
- How do the social, physical and biological environments influence water use, personal and other hygiene practices, sanitation practices and sleeping arrangements?
- How do knowledge of trachoma, rational decision-making processes, different motivations and cues influence practice of the behaviours of interest?
- What are the opportunities for intervention, and are potential intervention strategies acceptable to the community and considered feasible?
Full list of publications related to trachoma:
- Formative Research to inform design of a behaviour change intervention for the “F” and “E” of the SAFE strategy in Oromia, Ethiopia. Greenland, K., White, S., et al. 2016.
- Research to inform the development of behaviour change interventions for the “F” and “E” of the SAFE strategy in Turkana and Marsabit, Kenya.
Danquah L, Rono H, Greenland K, Gilbert C. 2013.
- Emerson P.M., Cairncross S., Bailey R.L., Mabey D.C.W. 2000. A review of the evidence for the “F” and “E” components of the SAFE strategy for trachoma control. Tropical Medicine & International Health 5 (8): 515-527.
- Cairncross S. 1999. Water and trachoma. Journal of Community Eye Health 12 (32): 58-59.
Pediatric Ocular Trauma
What causes eye injuries? Injuries to the eye and around the eye can be caused by blunt trauma from a ball or fist, sharp trauma such as a stick or projectile, or chemical trauma such as splash from a caustic substance like a cleaning material or pool supplies.
Which part of the eye can be injured? Injuries to the eye can involve the eyelids, the bones surrounding the eye, and the eyeball. Sharp trauma from sticks or projectiles during play or while working around the house can injure the eyelids. If the eyelid tissue becomes cut or torn, the injury may involve not only the eyelid but the structures that drain tears from the eye. Lacerations of the eyelid or the tear-draining structures require evaluation by an ophthalmologist and may require repair in the operating room using microsurgical techniques. Any injury to the eyelid can also be associated with injury to the eyeball, so a complete examination of the eye must be performed to make certain there is no injury deeper than on the surface of the eye.
How can the bones of the eye be damaged? Fractures of the bones around the eye usually occur from blunt trauma, such as a sports injury or a fall with injury to the nose and cheekbone (blow-out fracture). Fractures are often detected by x-rays or a CT scan, which also help determine if tissues surrounding the eye are trapped in the fractures. These injuries often require prompt surgical treatment to prevent long-term complications such as double vision, loss of vision, and abnormal appearance.
What are some common injuries to the eyeball itself? The front, clear surface of the eye, called the cornea, can be scratched, which often causes pain, redness and tearing. The physician usually makes the diagnosis by placing a yellow dye (fluorescein) into the eye, which highlights the scratch. Treatment involves using antibiotic eye drops/ointment and occasionally a pressure patch on the eye. These injuries require close follow-up with the ophthalmologist.
What if the scratch goes deeper than the surface? Sharp objects (such as a stick, shard of glass, or metallic item) can cut through the surface of the eye causing a laceration. This type of injury places a child at risk for permanent loss of vision. Lacerations require prompt attention (usually surgical intervention) by an ophthalmologist to prevent complications and maximize vision potential.
Can being struck with a ball or elbow during play cause damage inside the eye? Yes. Blunt trauma can cause bleeding inside the eye, which is called a hyphema. The blood in the eye can cause increased pressure, which can result in permanent vision loss. Trauma associated with swelling of the eyelid, red eye, pain, or discharge should be evaluated by an ophthalmologist promptly.
What should happen if a chemical or cleaning solution splashes into a child’s eye? The first thing to do when any abnormal liquid gets into the eye is to immediately flush the eye with water. Rinsing the chemical out of the eye decreases the chance of long-term problems. The next step is to quickly contact your doctor or go to the emergency department for evaluation. It is important to know and tell your doctor the brand name of the chemical or solution to help the doctor determine appropriate treatment.
What are the most common causes of eye injuries in children? Pediatric eye trauma most often occurs at school or during play. Approved and tested eye and face protection is essential to prevent injuries.
Wearing protective goggles or a full-face mask at all times for sports such as hockey, racquetball, squash, and even baseball will help prevent eye trauma.
Do fireworks still cause eye injuries? Each year hundreds of individuals (often children) sustain serious eye injury from fireworks used without appropriate supervision and precautions. Fireworks should only be used if approved for use in the home, and children should never have access to either legal or illegal fireworks.
What should happen when a child gets an eye injury? A child that sustains an eye injury should seek immediate medical attention at an emergency room or directly from an ophthalmologist to assess visual function and carefully examine all the structures of the eye. Frequent examinations until the eye is completely healed are often necessary.
Forests in a water limited world under climate change The debate on ecological and climatic benefits of planted forests at the sensitive dry edge of the closed forest belt (i.e. at the ‘xeric limits’) is still unresolved. Forests sequester atmospheric carbon dioxide, accumulate biomass, control water erosion and dust storms, reduce river sedimentation, and mitigate small floods. However, planting trees in areas previously dominated by grassland or cropland can dramatically alter the energy and water balances at multiple scales. The forest/grassland transition zone is especially vulnerable to projected drastic temperature and precipitation shifts and growing extremes due to its high ecohydrological sensitivity. We investigated some of the relevant aspects of the ecological and climatic role of forests and potential impacts of climate change at the dryland margins of the temperate-continental zone using case studies from China, the United States and SE Europe (Hungary). We found that, contrary to popular expectations, the effects of forest cover on regional climate might be limited and the influence of forestation on water resources might be negative. Planted forests generally reduce stream flow and lower groundwater table level because of higher water use than previous land cover types. Increased evaporation potential due to global warming and/or extreme drought events is likely to reduce areas that are appropriate for tree growth and forest establishment. Ecologically conscious forest management and forestation planning should be adjusted to the local, projected hydrologic and climatic conditions, and should also consider non-forest alternative land uses.
Breastfeeding, Bottle-feeding, and Introduction of Solid Foods There is a great deal of cultural variability in terms of acceptable feeding practices and behaviors. Intergenerational factors contribute to patients' and families' eating and feeding practices. These cultural and generational contributing factors are pronounced during infancy and early childhood. It is during this period that decisions are made about breastfeeding or bottle-feeding, parental responses to crying cues in preverbal infants are formulated, and decisions are made regarding the introduction of solid foods. The timing and type of food that is introduced may be affected by the parents' and extended family's beliefs and cultural practices. Pediatricians should inquire about these issues and should try to elicit any dissonance between the parents' and extended family's food-related expectations. Minorities share a disproportionate burden of overweight and obesity. Specifically, the prevalence of overweight and obesity is highest among Hispanic/Latinos, American Indians, and African Americans. While some data do suggest that the obesity epidemic has stabilized in some communities, the health disparity between minorities and non-minorities has persisted. The length of time that immigrant mothers have lived in the United States and their degree of acculturation may adversely affect the rate of childhood obesity. Specifically, as families live for longer periods in the United States and become more acclimated to the American lifestyle, these immigrant families tend to acquire a more sedentary lifestyle, consume less wholesome diets, and purchase more fast foods. In addition to encouraging exercise and low-fat diets, pediatricians should consider the added barriers to healthy lifestyles, such as the availability of affordable fruits and vegetables in minority or inner-city urban populations, the built environment (safe parks and "green spaces"), availability of junk food in school vending machines, and cultural food preparation practices. Pediatricians should also inquire about the preparation method (eg, baked, fried, deep-fried) of culture-specific or traditional foods. Poor, minority, migrant, homeless, and other underserved populations are at risk for being food insecure (having a limited or uncertain supply of food) and the associated child health sequelae. Researchers have reported a relationship between food insecurity and overweight; developmental, behavioral, or academic problems; and other adverse health consequences.1 Immigrant families that lived in poverty in their country of origin may consider buying fast foods or purchasing food in abundance as a newly acquired privilege for themselves and their children. In some instances, the pendulum swings from food insecurity to more abundant but unhealthy foods. Pediatricians should screen all children (with or without weight loss or history of hunger) for the presence of food insecurity as a pediatric risk factor. Prompt referral to Women, Infants, and Children services; Supplemental Nutrition Assistance Program (formerly food stamp program); or other social services is recommended. Body Image Perceptions Parents' and patients' perception of body image and specifically what they consider normal or overweight body size is influenced by their culture. In Hispanic cultures, for example, parents often view overweight or obese babies and children favorably and consider them to be "healthy." 
Pediatricians should be aware that these perceptions or hunger-related past experiences exist and discuss healthy weight-to-height standards with parents. 1. Rose-Jacobs R, Black MM, Casey PH, et al. Household food insecurity: associations with at-risk infant and toddler development. Pediatrics. 2008;121:65–72 Chapter 3 Resources Resource 3A: Book chapter: "Promoting Healthy Nutrition" Hagan JF, Shaw JS, Duncan PM, eds. Bright Futures: Guidelines for Health Supervision of Infants, Children, and Adolescents. 4th ed. Elk Grove Village, IL: American Academy of Pediatrics; 2017:167-192. Bright Futures provides detailed information on well-child care for health care practitioners. This chapter specifically focuses on food and nutrition behaviors that are influenced by myriad environmental and cultural forces. Resource 3B: "Influence of Race, Ethnicity, and Culture on Childhood Obesity: Implications for Prevention and Treatment: A Consensus Statement of Shaping America's Health and the Obesity Society" Caprio S, Daniels SR, Drewnowski A, et al. Diabetes Care. 2008;31:2211–2221
Vilhjalmur Stefansson, above, was an early 20th Century explorer from Canada who became best known for spending a year living with the indigenous peoples inhabiting Inuit Nunangat, the Arctic regions of Greenland, Canada, and Alaska. He is generally accepted to be the first person to identify the Arctic Pole of Inaccessibility in his writings for the Geographical Review (Vol. 10, No. 3 (Sept. 1920), pp. 167-172). Drawing circles of 500-mile radius, he defined an area where the arcs intersected, called it a zone of “comparative inaccessibility”, and identified that “any point within it is less accessible than the North Pole”. Drawing isochronic lines from the positions of three ships – Peary, Berry and Nansen – he defined the “pole of inaccessibility”, describing the Pole as “the point within the arctic regions most difficult to access for any explorer who first goes as far as he can by ship and then pushes forward by the use of men and dogs hauling sledges.” After Stefansson’s definition, the term became widely used in reference to exploration of the Antarctic (Petrov 1959). Explorers used the same methodology to identify the point in the Antarctic furthest from the sea – the Antarctic Pole of Inaccessibility. In more recent times a study by Daniel Garcia-Castellanos and Umberto Lombardo in the Scottish Geographical Journal used a similar methodology and took to describing Poles of Inaccessibility as the “location furthest from a particular coastline” and as “the places on Earth that are furthest from any Ocean”. They presented an algorithm to calculate the spots furthest from the sea in major land masses and tabulated data for six of the Big 7 – Antarctica, Africa, North America, South America, Australia, Eurasia – as well as Great Britain, Greenland, Iberia, Madagascar and Point Nemo. In 2019, Chris Brown extended the concept of PIAs even further, defining the Point of Inaccessibility as the point furthest from any border (as opposed to the sea).
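As a toy illustration of the "furthest from any coastline" idea (not the published algorithm), the sketch below grid-searches a small invented landmass for the interior point that maximises the distance to its nearest coastline sample; both the landmass and its coastline points are assumptions made purely for demonstration.

```python
# Toy pole-of-inaccessibility search: maximise the minimum distance to the coast.
import itertools
import math

# Invented "coastline": the boundary of a 10 x 10 square, sampled every unit.
coast = [(x, y) for x in range(11) for y in range(11)
         if x in (0, 10) or y in (0, 10)]

best_point, best_dist = None, -1.0
for px, py in itertools.product(range(1, 10), repeat=2):   # interior grid points
    d = min(math.hypot(px - cx, py - cy) for cx, cy in coast)
    if d > best_dist:
        best_point, best_dist = (px, py), d

print(f"approximate pole of inaccessibility: {best_point}, "
      f"distance to nearest coast sample: {best_dist:.2f}")
```

For this square landmass the search lands on the centre, the point equidistant from all four sides, which is the behaviour one would expect from a distance-to-coast maximisation.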
History is the continuous study of people in societies, cultures and countries. Our aim in teaching History is that all students will develop an understanding that the society they live in has been shaped by developments in the past. Students at Ashley High School should understand that history is our record of what happened in the past and why. Ashley High School students study History in Years 7 to 9 as a requirement of the National Curriculum. Throughout Key Stage 3, students learn about British, European and World History through themed units:
- Year 7: The Romans, The Norman Conquest and Medieval Britain
- Year 8: Tudors and Stuarts, Transatlantic Slave Trade and The Industrial Revolution
- Year 9: The Twentieth Century World (focusing on the two World Wars and 20th Century Civil Rights). We finish off by researching Halton Throughout History.
KS4 students who study History will follow a programme whose content comprises the following elements:
- One period study (Germany 1890-1945)
- One thematic study (Health and Medicine)
- One wider world depth study (Conflict and Tension 1918-1939)
- One British depth study including the historic environment (The Normans)
A variety of resources and teaching techniques are used to encourage students to engage in learning and to become independent thinkers. Videos, interactive computer software, historical sources and a variety of textbooks are used both as classroom aids and in independent research. Assessment is an integral part of the learning process. Students are assessed at the end of each unit and Assessment for Learning is promoted in the classroom. Cross-curricular links with other subjects such as Personal Development have been developed to add to students' overall understanding of, and interest in, the units studied. Additionally, there are opportunities for educational visits to places such as the Slave Museum and Speke Hall. In October 2019 Ashley High School was awarded a quality mark gold award from the Historical Association.
Two-factor authentication (2FA) is the least complex version of multi-factor authentication (MFA) and was invented to add an extra layer of security to the – now considered old-fashioned and insecure – simple login procedure using a username and a password. Given the number of leaked login credentials for various websites (Yahoo, LinkedIn, Twitter to name a few), this extra layer is very much needed. One of the most well-known examples will occur when you try to log in on a site from a different machine or from a different location (resulting in a different IP). With 2FA-enabled login procedures, you may receive a text message providing you with a verification code. That code is needed to complete the login procedure. By definition, 2FA depends on two different methods of identity confirmation of the user. In the example above, the user knows the login credentials and has control over the phone that receives the text. Other factors that are often used are:
- Knowing a PIN or TAN code (ATM withdrawals, money transfers)
- Having access to an email account (when verification codes are sent by mail)
- Secret questions (often frowned upon as they are sometimes easy to guess)
- Physical keys (card readers, USB keys)
- Biometrics (fingerprint readers, iris scanners)
- Mobile devices that can scan barcodes or QR codes and calculate a login code for one-time use (Authy, Google Authenticator)
There are some alternatives to 2FA that can also be used in combination with 2FA or as one of the factors. Some examples are:
- Single Sign On (SSO): this is mostly used as a method to dampen the impact of using 2FA methods, particularly when giving an authenticated user access to several resources. The idea is that once the user has been identified and approved, the SSO software provides access to all platforms tied to the SSO. Given the possible impact of a breach, the login procedure for an SSO system is usually done by using an MFA procedure. Another consideration when choosing an SSO system is the consequences of a failure. If the SSO software goes offline, will this block the user from all the underlying resources?
- Time-based One-time Password (TOTP): this is a special authentication method that uses an algorithm that calculates a one-time login code based on the time. The server and the user that wants to log in both run simultaneous calculations with the same seed and time-stamp. If the results match, the user is granted access. Obviously the clocks need to be synchronized, although there usually is some leniency built into the procedure (up to a one-minute difference is generally allowed). Since losing the machine that runs the algorithm or any other way that leaks the algorithm could allow access to the wrong person, this method is generally used as one factor in an MFA method. (A minimal sketch of the TOTP calculation appears below.)
- Token Authentication: besides physical tokens, other tokens can be used as a means of authentication. Consider, for example, apps that run on your smartphone and can show an image to your webcam or play a sound which can be compared to an original. As this is not a very strong authentication method (for now), it is advisable to use it as one of the authentication factors and not the sole one.
Although a strong password is still a very effective means of authentication, there have been so many breaches resulting in leaked passwords that methods have been developed to combine with or replace the use of passwords. The combination of two authentication methods is called 2FA, and when we use more than two it’s called MFA.
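The sketch below shows the TOTP calculation described above in minimal form, assuming the common defaults of HMAC-SHA1, a 30-second time step, and 6-digit codes (none of which are specified in the text). A production system should rely on a vetted library rather than hand-rolled code.

```python
# Minimal TOTP sketch in the spirit of RFC 6238; the secret below is a throwaway test value.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step                  # shared time-based counter
    msg = struct.pack(">Q", counter)                         # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both server and client run the same calculation with the same shared secret;
# the codes match as long as their clocks agree within the allowed skew.
print(totp("JBSWY3DPEHPK3PXP"))
```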
The Ancient Greeks made pots from clay. Large pots were used for cooking or storing food and small bowls and cups were made for people to eat and drink from. Pots were also used for decoration, and when people died, they were cremated (burned) and their ashes were buried in pots. Just like today, fashions changed in Ancient Greece and so the size, shape and decorations used on pots developed over time. Decorations were quite simple at first, made up of lines and grooves like the jug in the second picture here. This later included more intricate designs, like zigzag patterns and geometric shapes painted around the pot. Over time, people started painting pots with scenes of human figures, nature, sometimes stories from Greek mythology or pictures of battles. The pot pictured in the bottom image has a more complicated design, with a woman (perhaps a goddess) and a deer. The Greeks used iron-rich clay, which turned red when heated in the kiln. Potters from Corinth and Athens used a special watery mixture of clay to paint their pots while the clay was still soft. After it was baked in the kiln, the sections of the pot they had painted with the clay would turn black, while the rest of the pot was red-brown. Sometimes they also did this the other way round.
- Clay - a kind of fine soil or rock
- Cremated - when a dead body is burned instead of being buried
- Geometric - decorative shapes in straight lines or circles
- Grooves - a track or ridge cut into something
- Intricate - made up of many details and small parts
- Kiln - a special, very hot oven where clay is baked
A collaboration between researchers from Cornell, Northwestern University and the University of Virginia combined complementary imaging techniques to explore the atomic structure of human enamel, exposing tiny chemical flaws in the fundamental building blocks of our teeth. The findings could help scientists prevent or possibly reverse tooth decay. The team’s paper, “Chemical Gradients in Human Enamel Crystallites,” was published July 1 in Nature. Cornell’s contribution was led by Lena Kourkoutis, associate professor in applied and engineering physics. Derk Joester, professor of materials science and engineering at Northwestern, directed the research. The paper’s co-lead authors are Northwestern doctoral student Karen DeRocher and postdoctoral researcher Paul Smeets. Thanks to its high mineral count, tooth enamel is a sturdy substance that can withstand the rigors of chewing, although excessive acid in the mouth can make it vulnerable to decay. While scientists have previously peeked into the crystallites that compose enamel, nanoscale images of its structure and chemical composition have been harder to come by. In one method, scanning transmission electron microscopy, or STEM, a beam of electrons is shot through a sample. But that process has its limits. “Enamel is mechanically a very, very strong material, but when you put it in the electron microscope, it’s very sensitive to the electron beam,” Kourkoutis said. “So compared to the crystalline materials that you find in electronics, for example, you can only put a fraction of the number of electrons into an enamel crystal. Normally, pushing down to the atomic scale means you have to put more electrons into the material. But if it damages the material before you get the information out, then you’re lost.” In recent years, Joester’s Northwestern group has imaged sensitive biological materials with atom probe tomography, a process that essentially strips atoms off a sample’s surface one at a time and reconstructs the structure of the material. At the same time, Cornell researchers at PARADIM (Platform for the Accelerated Realization, Analysis and Discovery of Interface Materials), a National Science Foundation-supported user facility, have advanced a form of low-temperature electron microscopy that can image the atomic structure of radiation-sensitive samples. The technique can also safely map a sample’s chemical composition by measuring how much energy is lost when the electrons interact with the atoms. “When you operate at low temperature, the material becomes more robust against electron beam damage,” said Kourkoutis, who directs PARADIM’s electron microscopy facility. “We are now working at the intersection between the developments in the physical sciences which have pushed electron microscopy to the atomic scale and the developments in the life sciences in the cryogenic field.” The two university groups linked up after Smeets, a member of Joester’s group, attended PARADIM’s summer school on electron microscopy in 2017. There, he learned how PARADIM’s cryogenic electron microscopy capabilities could complement Northwestern’s human enamel project. Smeets worked with Kourkoutis’ doctoral students Berit Goodge and Michael Zachman, Ph.D. ’18, co-authors of the new paper. The group performed cryogenic electron microscopy on enamel samples that were cooled with liquid nitrogen to around 90 kelvins, or minus 298 degrees Fahrenheit.
By combining their complementary techniques, the Cornell and Northwestern researchers were able to image an enamel crystallite and its hydroxylapatite atomic lattice. But all was not crystal clear: The lattice contained dark distortions – caused by two nanometric layers with magnesium, as well as sodium, fluoride and carbonate ion impurities near the core of the crystal. Additional modeling confirmed the irregularities are a source of strain in the crystallite. Paradoxically, these irregularities and the enamel’s core-shell architecture may also play a role in reinforcing the enamel, making it more resilient. The researchers say the findings could lead to new treatments for strengthening enamel and combating cavities. “On the foundation of what we discovered, I believe that atom probe tomography and correlative electron microscopy will also have tremendous impact on our understanding of how enamel forms, and how diseases like molar incisor hypomineralization disrupt this process,” Joester said. And mouths aren’t the only beneficiaries of cryogenic electron microscopy. Kourkoutis is also using the process to probe the chemistry in energy systems, such as batteries and fuel cells that contain a mix of soft electrolytes and hard electrode materials. Co-authors include researchers from Northwestern University and the University of Virginia. The research was supported by the National Institutes of Health’s National Institute of Dental and Craniofacial Research, the National Science Foundation and the University of Virginia.
The Brain: quizzes on the anatomy of the brain. Each of the quizzes below includes 15 multiple-choice style questions. If you get a question right the next one will appear automatically, but if you get it wrong we'll give you the correct answer. An overall score is given at the end of each quiz. Choose from the following:
- The main anatomical areas of the brain
- The anatomy of the cerebral cortex
For more quizzes on the anatomy and physiology of the nervous system, click here. Or if you fancy something different, try a French Quiz instead! Or how about an Astronomy Quiz? In this section we've added a few alternative study aids to help you along.
- Articles - Here you'll find a range of short articles on basic anatomy and physiology topics, complete with a few 'test yourself' questions for each one.
- Images and PDFs - Just in case you get tired of looking at the screen we've provided images and PDF files that you can print out and use for 'off-line' practice.
- Word Roots - When you learn the word roots, prefixes and suffixes contained within anatomical and medical terms, you can often work out what they mean. This can be a useful skill as you progress in your studies, so we've provided a dictionary to help you!
- Games - Finally in the resources section, we've added some simple games to make anatomy and physiology practice a little bit more fun.
Subskill: Reading Comprehension Concept: Facts and Details Grade Level: Upper Elementary How Plants Feed Themselves Have you ever wondered how plants feed themselves? Most plants create meals for themselves in the form of sugar. This process is called photosynthesis. The word ‘photo’ means light, and the word ‘synthesis’ means making. Photosynthesis means making food with light. If we were to put the ingredients for photosynthesis on a recipe card, here is what we would need: water, carbon dioxide from the air, chlorophyll from the cells of green plants, and sunlight for energy. Without these ingredients, plants could not make the sugar they need for food. Here is how the recipe would work: Photosynthesis begins at the roots of the plant. The roots suck up water from the ground, which travels to the leaves through tubes in the stems of the plant called xylem. Then carbon dioxide from the air, which is breathed out by animals, is absorbed into the leaves through tiny pores called the stomata. It is then taken to cells inside the leaves. These cells contain a green pigment called chlorophyll. The chlorophyll absorbs energy from the sunlight. The energy from the sunlight breaks down the water in the leaves and turns it into oxygen and hydrogen. The plant uses some of the oxygen to help it grow, and some of the oxygen is given off into the air. People and animals use oxygen to breathe. The hydrogen blends with the carbon dioxide inside the cells of the leaves to create food in the form of sugar for the plant. The plant stores the sugar in its leaves, stems and roots. The sugar makes the plant grow. Since most living things eat plants, it can be said that photosynthesis is the source of all life. The process of photosynthesis continually regenerates oxygen in our environment. Altogether, the ingredients used in the photosynthesis recipe feed almost all living things on Earth. Here are some questions to ask after listening to the passage: According to this passage, What is photosynthesis? How do plants feed themselves? What are some ingredients for photosynthesis?
Raised in a prominent Viennese family, Ludwig Wittgenstein studied engineering in Germany and England, but became interested in the foundations of mathematics and pursued philosophical studies with Moore at Cambridge before entering the Austrian army during World War I. The notebooks he kept as a soldier became the basis for his Tractatus, which later earned him a doctorate and exerted a lasting influence on the philosophers of the Vienna circle. After giving away his inherited fortune, working as a village schoolteacher in Austria, and designing his sister's Vienna home, Wittgenstein returned to Cambridge, where he developed a new conception of the philosophical task. His impassioned teaching during this period influenced a new generation of philosophers, who tried to capture it in The Blue and Brown Books (dictated 1933-35). From the late 'thirties, Wittgenstein himself began writing the materials which would be published only after his death. In the cryptic Logisch-Philosophische Abhandlung (Tractatus Logico-Philosophicus) (1922), the earlier Wittgenstein extended Russell's notion of logical analysis by describing a world composed of facts, pictured by thoughts, which are in turn expressed by the propositions of a logically structured language. On this view, atomic sentences express the basic data of sense experience, while the analytic propositions of logic and mathematics are merely formal tautologies. Anything else is literally nonsense, which Wittgenstein regarded as an attempt to speak about what cannot be said. Metaphysics and ethics, he supposed, transcend the limits of human language. Even the propositions of the Tractatus itself are of merely temporary use, like that of a ladder one can discard after having climbed up it: they serve only as useful reminders of the boundaries of our linguistic ability. This work provided the philosophical principles upon which the logical positivists relied in their development of a narrowly anti-metaphysical standpoint. But just as his theories began to transform twentieth-century philosophy, Wittgenstein himself became convinced that they were mistaken in demanding an excessive precision from human expressions. The work eventually published in the Philosophical Investigations (1953) pursued a different path. In ordinary language, he now supposed, the meaning of words is more loosely aligned with their use in a variety of particular "language games." Direct reference is only one of many ways in which our linguistic activity may function, and the picturing of reality is often incidental to its success. Belief that language can perfectly capture reality is a kind of bewitchment, Wittgenstein now proposed. Thus, philosophy is properly a therapeutic activity, employed to relieve the puzzlement generated by (philosophical) misuses of ordinary language. In particular, the philosophical tradition erred in supposing that simple reports of subjective individual experience are primary sources for human knowledge. Efforts to employ a private language as expressions of interior mental states, for example, Wittgenstein argued to be an avoidable mistake that had caused great difficulties in the philosophy of mind. His views on this issue were a significant influence on Ryle and others.
In his later work, Wittgenstein applied this method of analysis to philosophical problems related to epistemology, mathematics, and ethics.
The Earth presents challenges, and life responds. No matter what the extremes, living things find a way to tolerate whatever levels of temperature, rainfall, sunshine, wind, slope, and soil chemistry a habitat may provide. Occasionally, even a generally moderate region will feature a certain corner that presents difficulty as well as opportunity, and plants and animals find a way to make it work for them. The Mediterranean climate is not known for its harshness. Its narrow temperature range and moderate rainfall make it attractive to both humans and a wide variety of flora and fauna suited to the relatively easy life. But this climatic zone can also produce some regular difficulties, and the occasionally rigorous test. In these particular spots, sufficient rainfall and the proper soil profile can combine to provide a unique tableau called a vernal pool. Vernal pools are ephemeral habitats that occur during years of decent rainfall in areas where the soil profile includes a clay pan or other impermeable layer. Unable to percolate deep into the soil, standing water accumulates in relatively flat areas that feature shallow depressions. The pools can be from several feet across to a number of acres in size. Vernal pools occur in a number of places around the world, commonly in Mediterranean climates, although not exclusively. California is a part of the world relatively rich with them, and seems to be the only place with a large collection of species entirely dependent on that habitat. The pools form in many parts of the state, especially along its coastal terraces, in the Central Valley, and in weathered volcanic areas. These settings frequently develop the clay soil layers that will hold rainwater in pools at the surface. Once the water is thus situated, it diminishes mainly by evaporation. With the end of the rainy season, desiccation begins, and these sites become dry, mudcracked land, full of yellowed, dead vegetation, appearing to be entirely unremarkable. Because vernal pools feature both a flooded stage and longer dry periods that can extend for years in drought events, the plants that can live there must be especially adaptable. The group of plants that might grow in a close-by, better drained landscape cannot tolerate both the flooding and the dryness, leaving the space more available for specially adapted vernal pool plants and animals. The pools develop largely as closed systems, fed by rainwater falling in the immediate vicinity rather than by sources flowing in from a distance. As such, nutrients necessary for plant growth are limited to what is already there on the surface. As plants grow in this water that is usually no more than a few inches deep, they must be able to deal with the variable water temperatures and pH levels throughout the day and night, affected by sun intensity and photosynthetic activity. Those qualities of active-living flexibility impress, but are nothing when compared to what these flora must endure once spring fades into summer and green turns to brown. Seeds and spores, once produced, must find a safe home under the detritus of the dying vegetation, or partially buried in the dried mud. There they may sit for not only the duration of one summer, fall, and into winter, but perhaps years of bleak dryness. And yet they persist, and when the rain comes again, these stored reproductive entities, collectively known as propagules, spring to life to create another round of inundated greenery.
This process, while remarkable for the plant life, is astonishing when considering that an ecosystem of animals also goes through a similar cycle. Amphibians such as the California tree frog, western spadefoot toad, and various salamanders use vernal pools as temporary safe havens in which to lay their eggs. Their tadpoles must make quick progress to grow legs and be ready for a terrestrial adult life in the time a vernal pool exists. Even more remarkable are the fairy shrimp. Having been around for some 400 million years, they are considered one of the oldest crustacean groups, and their lifecycle is astounding. As their eggs, or cysts, come to life with the winter rains, a number of different fairy shrimp species get to eating, growing, molting, and growing some more. Most are from a few millimeters to a few centimeters in length, and reach maturity in one to three weeks. Time is of the essence, for they must reproduce while there is still water available. This gives them only a few months at best, after which time they die in the dryness. But buried in the drying mud are the cysts of the next generation. They are prolific—in one East African vernal pool they were 9,600 to the square meter. Fairy shrimp need those numbers because they may have to endure both blistering heat and freezing cold not only for one round of seasons, but for years. Or decades. Maybe even centuries. Their cysts have been experimentally subjected to temperatures from well over 250 degrees to near absolute zero, and even sent into the vacuum of space…after which trials they hatched. Though fairy shrimp would seem to be forever tethered to one small location for generations, the cysts can actually secure transportation in the stomachs of birds who unknowingly scoop up the microscopic eggs while feeding. They can withstand digestive juices, and find themselves deposited in a distant vernal pool, there to wait and then bring a new population to a new location. Perhaps even more ingenious in the fairy shrimp life strategy is that not all the cysts burst into life with the onset of rain. Some stay dormant, which saves the species if the first rains are just a brief, weak showing. An insufficiently filled and short-lived vernal pool will see its overeager inhabitants fall short of reproducing. By holding back, the cysts that remain await possibly more abundant rainfall, averting doom for that pool’s population. Other animals such as isopods, copepods, and various insects also inhabit the vernal pools, and their complex interdependencies and natural histories are full of other tales of amazing adaptation. Like so many of nature’s settings, the scene mixes incredible fragility and mind-boggling toughness and adaptability. Left to their own natural cycles, vernal pools would persist forever, marching on through both common yearly cycles and the harshest of climatic swings. Sadly, however, vernal pools cannot sustain themselves if their clay pan is broken. Once the soil is sufficiently disturbed, falling rain will simply sink into the deep earth as it does most everywhere. Due to rampant development, roadbuilding, agriculture, and overgrazing, California has lost 95 percent or more of its original vernal pool habitat, and such places are endangered in other places around the world. In fact, in the region of the Mediterranean Sea itself, vernal pool habitat is rare despite having the weather and soil types that should support them. Thousands of years of grazing may have wiped away such mini-environments there. 
The Mediterranean climate is where everyone wants to be—there’s just too much development in these zones, and it’s hard to drum up concern over tiny plants and animals that may only be visible for a few months of the year, if that. Still, the fabulous adaptability of vernal pool biota should inspire us to preserve the natural wonder of their unique abilities. Vernal pools can be found in many parts of California. The Benchmark California Road and Recreation Atlas can guide you all over the state with gorgeous shaded relief mapping. Get one from Maps.com.
caption: A San Diego County vernal pool soaked with winter rains. As the pool evaporates, a sequence of different plant species encircling the water will rise in turn. Many grow nowhere else. source: Flickr: Joanna Gilkeson/USFWS (CC by 2.0 Generic)
caption: A ranger and biologist search for fairy shrimp, an animal that can survive the potentially long dry periods between the rains that fill the pools. source: Flickr: Joanna Gilkeson/USFWS (CC by 2.0 Generic)
caption: A vernal pool depression in summer or drought is barely recognizable. Hard to believe that life is in there, just waiting for water. source: Camp Pendleton USMC website: Lance Corporal Asia Sorenson (Public domain)
caption: Fairy shrimp swim on their backs. Ducks love them, but by living in vernal pools, they are safe from predation by fish. Habitat loss is their main worry—many species are threatened or endangered. source: Wikimedia Commons: Heide Couch/USAF (Public domain)
caption: Various amphibians, like the rare tiger salamander, depend upon vernal pools as a place to lay their eggs, and provide a home for hatchlings to develop. source: Travis Air Force Base website: Heide Couch/USAF (Public domain)
Sever’s disease, also known as calcaneal apophysitis is a common bone disorder that occurs during childhood. The disease is defined as an inflammation of the growth plate in the heel. When a child has a growth spurt, his heel bone grows faster than the muscles, tendons, and ligaments in his leg. This disease is a result of overuse. The people who are most likely to be affected by this disease are children who are in a growth spurt, especially boys who are from the ages of 5 to 13 years old. 60% of children with Sever’s disease have both heels involved. Symptoms of this disease are heel pain that intensifies during running and jumping activities. The pain is typically localized to the posterior part of the heel. Symptoms may be severe, and they can easily interfere with daily activities. Children who play soccer, baseball, and basketball are more likely to develop Sever’s disease. Your doctor will diagnose your child based on his or her symptoms, x-rays are generally not helpful in diagnosing this disease. Your doctor may examine both heels and ask your child questions about his or her activity level in sports. Your doctor may then use the squeeze test on your child’s heel to see if there is any pain. Nevertheless, some doctors might still use x-rays to rule out any other issues such as fractures, infections, and tumors. Sever’s disease can be prevented by maintaining good flexibility while your child is growing. Another prevention method is to wear good-quality shoes that have firm support and a shock-absorbent sole. Sever’s disease can be treated by ceasing any activity that causes heel pain. You should apply ice to the injured heel for 20 minutes 3 times a day. Additionally, orthotics should be used for children who have high arches, flat feet, or bowed legs. If you suspect your child has Sever’s disease, you should make an appointment with your podiatrist to have his or her foot examined. Your doctor may recommend nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen or naproxen to relieve pain. In more severe cases, your child may need a cast to rest his or her heel. Fortunately, Sever’s disease does not cause long-term foot problems. After treatment, your child should start to feel better within two weeks to two months.
ASVAB Arithmetic Reasoning Subtest: Completing a Number Sequence The Arithmetic Reasoning (AR) subtest of the ASVAB often includes questions that test your ability to name what comes next in a sequence of numbers. Generally, these problems are the only AR questions that aren’t word problems. However, sequence questions do test your ability to do arithmetic and to reason, because you have to determine how the numbers relate to each other. And to do this, you must be able to perform mathematical operations quickly. Suppose you have a sequence of numbers that looks like this: 1, 4, 7, 10, ? Each new number is reached by adding 3 to the previous number: 1 + 3 = 4, 4 + 3 = 7, and so on. So the next number in the sequence is 10 + 3 = 13, or 13. But of course, the questions on the ASVAB aren’t quite this simple. More likely, you’ll see something like this: 2, 4, 16, 256, ? In this case, each number is being multiplied by itself, so 2 × 2 = 4, 4 × 4 = 16, and so on. The next number in the sequence is 256 × 256, which equals 65,536 — the correct answer. You may also see sequences like this: 1, 2, 3, 6, 12, ? In this sequence, the numbers are being added together: 1 + 2 = 3, and 1 + 2 + 3 = 6. The next number is 1 + 2 + 3 + 6 = 12. So the next number would be 24. Finding the pattern To answer sequence questions correctly, you need to figure out the pattern as quickly as possible. Some people, blessed with superior sequencing genes, can figure out patterns instinctively. The rest of the population has to rely on a more difficult, manual effort. Finding a pattern in a sequence of numbers requires you to think about how numbers work. For instance, seeing the number 256 after 2, 4, 16 should alert you that multiplication is the operation, because 256 is so much larger than the other numbers. On the other hand, because the values in 1, 2, 3, 6, 12 don’t increase by much, you can guess that the pattern requires addition. Dealing with more than one operation in a sequence Don’t forget that more than one operation can occur in a sequence. For example, a sequence may be “add 1, subtract 1, add 2, subtract 2.” That would look something like this: 2, 3, 2, 4, ? Because the numbers in the sequence both increase and decrease as the sequence continues, you should suspect that something tricky is going on. Make sure to use your scratch paper! Jot down notes while you’re trying to find the pattern in a sequence. Writing your work down helps you keep track of which operations you’ve tried.
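As a rough illustration of the "find the pattern" advice (not part of the ASVAB material itself), the small helper below checks a few of the patterns discussed above -- a constant difference, squaring the previous term, and summing all earlier terms -- and predicts the next number in a sequence.

```python
# Illustrative sequence helper; the rules checked are only the ones from the examples above.
def next_term(seq):
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                              # constant difference (add 3, add 3, ...)
        return seq[-1] + diffs[0]
    if all(b == a * a for a, b in zip(seq, seq[1:])):     # each term is the previous one squared
        return seq[-1] * seq[-1]
    if all(seq[i] == sum(seq[:i]) for i in range(2, len(seq))):
        return sum(seq)                                   # each term is the sum of all earlier terms
    return None                                           # pattern not recognised

print(next_term([1, 4, 7, 10]))      # -> 13
print(next_term([2, 4, 16, 256]))    # -> 65536
print(next_term([1, 2, 3, 6, 12]))   # -> 24
```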
Instructions: For this investigative phenomenon, you will need to determine the percent yield of magnesium oxide from the given reaction to determine if it is a useful commercial process. Record your data and calculations in the lab report below. You will submit your completed report.
Percent Yield Lab
The objective of this lab is to examine the reaction between magnesium metal and oxygen gas. No hypothesis is needed for this lab. Your theoretical yield calculation serves as your prediction for what you expect the lab to produce, and that will be determined later in the lab.
1. Select and weigh a clean, dry crucible to find the mass of the crucible and lid.
2. Record the mass of the empty crucible and lid in your data table.
3. Cut a small piece of metal from the magnesium ribbon.
4. With wool, remove the oxidized magnesium.
5. Roll the magnesium strip into a loose coil and place it inside the crucible.
6. Weigh the mass of the crucible, lid and magnesium inside.
7. Record the mass of the crucible, lid and magnesium inside.
8. Place the crucible with the magnesium inside on the burner for 10 minutes.
9. Turn off the burner and wait 5 minutes.
10. Weigh the crucible again and record the data.
Controlled variables: scale, heating time. These are the controlled variables because they do not change throughout the experiment.
Independent variable: size of magnesium strip. This is the independent variable because it is the factor being changed in the experiment.
Dependent variable: weight. This is the dependent variable because it is the factor that is the result of the change in the experiment.
Type the data in the data table below. Don’t forget to record measurements with the correct number of significant figures.
| Data | Trial 1 | Trial 2 |
| Mass of empty crucible with lid | 26.698 g | 26.687 g |
| Mass of Mg metal, crucible, and lid | 27.060 g | 27.046 g |
| Mass of MgO, crucible, and lid | 27.291 g | 27.273 g |
Show your calculations for each of the following. Remember, calculations should follow rules for significant figures.
1. Write the balanced chemical equation for the reaction you are performing.
2 Mg(s) + O2(g) → 2 MgO(s)
2. Calculate the mass of magnesium metal used in each trial.
- Trial 1: 0.362 g
- Trial 2: 0.359 g
3. Calculate the actual yield of magnesium oxide for each trial.
- Trial 1: 0.593 g
- Trial 2: 0.586 g
4. Magnesium is the limiting reactant in this experiment. Calculate the theoretical yield of MgO for each trial.
- Trial 1: 0.600 g
- Trial 2: 0.596 g
5. Determine the percent yield of MgO for your experiment for each trial.
- Trial 1: 98.8%
- Trial 2: 98.3%
6. Determine the average percent yield of MgO for the two trials.
Questions and Conclusions:
- Describe the process that was used in this lab to create magnesium oxide, specifically identifying the type of chemical reaction. Explain why the product had a higher mass than the reactant, and how this relates to conservation of matter. Mg is weighed in a crucible and then heated to create a reaction with O₂ to produce MgO. This is an oxidation reaction because the magnesium gained O and oxidized to create MgO. The product had a higher mass than the reactant because Mg bonded to O to form MgO, gaining the mass of the added O atoms. This relates to the conservation of matter because the law of conservation of matter states that matter is neither created nor destroyed, but can only be converted from one form to another.
- What sources of error may have contributed to the percent yield not being 100 percent? Think about things that may have led to inaccurate measurements or where mass of the product could have been lost if this experiment was conducted in a physical laboratory. During the step where the Mg strip is brushed with the wool, depending on the thickness of the strip and how much it is brushed, could have led to errors in measurement and weighing. - When conducting this experiment, some procedures call for heating the substance several times and recording the mass after each heating, continuing until the mass values are constant. Explain the purpose of this process and how it might reduce errors. This process could reduce errors because it would provide a more reliable measurement of weight. - Your company currently uses a process with a similar cost of materials that has an average percent yield of 91 percent. If the average percent yield of this process is higher than that, this could save the company money. What is your recommendation to the company? Please support your recommendation using your data, calculations, and understanding of stoichiometry gathered from this lab. My recommendation to the company would be to use lower costing materials and purchase more in order to increase production and yield. This is supported by the information in my lab because the average percent yield increased when more Mg was added.
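For readers who want to check the arithmetic, here is a small sketch that reproduces the Trial 1 percent-yield calculation from the reported masses. The molar masses are rounded standard values, which is why the result can differ slightly from a hand calculation.

```python
# Worked check of the Trial 1 stoichiometry from the lab report above.
M_MG, M_MGO = 24.305, 40.304     # g/mol, rounded standard molar masses

crucible     = 26.698            # g, empty crucible and lid (Trial 1)
crucible_mg  = 27.060            # g, crucible, lid and Mg
crucible_mgo = 27.291            # g, crucible, lid and MgO

mass_mg = crucible_mg - crucible         # 0.362 g of Mg used
actual  = crucible_mgo - crucible        # 0.593 g of MgO actually recovered

# 2 Mg + O2 -> 2 MgO is a 1:1 mole ratio of Mg to MgO
theoretical = (mass_mg / M_MG) * M_MGO   # ~0.600 g of MgO expected
percent_yield = 100 * actual / theoretical

print(f"theoretical yield: {theoretical:.3f} g")
print(f"percent yield:     {percent_yield:.1f} %")
```

Running this gives roughly 0.600 g theoretical yield and a 98.8% percent yield, in line with the values reported in the lab report.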
Hepatitis B infection is an established cause of acute and chronic hepatitis and cirrhosis. It is spread primarily through unsafe sex or intravenous drug use. However, if a mother has a HepB infection, the fetus can be infected. The Hepatitis B vaccine is administered to all USA infants at birth, 4 months, and 6 months because the vaccine can help prevent an infected infant from developing Hepatitis B disease. In 2007 there was only a 1 in 480 risk of a live birth to a HepB-positive mother according to reported cases (the CDC estimated that due to potential unreported cases, the risk could be as high as 1 in 216). The HepB vaccine introduced in 1991 contained the known neurotoxins thimerosal (mercury preservative) and an aluminum adjuvant, and HepB vaccines today still contain an aluminum adjuvant (see Does Aluminum cause Vaccine-Injury?). The introduction of the HepB vaccine is aligned from a timing perspective with the sharp increase in autism in the early 1990’s (see Autism Prevalence and Vaccine Introduction). In 2009, a scientific study indicated that boys who received HepB vaccine had a 3x greater risk of developing autism, and a 9x greater risk of needing special education services. A 2011 study found that the HepB vaccine changes gene expression, including the expression of seven genes that are biomarkers for liver injury (HERE). In 2000, due to concern over mercury-containing HepB vaccines given to new-born infants, the CDC temporarily stopped recommending the HepB vaccine for all infants. Instead, they recommended that hospitals test pregnant women for HepB and only administer the HepB vaccine to the very small number of infants born to HepB-positive mothers. This sound policy, which matched the schedules of many developed countries (such as the UK, Denmark, Netherlands, Switzerland, Sweden, Norway, Finland, Ireland, Iceland, and Japan), was ended by 2001 when the vaccine was routinely recommended again. As discussed in Why Vaccinate a Baby for HepB?, the Hepatitis B vaccine may not benefit a newborn whose mother is not HepB-positive since immunity from the vaccine often wanes prior to the teen years. A case could be made that the USA should again match the schedules of many other developed countries by no longer recommending Hepatitis B vaccines for infants unless the mother is HepB-positive. A new-born baby that is only a few hours old may not have an immune system that is ready to handle the aluminum adjuvant and other components of this vaccine. Pregnant women in the USA could request to be tested for Hepatitis B, and then to have the HepB vaccine administered to an infant only if the HepB test results are positive. This vaccine might instead be considered for administration during the teen years, prior to when an individual might be engaging in the activities that are risk factors (unsafe sex and illegal intravenous drug use).
WHO (2009). Progress towards global immunization goals. Geneva: WHO.
Centers for Disease Control and Prevention. Global progress toward universal childhood hepatitis B vaccination, 2003. MMWR 2003;52:868-70.
Technology and Medicine
Download a printable version of Rome Lesson 7: Technology and Medicine (PDF 388K). Requires free Adobe Acrobat.
Students will learn about Roman architecture, technology, and medicine by becoming teachers for a day. Students will participate in a class discussion about Rome's contributions in these areas and then work in small groups to become experts in one aspect of Roman technology or medicine. They will then share this knowledge with their classmates by teaching what they have learned and having their classmates participate in an activity where they will have to apply what they have learned. Students will then practice their evaluation skills by reviewing one another's performance. A final class discussion about the technological and medical contributions of the ancient Romans will summarize the ideas learned.
World History, Social Studies, Science, Math, Engineering Education, and Communication Arts
Grade Level: 6-12
Relevant National Standards:
- View video clips illustrating the importance of education and learning in ancient Rome and share these discoveries with others.
- Participate in a class discussion about the pursuit of knowledge and technology in ancient Rome.
- Work as a class to create a scoring guide that will be used as an evaluation tool by both the teacher and students' peers.
- Work in small groups to conduct research and become experts on an assigned topic related to the technology or medicine of ancient Rome.
- Work in small groups to design a lesson that they will use to teach their classmates about the topic they have researched.
- Be teachers for a day and teach their classmates about a topic related to ancient Roman technology or medicine.
- Participate in assorted classroom activities that require them to demonstrate their learning about the topics presented by each group.
- Evaluate the effectiveness of their classmates using a scoring guide created by the class.
- Participate in a class discussion about the technological and medical contributions of the ancient Romans and their impact.
McREL Compendium of K-12 Standards Addressed:
Standard 9: Understands how major religious and large-scale empires arose in the Mediterranean Basin, China, and India from 500 BCE to 300 CE.
Standard 11: Understands major global trends from 1000 BCE to 300 CE.
Standard 2: Understands the historical perspective.
Standard 1: Uses a variety of strategies in the problem-solving process.
Standard 2: Understands and applies basic and advanced properties of the concepts of numbers.
Standard 3: Uses basic and advanced procedures while performing the processes of computation.
Standard 13: Understands the scientific enterprise.
Standard 4: Gathers and uses information for research purposes.
Standard 5: Uses the general skills and strategies of the reading process.
Standard 7: Uses reading skills and strategies to understand and interpret a variety of informational texts.
Listening and Speaking
Standard 8: Uses listening and speaking strategies for different purposes.
Thinking and Reasoning
Standard 1: Understands the basic principles of presenting an argument.
Standard 3: Effectively uses mental processes that are based on identifying similarities and differences.
Standard 6: Applies decision-making techniques.
Working with Others
Standard 1: Contributes to the overall effort of a group.
Standard 4: Displays effective interpersonal communication skills.
Standard 5: Demonstrates leadership skills.
This should take four 90-minute class periods or seven to eight 50-minute class periods, plus additional time for extension activities. Note: The amount of time needed will vary depending on the number of groups and the length of their presentations. - Video clips necessary to complete the lesson plan are available on The Roman Empire in the First Century Web site. If you wish to purchase a copy of the program, visit the PBS Shop for Teachers [Purchase DVD or Video]. - Teachers for a Day handout [Download PDF here (219k)], part of this lesson plan. - Access to Internet and other primary library resources for conducting research. - Access to word processing and multimedia presentation software (such as Power Point). - Assorted art and craft supplies. 1. Begin by explaining to students that while the Romans were not great inventors of machines and tools (because they had so much slave labor), they are well known for their use of technology in their architecture as well as their medical system. In addition, they gave the world Roman numerals and the upper classes held education and the pursuit of knowledge in high regard. This can be seen by having students view the clips Episode 4: Pliny the Elder and Pompeii [watch clip, duration 1:32]. 2. Discuss the importance of the pursuit of knowledge and use of various technologies in ancient Rome using questions like: 3. Using content from The Roman Empire in the First Century, including Baths, as well as the Related Resources in this lesson plan, explain to students that they will become teachers for a day. They will work as a small group to instruct other students about a specific aspect of Roman technology or medicine. Distribute the Teachers for a Day handout [Download PDF here (219k)] and review the requirements for completion of the project. - How was Pliny the Elder's pursuit of knowledge supported by the Emperor Vespasian? - How did Pliny the Elder's quest for knowledge lead to his death with the eruption of Mt. Vesuvius? - What sorts of discoveries were made by Pliny the Elder, and how scientific do you think these discoveries were, based on what you saw in the video clips? - From the video clips, how do you know that learning about the world around them was important to the ancient Romans? 4. Assign students to groups and have each group draw a number between one and seven. Groups will be assigned their teaching topic for the day by matching their number with the corresponding topic on the Teacher for a Day topics list. 5. Provide students with class time to complete their research and develop their lessons. Assign each group a specific day to "teach" their classmates about what they have learned. Stress the importance of having a hands-on activity for students to practice and demonstrate their learning. 6. Have students teach the class and grade their classmates' performance on the practice activities. Students should complete a scoring guide to evaluate the group's effectiveness in presenting what they have learned. - Accuracy of the information presented - Inclusion of all lesson planning elements listed above - Participation of all group members - Organization and preparedness - Quality of presentation and materials - Overall effectiveness - Did students really learn from your class? 7. 
As a closing activity, facilitate a class discussion about the impact of the technology and medicine developed by the ancient Romans using questions such as: - When you look at the characteristics of Roman architecture, how are these still used in modern construction and why are they important elements? - How could adopting an ancient Roman point of view about diet, exercise, and caring for the body benefit the American public if we made it common practice today? - When looking at the design of traditional Roman cities and how they compare to cities in America today, what are the similarities that can be drawn between the two? - Clean water was critically important to the Roman Empire, as it is today. Discuss how the developing countries of the world could use the basic ideas learned and practiced by the ancient Romans to provide clean water sources for their people. - Of the topics you studied, which do you think had the greatest impact on the Roman people? The world? Which still continues to impact us today? - Students could receive participation grades for class discussion activities and being attentive during group presentations. - Completion grades could be assigned for each scoring guide that is completely finished. - Time on task or group work grades could be given in the form of points or participation grades for the completion of all aspects of the group teaching project. - All groups will receive a completed scoring guide from the teacher and their classmates evaluating their performance. These grades could be averaged and recorded in terms of points or percentages. - Students could receive a completion grade for doing all activities assigned by "student" teachers. 1. Think about the technological advances that have taken place in the world over the past ten years. Make a list of items that are common today but were not typically part of the American lifestyle ten years ago. Discuss how these advances have changed our lives in both positive and negative ways. Look at the list and decide which of these inventions will be considered the most significant when students 200 years from now are studying history. 2. Working as a class, construct a scale model of a Roman city. It could be a fictional city or one from history. In it, include all of the architectural elements you learned about in class. Be sure to use Roman numerals when labeling anything requiring numbers. Be sure the layout of the city is consistent with that of a Roman city. NOVA Online: Baths of Caracalla [http://www.pbs.org/wgbh/nova/lostempires/roman/day.html] provides a detailed tour and a description of all areas of the bath house. NOVA: Roman Bath [http://www.pbs.org/wgbh/nova/lostempires/roman/] describes the construction of a Roman bath. There is also information on aqueducts, including a game where students can construct an aqueduct. Ancient Roman Architecture [http://www.geocities.com/SoHo/Workshop/5220/ancient/roman.html] provides information about typical Roman design features as well as pictures of some of Rome's most famous structures. Nova Roma: On Roman Numerals [http://www.novaroma.org/via_romana/numbers.html] describes the number system developed by the ancient Romans. It also provides a conversion feature that allows you to type in a number and see it represented as a Roman numeral. Ask Dr. Math: Roman Numerals [http://mathforum.org/dr.math/faq/faq.roman.html] explains how Roman numerals are read and used to work out a variety of math problems. 
It also explains the use of an abacus to calculate complex math problems. Teacher Net [http://members.aol.com/TeacherNet/AncientRome.html] has a complete listing of resources related to Rome and various aspects of Roman technology and life. The Medicine in Ancient Rome Web pages [http://www.historylearningsite.co.uk/medicine_in_ancient_rome1.htm] on the History Learning Site [http://www.historylearningsite.co.uk] explore various medical practices. They also show what the Romans did to prevent disease when designing cities and caring for themselves. (Requires free Adobe Acrobat.)
Download a printable version of Rome Lesson 7: Technology and Medicine (PDF 388K)
Download a printable version of Teachers for a Day handout (PDF 219K)
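As a companion to the Roman numeral resources listed above, here is a small Python sketch of the integer-to-Roman-numeral conversion those sites perform; it is an illustrative add-on, not part of the original lesson materials.

```python
# Convert a positive integer (1-3999) to a Roman numeral, as the
# conversion tools referenced above do.

PAIRS = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Return the Roman numeral for n (1 <= n <= 3999)."""
    if not 1 <= n <= 3999:
        raise ValueError("Standard Roman numerals cover 1-3999")
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

if __name__ == "__main__":
    print(to_roman(79))    # LXXIX, the year of the Vesuvius eruption (CE)
    print(to_roman(2024))  # MMXXIV
```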
The growing use of technology in education is forcing us to rethink our definition of literacy. The Cambridge Dictionary defines literacy as the ability to read and write and as a basic skill or knowledge of a subject. However, technological advances are radically changing the way students access content, interact with content, and share it with others, requiring a whole set of skills beyond the traditional practices of reading and writing. As the researchers Coiro, Knobel, Lankshear, and Leau explain in their 2008 book, Handbook of Research on New Literacies, as technology alters the literacy experience, students will have to be able to adapt to new technologies while at the same time learning how to learn effectively with these new technologies. This is forcing educators to develop new teaching approaches and to expand their understanding of what it means to be literate in the digital age. What Is Digital Literacy? Like many new concepts in the education field, the term digital literacy is so wide-ranging that it can be confusing. Also, educators don’t agree on the use of this term, and often concepts such as Internet literacy and 21st century skills are used interchangeably with digital literacy, even though they don’t mean exactly the same thing. A team from North Carolina State University, led by Hiller Spires, a professor of literacy and technology, has developed a very straightforward definition of digital literacy. They see it as having three different practices that learners are using repetitively: - Finding and consuming digital content - Creating digital content - Sharing and communicating digital content In order to do these practices effectively, learners must be able to critically evaluate the digital content they are viewing. If they are unable to do this, learners can become overwhelmed by the mass of digital content available, which may prevent them from making good use of the technology. Rather than identifying and exploring the digital content, the search process and technology become the focus. So knowing how to critically evaluate digital content at the same time you are locating, consuming, creating, and communicating it is key to digital literacy. Also, locating, consuming, creating and communicating digital content for educational purposes requires students to have specific skills and knowledge. What Skills Do Students Need To Be Digitally Literate? As mentioned earlier, with the continuing advances in educational technology, students are going to be in an ongoing process of learning new technologies and how to apply them to their learning. However, there are some basic skill sets related to the three practices that Spires’ team outlined that must be taught to students no matter what technology a student uses. Researching in an Online Environment Students must be prepared for researching in an online environment. They must be taught skills that are critical to finding and using digital content. These include: - Domain name knowledge - Working knowledge of how to use search engines and browsers - How to use punctuation to get better search results - Understanding how Google creates its search lists - General knowledge of available digital resources that have been curated by reliable sources, such as the Smithsonian and History Channel digital databases. - Knowing how to find who is publishing particular site content Evaluating Online Information Students need to learn how to evaluate the reliability and veracity of the information they find online. 
They need to be able to answer the following questions: - Is the site legitimate, or is it a hoax? - Is the author an expert or a non-expert? - Is the information current or dated? - Is the data neutral or biased? Reading Online Information Digital text that is read through the Internet is interactive. It can contain hyperlinks, audio clips, images, interactive buttons, share features, and comments. This forces readers to interact with the text differently than they would with a book or journal. As readers review the information, they have to decide how they want to explore the information and how deep they want to go. Students have to learn how to navigate this interactivity effectively and get the information they need efficiently without getting lost. Creating and Communicating Digital Content Creating digital content is a big part of digital literacy. Unlike traditional writing, which is more of a personal activity, digital content is created with the sense of being shared. Students can now use a variety of digital writing tools that are easy to use, participatory and collaborative, and facilitate creating online communities. This includes social networking sites like Facebook and Twitter that require the student to understand and use information from many sources and in multiple formats. Students can also use tools like YouTube, SlideShare, and podcasts to create video and audio and share it. Learning Appropriate Online Behavior Students also need to learn that there is a responsibility that comes with using digital tools to create things and share them with the world. They need to learn how to be good digital citizens and what constitutes appropriate online behavior in relation to: - Cyber bullying - Legality of online material used - Privacy and safety while traveling the digital world Where do we go from here? As technology continues to evolve, a student’s ability to create, share, and understand meaning and knowledge in a digital world will become more critical, not only for their success in education but in all aspects of their life. No matter how you define digital literacy or what you call it, education technology will continue to change pedagogy and learning and impact how students learn and respond to content. We will all need to be able to adapt to this changing environment and develop skills to navigate it effectively. And digital learning affects the development of products for the education market. There are opportunities for publishers to build “how-to” strategies for the best use of technology as well as best practices in using technology to develop responsible digital citizens.
Silvology is the biological science of studying forests, incorporating the understanding of natural forest ecosystems, and the effects and development of silvicultural practices. The term complements silviculture, which deals with the art and practice of forest management. silvology, silvologist n. silvological, silvologic, silvologous adj. Silvology is based on the Latin silva (forests, woods) and the Ancient Greek-derived suffix -ology (study of). A forest ecosystem is an area of forest consisting of the biotic elements (e.g. plants, mammals, insects, fungi, bacteria) and abiotic elements (e.g. soil, water, carbon, nutrients, sunlight). Trees and shrubs are just one part of the forest ecosystem. Every living creature, from the tiniest spider to the largest carnivorous mammal, is linked through the food chain, and dependent on the plants, fungi and bacteria in the forest. All these living elements are connected to and affected by the physical environment such as rainfall, wildfire, and temperature. Any intervention by man, including silviculture, may have an effect on the forest ecosystem. Silviculture is the practice (note: not study) of managing the establishment, growth, composition, health, quality, and outputs of forests to meet diverse needs and values. Silviculture has been described as an art and science; the latter is sometimes described as ‘applied forest ecology’. Silvology is a more appropriate term. Silvicultural systems are designed to ensure sustainable forest management, which is defined formally as: the stewardship and use of forests and forest lands in a way, and at a rate, that maintains their biodiversity, productivity, regeneration capacity, vitality and their potential to fulfil, now and in the future, relevant ecological, economic and social functions, at local, national, and global levels, and that does not cause damage to other ecosystems (MCPFE). In contrast to other disciplines, no terminology has developed to distinguish the practice of silviculture from its scientific counterpart. For example, agronomy for agricultural science and technology; dendrology for the study of trees; and ecology for the study of organisms and their interactions with environments. In a paper I wrote in 2018 with Jens Peter Skovsgaard, we argue that silvology is the appropriate term for the scientific discipline dealing with such activities and consequently a uniting term for qualitative and quantitative aspects of forest ecology and the practice of silviculture. Hemery, G., and J.P. Skovsgaard. 2018. “Silvology: Redefining the Biological Science for the Study of Forests.” Quarterly Journal of Forestry 112 (April) (2): 128–31. Accessible here.
DROUGHT - GLOBAL WARMING
- The consequences of global warming currently affecting the water resources of hydrological basins are:
- Diminishing water quality
- Increasing temperature
- Climate change will increase the fresh water gap between the have and have-not regions, as shown in the world map
- Forest fires:
- 3,981 fires were registered in 2018 from January through August
- 629,531 acres were destroyed because of drought and high temperatures
- 5 times larger than the five-year average (128,578 acres)
CAPE TOWN DROUGHT
- Cape Town, South Africa
- 3.8 million residents
- Restricted to 50 liters a day per person
- 7 times less than the average U.S. citizen uses
- Daily water usage based on the Cape Town government calculation for the Think Water campaign (in liters)
(Table columns: Shower (2 min), Toilet + Hygiene, Drinking + Cooking, Dishes + Cleaning, Total; the per-category figures were not preserved.)
- Abnormal dryness or drought is currently affecting approximately 28,239,000 people in California, which is about 76% of the state's population.
- The town of Paskenta (near Sacramento), with a population of 112, ran out of water because its creek dried up.
- The town of Davenport (near Santa Cruz), with a population of 408, ran out of water because of extreme weather and pipe damage.
- In East Porterville (near Los Angeles), with a population of 5,000, the Office of Emergency Services connected storage tanks (9,000 L capacity) to each home's water line
- The current urban population is 55%
- The urban population is expected to reach 68% by 2050
- The trend in available urban water per capita is the inverse of population growth
- Water demand is expected to increase by 50% within 30 years
- 2.1 trillion gallons of purified drinking water are lost annually because of aging infrastructure in the U.S.
- Drinking water and wastewater plants account for 30 to 40 percent of total energy consumed
GLOBAL DROUGHT UPDATE
What Climate Models Get Wrong About Future Water Availability
By Emily Underwood, April 5th, 2019
An article summarizing new and old studies predicting rainfall around the globe, and showing where the models developed in each study agree and where they diverge.
‘Why are you crying, Mami?’ In Venezuela, the search for water is a daily struggle.
By Arelis R. Hernández and Mariana Zuñiga, April 4th, 2019
Around two thirds of the population of Venezuela have suffered shortages or lost water completely. People are searching everywhere for whatever water is available, including urban wells, and diseases such as diarrhea and typhoid fever are surging. All of this is because of the power outages affecting the pumps of the public water system.
Who keeps buying California's scarce water? Saudi Arabia
By Lauren Markham, March 25th, 2019
A Saudi food production company is farming a water-intensive crop in Blythe, California, to feed its cows in Saudi Arabia, putting further pressure on the state's already strained water resources. This article highlights how the state is managing its water resources in Blythe, the farmers' reaction, and how these farms are affecting the current water scarcity situation in that area.
England could run short of water within 25 years
By Damian Carrington, March 18th, 2019
This article highlights the fact that England is approaching the point where water demand from the country's rising population surpasses the falling supply resulting from climate change. In order to avoid this situation, people have to cut water use by a third, water companies have to reduce leakage from their pipes by 50%, and additional reservoirs and desalination plants should be built.
A-t-on assez d’eau pour nourrir la planète ? (Do we have enough water to feed the planet?) An article drawing attention to the water problem facing the food production industry. The article also highlights the causes of this deficit and suggests solutions to overcome the shortfall and to meet the future needs of the planet's growing population.
Photovoltaic solar panels absorb sunlight as a source of energy to generate direct current electricity. A photovoltaic (PV) module is a packaged, connected assembly of photovoltaic solar cells available in different voltages and wattages. Photovoltaic modules constitute the photovoltaic array of a photovoltaic system that generates and supplies solar electricity in commercial and residential applications. - 1 Theory and construction - 2 History - 3 Efficiencies - 4 Technology - 5 Smart solar modules - 6 Performance and degradation - 7 Maintenance - 8 Recycling - 9 Production - 10 Price - 11 Mounting and tracking - 12 Standards - 13 Connectors - 14 Applications - 15 Limitations - 16 Gallery - 17 See also - 18 References Theory and constructionEdit Photovoltaic modules use light energy (photons) from the Sun to generate electricity through the photovoltaic effect. Most modules use wafer-based crystalline silicon cells or thin-film cells. The structural (load carrying) member of a module can be either the top layer or the back layer. Cells must be protected from mechanical damage and moisture. Most modules are rigid, but semi-flexible ones based on thin-film cells are also available. The cells are connected electrically in series, one to another to a desired voltage, and then in parallel to increase amperage. The wattage of the module is the mathematical product of the voltage and the amperage of the module. A PV junction box is attached to the back of the solar panel and functions as its output interface. External connections for most photovoltaic modules use MC4 connectors to facilitate easy weatherproof connections to the rest of the system. Also, a USB power interface can be used. Module electrical connections are made in series to achieve a desired output voltage or in parallel to provide a desired current capability (amperes) of the solar panel or the PV system. The conducting wires that take the current off the modules are sized according to the ampacity and may contain silver, copper or other non-magnetic conductive transition metals. Bypass diodes may be incorporated or used externally, in case of partial module shading, to maximize the output of module sections still illuminated. Some special solar PV modules include concentrators in which light is focused by lenses or mirrors onto smaller cells. This enables the use of cells with a high cost per unit area (such as gallium arsenide) in a cost-effective way. Solar panels also use metal frames consisting of racking components, brackets, reflector shapes, and troughs to better support the panel structure. In 1839, the ability of some materials to create an electrical charge from light exposure was first observed by Alexandre-Edmond Becquerel. Though the premiere solar panels were too inefficient for even simple electric devices they were used as an instrument to measure light. The observation by Becquerel was not replicated again until 1873, when Willoughby Smith discovered that the charge could be caused by light hitting selenium. After this discovery, William Grylls Adams and Richard Evans Day published "The action of light on selenium" in 1876, describing the experiment they used to replicate Smith's results. In 1881, Charles Fritts created the first commercial solar panel, which was reported by Fritts as "continuous, constant and of considerable force not only by exposure to sunlight but also to dim, diffused daylight." However, these solar panels were very inefficient, especially compared to coal-fired power plants. 
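The series and parallel wiring relationships described in the theory and construction discussion above reduce to simple arithmetic: series connections add voltage, parallel strings add current, and wattage is the product of the two. The sketch below works through a hypothetical array; the module ratings are example values, not taken from any datasheet.

```python
# How series/parallel wiring sets an array's voltage, current and wattage.
# The module ratings (30 V, 8 A) are hypothetical example values.

def array_output(v_module: float, i_module: float,
                 modules_in_series: int, parallel_strings: int):
    """Return (voltage, current, power) for an array of identical modules."""
    voltage = v_module * modules_in_series   # series connections add voltage
    current = i_module * parallel_strings    # parallel strings add current
    power = voltage * current                # wattage = voltage x amperage
    return voltage, current, power

if __name__ == "__main__":
    v, i, p = array_output(v_module=30.0, i_module=8.0,
                           modules_in_series=10, parallel_strings=2)
    print(f"{v:.0f} V, {i:.0f} A, {p / 1000:.1f} kW")  # 300 V, 16 A, 4.8 kW
```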
In 1939, Russell Ohl created the solar cell design that is used in many modern solar panels. He patented his design in 1941. In 1954, this design was first used by Bell Labs to create the first commercially viable silicon solar cell. In 1957, Mohamed M. Atalla developed the process of silicon surface passivation by thermal oxidation at Bell Labs. The surface passivation process has since been critical to solar cell efficiency. Each module is rated by its DC output power under standard test conditions (STC). Power typically ranges from 100 to 365 Watts (W). The efficiency of a module determines the area of a module given the same rated output – an 8% efficient 230 W module will have twice the area of a 16% efficient 230 W module. Some commercially available solar modules exceed 24% efficiency. Depending on construction, photovoltaic modules can produce electricity from a range of frequencies of light, but usually cannot cover the entire solar range (specifically, ultraviolet, infrared and low or diffused light). Hence, much of the incident sunlight energy is wasted by solar modules, and they can give far higher efficiencies if illuminated with monochromatic light. Therefore, another design concept is to split the light into six to eight different wavelength ranges that will produce a different color of light, and direct the beams onto different cells tuned to those ranges. This has been projected to be capable of raising efficiency by 50%. A single solar module can produce only a limited amount of power; most installations contain multiple modules adding voltages or current to the wiring and PV system. A photovoltaic system typically includes an array of photovoltaic modules, an inverter, a battery pack for energy storage, charge controller, interconnection wiring, circuit breakers, fuses, disconnect switches, voltage meters, and optionally a solar tracking mechanism. Equipment is carefully selected to optimize output, energy storage, reduce power loss during power transmission, and conversion from direct current to alternating current. Scientists from Spectrolab, a subsidiary of Boeing, have reported development of multi-junction solar cells with an efficiency of more than 40%, a new world record for solar photovoltaic cells. The Spectrolab scientists also predict that concentrator solar cells could achieve efficiencies of more than 45% or even 50% in the future, with theoretical efficiencies being about 58% in cells with more than three junctions. Currently, the best achieved sunlight conversion rate (solar module efficiency) is around 21.5% in new commercial products typically lower than the efficiencies of their cells in isolation. The most efficient mass-produced solar modules[disputed ] have power density values of up to 175 W/m2 (16.22 W/ft2). Research by Imperial College, London has shown that solar panel efficiency is improved by studding the light-receiving semiconductor surface with aluminum nanocylinders, similar to the ridges on Lego blocks. The scattered light then travels along a longer path in the semiconductor, absorbing more photons to be converted into current. Although these nanocylinders have been used previously (aluminum was preceded by gold and silver), the light scattering occurred in the near infrared region and visible light was absorbed strongly. Aluminum was found to have absorbed the ultraviolet part of the spectrum, while the visible and near infrared parts of the spectrum were found to be scattered by the aluminum surface. 
This, the research argued, could bring down the cost significantly and improve the efficiency, as aluminum is more abundant and less costly than gold and silver. The research also noted that the increase in current makes thinner film solar panels technically feasible without "compromising power conversion efficiencies, thus reducing material consumption".
- Solar panel efficiency can be calculated from the MPP (maximum power point) value of the solar panel.
- Solar inverters convert the DC power to AC power by performing the process of maximum power point tracking (MPPT): the solar inverter samples the output power (I-V curve) from the solar cells and applies the proper resistance (load) to the solar cells to obtain maximum power.
- The MPP (maximum power point) of the solar panel consists of the MPP voltage (Vmpp) and the MPP current (Impp); together they characterize the capacity of the solar panel, and higher values correspond to a higher MPP.
Micro-inverted solar panels are wired in parallel, which produces more output than normal panels wired in series, where the output of the series is determined by the lowest performing panel. This is known as the "Christmas light effect". Micro-inverters work independently to enable each panel to contribute its maximum possible output for a given amount of sunlight.
Most solar modules are currently produced from crystalline silicon (c-Si) solar cells made of multicrystalline and monocrystalline silicon. In 2013, crystalline silicon accounted for more than 90 percent of worldwide PV production, while the rest of the overall market is made up of thin-film technologies using cadmium telluride, CIGS and amorphous silicon. Emerging, third-generation solar technologies use advanced thin-film cells. They offer relatively high conversion efficiency at low cost compared to other solar technologies. Also, high-cost, high-efficiency, and close-packed rectangular multi-junction (MJ) cells are preferably used in solar panels on spacecraft, as they offer the highest ratio of generated power per kilogram lifted into space. MJ cells are compound semiconductors, made of gallium arsenide (GaAs) and other semiconductor materials. Another emerging PV technology using MJ cells is concentrator photovoltaics (CPV).
In rigid thin-film modules, the cell and the module are manufactured in the same production line. The cell is created on a glass substrate or superstrate, and the electrical connections are created in situ, a so-called "monolithic integration". The substrate or superstrate is laminated with an encapsulant to a front or back sheet, usually another sheet of glass. The main cell technologies in this category are CdTe, or a-Si, or a-Si+uc-Si tandem, or CIGS (or a variant). Amorphous silicon has a sunlight conversion rate of 6–12%.
Flexible thin-film cells and modules are created on the same production line by depositing the photoactive layer and other necessary layers on a flexible substrate. If the substrate is an insulator (e.g. polyester or polyimide film) then monolithic integration can be used. If it is a conductor then another technique for electrical connection must be used. The cells are assembled into modules by laminating them to a transparent colourless fluoropolymer on the front side (typically ETFE or FEP) and a polymer suitable for bonding to the final substrate on the other side.
Smart solar modules
Several companies have begun embedding electronics into PV modules.
This enables performing MPPT for each module individually, and the measurement of performance data for monitoring and fault detection at module level. Some of these solutions make use of power optimizers, a DC-to-DC converter technology developed to maximize the power harvest from solar photovoltaic systems. As of about 2010, such electronics can also compensate for shading effects, wherein a shadow falling across a section of a module causes the electrical output of one or more strings of cells in the module to fall to zero, without the output of the entire module falling to zero.
Performance and degradation
Module performance is generally rated under standard test conditions (STC): irradiance of 1,000 W/m2, solar spectrum of AM 1.5 and module temperature at 25°C. The actual voltage and current output of the module change as lighting, temperature and load conditions change, so there is never one specific voltage, current, or wattage at which the module operates. Performance varies depending on time of day, amount of solar insolation, direction and tilt of modules, cloud cover, shading, temperature, geographic location, and day of the year. For optimum performance, a solar panel needs to be made of similar modules oriented in the same direction, perpendicular to direct sunlight. The path of the sun varies by latitude and day of the year and can be studied using a sundial or a sunchart and tracked using a solar tracker. Differences in voltage or current between modules may affect the overall performance of a panel. Bypass diodes are used to circumvent broken or shaded panels to optimize output.
Electrical characteristics include nominal power (PMAX, measured in W), open circuit voltage (VOC), short circuit current (ISC, measured in amperes), maximum power voltage (VMPP), maximum power current (IMPP), peak power (watt-peak, Wp), and module efficiency (%). Nominal voltage refers to the voltage of the battery that the module is best suited to charge; this is a leftover term from the days when solar modules were only used to charge batteries. Nominal voltage allows users, at a glance, to make sure the module is compatible with a given system. Open circuit voltage, or VOC, is the maximum voltage that the module can produce when not connected to an electrical circuit or system. VOC can be measured with a voltmeter directly on an illuminated module's terminals or on its disconnected cable. The peak power rating, Wp, is the maximum output under standard test conditions (not the maximum possible output). Typical modules, which could measure approximately 1 by 2 metres (3 ft × 7 ft), will be rated from as low as 75 W to as high as 350 W, depending on their efficiency. At the time of testing, the test modules are binned according to their test results, and a typical manufacturer might rate their modules in 5 W increments, and either rate them at +/- 3%, +/-5%, +3/-0% or +5/-0%.
The ability of solar modules to withstand damage by rain, hail, heavy snow load, and cycles of heat and cold varies by manufacturer, although most solar panels on the U.S. market are UL listed, meaning they have gone through testing to withstand hail. Many crystalline silicon module manufacturers offer a limited warranty that guarantees electrical production for 10 years at 90% of rated power output and 25 years at 80%.
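The datasheet quantities listed above can be tied together numerically: peak power is the product of VMPP and IMPP, and module efficiency follows from that power, the module area, and the 1,000 W/m2 STC irradiance. The sketch below uses hypothetical values, loosely typical of a modern crystalline-silicon module, and also reports the fill factor, a standard figure of merit not mentioned in the text above.

```python
# Relating the module datasheet quantities described above.
# The example numbers are hypothetical, not from any specific datasheet.

STC_IRRADIANCE = 1000.0  # W/m^2, the standard test condition irradiance

def module_figures(v_oc, i_sc, v_mpp, i_mpp, area_m2):
    p_max = v_mpp * i_mpp                            # peak power (Wp)
    fill_factor = p_max / (v_oc * i_sc)              # common figure of merit
    efficiency = p_max / (area_m2 * STC_IRRADIANCE)  # module efficiency at STC
    return p_max, fill_factor, efficiency

if __name__ == "__main__":
    p, ff, eff = module_figures(v_oc=40.0, i_sc=9.8,
                                v_mpp=33.0, i_mpp=9.2, area_m2=1.65)
    print(f"P_max = {p:.0f} Wp, fill factor = {ff:.2f}, efficiency = {eff:.1%}")
    # Roughly 304 Wp, fill factor ~0.77, efficiency ~18%
```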
Potential induced degradation (also called PID) is a potential induced performance degradation in crystalline photovoltaic modules, caused by so-called stray currents. This effect may cause power loss of up to 30%. The largest challenge for photovoltaic technology is said to be the purchase price per watt of electricity produced. New materials and manufacturing techniques continue to improve the price to power performance. The problem resides in the enormous activation energy that must be overcome for a photon to excite an electron for harvesting purposes. Advancements in photovoltaic technologies have brought about the process of "doping" the silicon substrate to lower the activation energy thereby making the panel more efficient in converting photons to retrievable electrons. Chemicals such as boron (p-type) are applied into the semiconductor crystal in order to create donor and acceptor energy levels substantially closer to the valence and conductor bands. In doing so, the addition of boron impurity allows the activation energy to decrease 20 fold from 1.12 eV to 0.05 eV. Since the potential difference (EB) is so low, the boron is able to thermally ionize at room temperatures. This allows for free energy carriers in the conduction and valence bands thereby allowing greater conversion of photons to electrons. Solar panel conversion efficiency, typically in the 20% range, is reduced by dust, grime, pollen, and other particulates that accumulate on the solar panel. "A dirty solar panel can reduce its power capabilities by up to 30% in high dust/pollen or desert areas", says Seamus Curran, associate professor of physics at the University of Houston and director of the Institute for NanoEnergy, which specializes in the design, engineering, and assembly of nanostructures. Paying to have solar panels cleaned is often not a good investment; researchers found panels that had not been cleaned, or rained on, for 145 days during a summer drought in California, lost only 7.4% of their efficiency. Overall, for a typical residential solar system of 5 kW, washing panels halfway through the summer would translate into a mere $20 gain in electricity production until the summer drought ends—in about 2 ½ months. For larger commercial rooftop systems, the financial losses are bigger but still rarely enough to warrant the cost of washing the panels. On average, panels lost a little less than 0.05% of their overall efficiency per day. There may also be occupational hazards of solar panel installation and maintenance. Most parts of a solar module can be recycled including up to 95% of certain semiconductor materials or the glass as well as large amounts of ferrous and non-ferrous metals. Some private companies and non-profit organizations are currently engaged in take-back and recycling operations for end-of-life modules. Recycling possibilities depend on the kind of technology used in the modules: - Silicon based modules: aluminum frames and junction boxes are dismantled manually at the beginning of the process. The module is then crushed in a mill and the different fractions are separated - glass, plastics and metals. It is possible to recover more than 80% of the incoming weight. This process can be performed by flat glass recyclers since morphology and composition of a PV module is similar to those flat glasses used in the building and automotive industry. The recovered glass for example is readily accepted by the glass foam and glass insulation industry. 
- Non-silicon based modules: they require specific recycling technologies such as the use of chemical baths in order to separate the different semiconductor materials. For cadmium telluride modules, the recycling process begins by crushing the module and subsequently separating the different fractions. This recycling process is designed to recover up to 90% of the glass and 95% of the semiconductor materials contained. Some commercial-scale recycling facilities have been created in recent years by private companies.
For aluminium flat-plate reflectors: reflectors can be fabricated using the thin layer (around 0.016 mm to 0.024 mm) of aluminium coating found inside non-recycled plastic food packaging.
(Table: top module producers and their shipments in 2014, in MW; the individual figures were not preserved.)
In 2010, 15.9 GW of solar PV system installations were completed, with solar PV pricing survey and market research company PVinsights reporting growth of 117.8% in solar PV installation on a year-on-year basis. With over 100% year-on-year growth in PV system installation, PV module makers dramatically increased their shipments of solar modules in 2010. They actively expanded their capacity and turned themselves into gigawatt (GW) players. According to PVinsights, five of the top ten PV module companies in 2010 were GW players. Suntech, First Solar, Sharp, Yingli and Trina Solar were GW producers, and most of them doubled their shipments in 2010.
The basis of producing solar panels revolves around the use of silicon cells. These silicon cells are typically 10-20% efficient at converting sunlight into electricity, with newer production models now exceeding 22%. In order for solar panels to become more efficient, researchers across the world have been trying to develop new technologies to make solar panels more effective at turning sunlight into energy.
The price of solar electrical power has continued to fall, so that in many countries it has become cheaper than ordinary fossil fuel electricity from the electricity grid since 2012, a phenomenon known as grid parity. Average pricing information divides into three pricing categories: those buying small quantities (modules of all sizes in the kilowatt range annually), mid-range buyers (typically up to 10 MWp annually), and large quantity buyers (self-explanatory, and with access to the lowest prices). Over the long term there is clearly a systematic reduction in the price of cells and modules. For example, in 2012 it was estimated that the quantity cost per watt was about US$0.60, which was 250 times lower than the cost in 1970 of US$150. A 2015 study shows price/kWh dropping by 10% per year since 1980, and predicts that solar could contribute 20% of total electricity consumption by 2030, whereas the International Energy Agency predicts 16% by 2050.
Real-world energy production costs depend a great deal on local weather conditions. In a cloudy country such as the United Kingdom, the cost per produced kWh is higher than in sunnier countries like Spain. According to the U.S. Energy Information Administration, prices per megawatt-hour are expected to converge and reach parity with conventional energy production sources during the period 2020-2030. According to the EIA, this parity can be achieved without the need for subsidy support and can be accomplished through organic market mechanisms, namely production price reduction and technological advancement.
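The price figures quoted above lend themselves to two quick checks: the 1970-to-2012 cost-per-watt ratio, and what a steady 10% annual decline (reported above for price per kWh) implies if extrapolated naively. The sketch below is illustrative arithmetic only, not a forecast.

```python
# Quick arithmetic on the price figures quoted above.

cost_per_watt_1970 = 150.00   # US$/W (from the text)
cost_per_watt_2012 = 0.60     # US$/W (from the text)
print(f"1970/2012 ratio: {cost_per_watt_1970 / cost_per_watt_2012:.0f}x")  # 250x

def after_decline(start: float, annual_rate: float, years: int) -> float:
    """Value after compounding a constant fractional decline each year."""
    return start * (1.0 - annual_rate) ** years

# Naive extrapolation of the 10%-per-year decline, applied to a normalized
# starting price of 1.0 purely for illustration.
for years in (5, 10, 20):
    print(f"After {years} years: {after_decline(1.0, 0.10, years):.2f}x the starting price")
```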
According to RMI, balance-of-system (BoS) elements, that is, the non-module costs of a non-microinverter solar installation (such as wiring, converters, racking systems and various components), make up about half of the total cost of installations.
For merchant solar power stations, where the electricity is being sold into the electricity transmission network, the cost of solar energy will need to match the wholesale electricity price. This point is sometimes called 'wholesale grid parity' or 'busbar parity'.
Some photovoltaic systems, such as rooftop installations, can supply power directly to an electricity user. In these cases, the installation can be competitive when the output cost matches the price the user pays for electricity. This situation is sometimes called 'retail grid parity', 'socket parity' or 'dynamic grid parity'. Research carried out by UN-Energy in 2012 suggests areas of sunny countries with high electricity prices, such as Italy, Spain and Australia, and areas using diesel generators, have reached retail grid parity.
Mounting and tracking
Ground-mounted photovoltaic systems are usually large, utility-scale solar power plants. Their solar modules are held in place by racks or frames that are attached to ground-based mounting supports. Ground-based mounting supports include:
- Pole mounts, which are driven directly into the ground or embedded in concrete.
- Foundation mounts, such as concrete slabs or poured footings.
- Ballasted footing mounts, such as concrete or steel bases that use weight to secure the solar module system in position and do not require ground penetration. This type of mounting system is well suited for sites where excavation is not possible, such as capped landfills, and it simplifies decommissioning or relocation of solar module systems.
Roof-mounted solar power systems consist of solar modules held in place by racks or frames attached to roof-based mounting supports. Roof-based mounting supports include:
- Rail mounts, which are attached directly to the roof structure and may use additional rails for attaching the module racking or frames.
- Ballasted footing mounts, such as concrete or steel bases that use weight to secure the panel system in position and do not require through penetration. This mounting method allows for decommissioning or relocation of solar panel systems with no adverse effect on the roof structure.
- All wiring connecting adjacent solar modules to the energy harvesting equipment must be installed according to local electrical codes and should be run in a conduit appropriate for the climate conditions.
Solar trackers increase the amount of energy produced per module at the cost of mechanical complexity and the need for maintenance. They sense the direction of the Sun and tilt or rotate the modules as needed for maximum exposure to the light. Alternatively, fixed racks hold modules stationary as the sun moves across the sky. The fixed rack sets the angle at which the module is held. Tilt angles equivalent to an installation's latitude are common. Most of these fixed racks are set on poles above ground. Panels that face west or east may provide slightly lower energy, but they even out the supply and may provide more power during peak demand.
Standards generally used in photovoltaic modules:
- IEC 61215 (crystalline silicon performance), 61646 (thin film performance) and 61730 (all modules, safety)
- ISO 9488 Solar energy—Vocabulary.
- UL 1703 from Underwriters Laboratories - UL 1741 from Underwriters Laboratories - UL 2703 from Underwriters Laboratories - CE mark - Electrical Safety Tester (EST) Series (EST-460, EST-22V, EST-22H, EST-110). Outdoor solar panels usually includes MC4 connectors. Automotive solar panels also can include car lighter and USB adapter. Indoor panels (including solar pv glasses, thin films and windows) can integrate microinverter (AC Solar panels). There are many practical applications for the use of solar panels or photovoltaics. It can first be used in agriculture as a power source for irrigation. In health care solar panels can be used to refrigerate medical supplies. It can also be used for infrastructure. PV modules are used in photovoltaic systems and include a large variety of electric devices: Pollution and energy in productionEdit Solar panel has been a well-known method of generating clean, emission free electricity. However, it produces only direct current electricity (DC), which is not what normal appliances use. Solar photovoltaic systems (solar PV systems) are often made of solar PV panels (modules) and inverter (changing DC to AC). Solar PV panels are mainly made of solar photovoltaic cells, which has no fundamental difference to the material for making computer chips. The process of producing solar PV cells (computer chips) is energy intensive and involves highly poisonous and environmental toxic chemicals. There are few solar PV manufacturing plants around the world producing PV modules with energy produced from PV. This measure greatly reduces the carbon footprint during the manufacturing process. Managing the chemicals used in the manufacturing process is subject to the factories' local laws and regulations. Impact on electricity networkEdit With the increasing levels of rooftop photovoltaic systems, the energy flow becomes 2-way. When there is more local generation than consumption, electricity is exported to the grid. However, electricity network traditionally is not designed to deal with the 2- way energy transfer. Therefore, some technical issues may occur. For example in Queensland Australia, there have been more than 30% of households with rooftop PV by the end of 2017. The famous Californian 2020 duck curve appears very often for a lot of communities from 2015 onwards. An over-voltage issue may come out as the electricity flows from these PV households back to the network. There are solutions to manage the over voltage issue, such as regulating PV inverter power factor, new voltage and energy control equipment at electricity distributor level, re-conducting the electricity wires, demand side management, etc. There are often limitations and costs related to these solutions. When electric networks are down, such as during the October 2019 California power shutoff, solar panels are often insufficient to fully provide power to a house or other structure, because they are designed to supply power to the grid, not directly to homes. Implication onto electricity bill management and energy investmentEdit There is no silver bullet in electricity or energy demand and bill management, because customers (sites) have different specific situations, e.g. different comfort/convenience needs, different electricity tariffs, or different usage patterns. Electricity tariff may have a few elements, such as daily access and metering charge, energy charge (based on kWh, MWh) or peak demand charge (e.g. a price for the highest 30min energy consumption in a month). 
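To make these tariff elements concrete, the sketch below combines a daily access charge, an energy charge, and a peak demand charge into one monthly bill; all rates and usage figures are hypothetical examples, not taken from any actual tariff.

```python
# Illustrative monthly bill under a tariff with the three elements described
# above. All rates and usage figures are hypothetical examples.

def monthly_bill(days: int, kwh_used: float, peak_kw: float,
                 access_per_day: float = 1.00,   # $/day access and metering
                 energy_rate: float = 0.25,      # $/kWh energy charge
                 demand_rate: float = 12.00      # $/kW of highest demand
                 ) -> float:
    access = days * access_per_day
    energy = kwh_used * energy_rate
    demand = peak_kw * demand_rate   # based on the single highest reading
    return access + energy + demand

if __name__ == "__main__":
    # A household using 600 kWh with a 5 kW evening peak over a 30-day month.
    print(f"Bill: ${monthly_bill(30, 600, 5.0):.2f}")                        # $240.00
    # Rooftop PV can cut the energy term, but an evening peak is barely affected.
    print(f"With PV offsetting 300 kWh: ${monthly_bill(30, 300, 5.0):.2f}")  # $165.00
```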
PV is a promising option for reducing the energy charge when the electricity price is reasonably high and continuously increasing, such as in Australia and Germany. However, for sites with a peak demand charge in place, PV may be less attractive if peak demand mostly occurs in the late afternoon to early evening, when PV output is low, as in many residential communities. Overall, energy investment is largely an economic decision, and it is better to make investment decisions based on a systematic evaluation of options in operational improvement, energy efficiency, onsite generation and energy storage.
- Battery (electricity)
- Daisy chain (electrical engineering)
- Digital modeling and fabrication
- Domestic energy consumption
- Grid-tied electrical system
- Growth of photovoltaics
- List of photovoltaics companies
- MC4 connector
- Rooftop photovoltaic power station
- Sky footage
- Solar charger
- Solar cooker
- Solar oven
- Solar roadway
- Solar still
- Morgan Baziliana; et al. (17 May 2012). Re-considering the economics of photovoltaic power. UN-Energy (Report). United Nations. Archived from the original on 16 May 2016. Retrieved 20 November 2012. - ENF Ltd. (8 January 2013). "Small Chinese Solar Manufacturers Decimated in 2012 | Solar PV Business News | ENF Company Directory". Enfsolar.com. Retrieved 29 August 2013. - Harnessing Light. National Research Council. 1997. p. 162. - Farmer, J. Doyne; Lafond, François (2016). "How predictable is technological progress?". Research Policy. 45 (3): 647–65. arXiv:1502.05274. doi:10.1016/j.respol.2015.11.001. - MacDonald, A. E., Clack, C. T., Alexander, A., Dunbar, A., Wilczak, J., & Xie, Y. (2016). Future cost-competitive electricity systems and their impact on US CO 2 emissions. Nature Climate Change, 6(5), 526. - MacDonald, A. E., Clack, C. T., Alexander, A., Dunbar, A., Wilczak, J., & Xie, Y. (2016). Future cost-competitive electricity systems and their impact on US CO 2 emissions. Nature Climate Change, 6(5), 526. - "Solar Photovoltaics competing in the energy sector – On the road to competitiveness" (PDF). EPIA. Archived from the original (PDF) on 26 February 2013. Retrieved 1 August 2012. - SolarProfessional.com Ground-Mount PV Racking Systems March 2013 - Massachusetts Department of Energy Resources Ground-Mounted Solar Photovoltaic Systems, December 2012 - "A Guide To Photovoltaic System Design And Installation". ecodiy.org. Retrieved 26 July 2011. - Shingleton, J. "One-Axis Trackers – Improved Reliability, Durability, Performance, and Cost Reduction" (PDF). National Renewable Energy Laboratory. Retrieved 30 December 2012. - Mousazadeh, Hossain; et al. "A review of principle and sun-tracking methods for maximizing" (PDF). Renewable and Sustainable Energy Reviews 13 (2009) 1800–1818. Elsevier. Retrieved 30 December 2012. - "Optimum Tilt of Solar Panels". MACS Lab. Retrieved 19 October 2014. - Perry, Keith (28 July 2014). "Most solar panels are facing the wrong direction, say scientists". The Daily Telegraph. Retrieved 9 September 2018. - Miller, Wendy; Liu, Aaron; Amin, Zakaria; Wagner, Andreas (2018). "Power Quality and Rooftop-Photovoltaic Households: An Examination of Measured Data at Point of Customer Connection". Sustainability. 10 (4): 1224. doi:10.3390/su10041224. - Martin, Chris (10 October 2019). "Californians Learning That Solar Panels Don't Work in Blackouts". Bloomberg. New York NY: Bloomberg LP. - L. Liu, W. Miller, and G. Ledwich. (2017) Solutions for reducing facilities electricity costs. Australian Ageing Agenda. 39-40. Available: https://www.australianageingagenda.com.au/2017/10/27/solutions-reducing-facility-electricity-costs/ - Miller, Wendy; Liu, Lei Aaron; Amin, Zakaria; Gray, Matthew (2018). "Involving occupants in net-zero-energy solar housing retrofits: An Australian sub-tropical case study". Solar Energy. 159: 390–404. doi:10.1016/j.solener.2017.10.008. |Wikimedia Commons has media related to Photovoltaics.|
Antimatter is rare in this Universe, but the Universe is a pretty big place, so even small quantities can add up fast. In our galaxy alone, there's a steady bath of radiation that indicates positrons are constantly running into their electron anti-partners and annihilating them. Over something the size of a galaxy, that means there are lots of positrons around. Estimates have it that 9.1 trillion kilograms of antimatter are being destroyed each second.

Where's it all coming from? We don't really know, but candidates have included everything from dark matter particles to supermassive black holes. A new paper suggests a relatively unexciting source: a specific class of supernova that produces lots of radioactive titanium, which decays by releasing a positron.

While positrons are produced by radioactivity here on Earth, they run into normal electrons almost instantly, a collision that annihilates both and releases energetic photons. The interstellar material in space is so sparse, however, that positrons are thought to typically travel for over 100,000 years before running into anything. That's long enough to blur out any individual sources and turn a single burst of positron production into a slow background of annihilations. So even if there are objects that produce positrons, we'd have a hard time spotting them.

There seemed to be an excess of positrons near the bulge at the center of our galaxy. Since the bulge has fewer of our galaxy's stars than the galactic disk, that implied that stars probably weren't involved in their production. That's one of the reasons dark matter annihilations seemed to be an appealing explanation. But the photons that reach us from the annihilations don't have much additional energy beyond that produced by the annihilation itself. This implies that the positrons are relatively low-energy, which would seem to rule out dark matter collisions, as well as a variety of other exotic sources.

But, according to the authors of the new paper, the ESA's INTEGRAL mission suggested there are more positrons coming from the disk than we thought. And lots of the positrons coming from the galactic bulge appear to be generated at its central black hole. So it appears that the production of positrons by the disk and bulge is roughly in proportion to the number of stars there. A stellar source is back on the table.

How can a star that's filled with matter start producing antimatter? By blowing up. Supernovae produce lots of heavier elements, some of which are radioactive. And certain types of radioactive decay release positrons. In fact, three different elements known to be produced in supernovae can do so: 56Ni, 44Ti, and 26Al.

Of these, 56Ni has the shortest half-life, only six days. Because of this rapid decay, however, most of its positrons are released while the supernova debris is still relatively dense. As a result, they end up annihilated within the debris, contributing to the brightness of the supernova. So that's off the table. 26Al, in contrast, has a half-life of over 10,000 years, enough to get it well clear of the supernova. Because it typically decays away from other radiation sources, we can actually detect the photons produced as it decays, which gives us a measure of how many positrons must be produced. The number we get is only 10 percent of the positron annihilation rate, so aluminum is off the table as well. That leaves us with 44Ti, which has a half-life of 60 years.
That's long enough for supernova debris to thin out before the positrons are produced. And we can spot its decay in the debris of supernovae produced by collapsed stars. Again, however, we can track its decay using the photons released, and there isn't enough titanium in these supernova remnants to account for the galaxy's positrons. It would appear titanium has been eliminated as well.

But the authors find a way to put it back in play. There's a relatively rare class of supernova (called SN 1991bg-like) that can produce unusually high amounts of titanium. These supernovae occur when two moderate-sized stars end up close enough to share a common envelope. One ends up with enough material to turn into a carbon-oxygen white dwarf; the second ends up as a ball of nearly pure helium. When the two stars later collide, models suggest that the resulting explosion should produce a lot of intermediate-mass atoms, like calcium, chromium, and, critically, titanium.

We've seen titanium decay in the aftermath of these explosions, and it's estimated that the rate of these explosions is enough to supply a steady stream of radioactive titanium. The key question is whether it's enough to provide the 90 percent of the positrons that aren't produced by aluminum. Based on simplified models of fusion reaction chains, it could. But the authors call for people who run sophisticated supernova models to run simulations of this particular type of explosion.

This isn't the final word on our galaxy's antimatter, but it's a plausible explanation that's amenable to future testing, through both computer models and observations. In the meantime, an explanation involving physics we already know about has got to be preferred over anything that's purely theoretical.
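A quick way to see why the half-lives matter is to work out what fraction of each isotope has decayed once the debris has had some time to thin out. The sketch below is only an illustration of the decay arithmetic implied by the half-lives quoted above; the ten-year timescale is an assumption chosen for illustration, not a figure from the paper.

```python
# Half-lives quoted in the article (56Ni in days; 26Al given as "over 10,000 years",
# used here as a lower bound).
half_lives_years = {
    "56Ni": 6 / 365.25,
    "44Ti": 60.0,
    "26Al": 10_000.0,
}

def fraction_decayed(half_life_years, elapsed_years):
    """Fraction of the original isotope that has decayed after `elapsed_years`."""
    remaining = 0.5 ** (elapsed_years / half_life_years)
    return 1.0 - remaining

elapsed = 10.0  # hypothetical time for the supernova debris to thin out
for isotope, t_half in half_lives_years.items():
    print(f"{isotope}: {fraction_decayed(t_half, elapsed):.1%} decayed after {elapsed:.0f} years")
```

With these numbers, essentially all of the 56Ni has decayed while the debris is still dense, only a tiny fraction of the 26Al has decayed at all, and 44Ti sits in between: slow enough to wait out the debris, fast enough to supply positrons in quantity.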
Cleft lip is a vertical split in the upper lip. It can be partial or reach to the base of the nose (which sometimes looks flat). Sometimes the cleft continues into the upper gums. A cleft that extends from the teeth to the nasal cavity is called a "cleft palate". A cleft palate usually occurs as an extension of a cleft lip, but in some cases it can appear as a separate malformation. A split in the palate makes it difficult to eat and swallow.

About one in every 1,000 children is born with one or both malformations. Sometimes the disorder is inherited, and it sometimes affects several children in the same family. In most cases, the causes of both conditions are unknown. In rare cases, however, the condition is a consequence of chromosomal abnormalities.

If not treated, a cleft lip can cause psychological distress because of its appearance, and a cleft palate causes severe speech problems. The operation is performed when the infant is older; until then, treatment varies according to the degree of malformation. Many children do not need any interim treatment because they eat and drink very well. In some cases, bottle-fed children will need a larger opening in the bottle or will have to be fed with a teaspoon. In severe cases of cleft palate, a special plate must be placed over the palate whenever the child eats. A special prosthesis can also be fitted to the upper gums if the gums in the cleft area are not aligned.

The cleft lip operation is performed when the infant reaches a weight of 4.5 kg (at about 12 weeks). A flat nose requires another, later surgery. The cleft palate operation is performed after the first year of life, before the child starts to speak. For each operation, the child receives general anesthesia and usually stays in the hospital for about a week. The results of the operation are usually excellent: the appearance is aesthetically improved and normal speech can develop. If, however, the child has speech problems, speech therapy (logopedics) can help.
Anger is one of the most dynamic and forceful emotions a human being can feel. In fact, it can, and does, move a person very powerfully. Anger is an emotional response quite common among humans. It is a reaction to a perceived threat towards oneself. It can also be elicited by a perceived threat towards people important to oneself, now or in the near future. This perceived threat may be real or imagined. The emotional response can arise from a perception (A Bit of Buddhist Psychology, 2003) of possible physical conflict, prejudice, carelessness, disgrace, or betrayal, among other provocations. Emotions are not good or bad; they are simply part of life, and it is what people choose to do with them that helps decide whether they are labeled positive or negative. If angry feelings are suppressed, they may never become recognized or named, and this could lead to mental health problems (Identifying Anger, n.d.). Anger turned inwards can also be the cause of many relationship problems. Moreover, anger may trigger anxiety, which in turn may anger a person further and cause varying complications (Cornelius, 1993, p. 128).
In this section
Download the asthma care plan (PDF).
A nebulizer is a type of inhaler that sprays a fine, liquid mist of medication. The medication is delivered through a mask or mouthpiece, using oxygen or air under pressure, or with an ultrasonic machine (often used by people who cannot use a metered-dose inhaler, such as infants and young children, and people with severe asthma). A mouthpiece is connected to the machine via plastic tubing to deliver the medication.
The medications used in nebulizers help your child by loosening the mucus in the lungs so it can be coughed out more easily, and by relaxing the airway muscles so that more air can move in and out of the lungs. Breathing the medication straight into the lungs works better and faster than taking the medication by mouth. Nebulizer treatments take about 15 to 20 minutes to give the medication.
Giving a treatment: The following steps are recommended when giving a treatment to your child. However, always consult your child's physician for specific instructions.
- Gather the supplies needed, including:
- Medication to be nebulized
- Nebulizer set (nebulizer cup, mouthpiece or mask, tubing to connect to the nebulizer machine)
- Find a quiet activity to do while your child sits up for the treatment (e.g., reading a book or playing a quiet game).
- Place the nebulizer on a flat surface (e.g., a table or the floor).
- Plug the unit into a wall outlet.
- Connect the air tubing to the nebulizer machine.
- Put the medication into the nebulizer cup and screw the cap on securely.
- Connect the other end of the air tubing to the nebulizer cup.
- Connect the mouthpiece or face mask to the nebulizer cup.
- Turn the machine on.
- Check to make sure a fine mist of medication is coming through the face mask or mouthpiece.
- Mouthpiece: Place the mouthpiece in the child's mouth with the lips sealed around the mouthpiece. Encourage your child to take slow, deep breaths in and out through their mouth. The mist should "disappear" with each breath.
- Face mask: Place the mask over your child's mouth and nose. The adjustable elastic band may be used to hold the mask in place. Encourage your child to take deep breaths in and out for the duration of the treatment.
- Encourage your child to continue slow, deep breaths until all the medication in the nebulizer cup is gone. You may need to tap the sides of the nebulizer cup to ensure all the medication is given.
- Turn the nebulizer off.
- If the child's treatment plan orders peak flow measurements, obtain these before the treatment starts and after the treatment is completed.
After each treatment:
- Disconnect the nebulizer cup from the tubing.
- Open the cup and wash all pieces in mild dish soap and water. (Do not wash or rinse the tubing.)
- Rinse all pieces.
- Air dry on a clean towel.
- Store the dried nebulizer cup and tubing in a plastic bag.
- Once a week, after washing, rinse the nebulizer cup in a vinegar/water solution, as directed by your physician.
Notes for parents:
- Stay with your child throughout the nebulizer treatment.
- If your child vomits or has a severe coughing spell during the treatment, stop the treatment, let the child rest for a few minutes, then resume the treatment.
- Check the filter on the nebulizer machine once a week. When it becomes discolored, replace it with a new filter.
- Always keep a spare nebulizer kit at home. When you are down to your last two kits, contact your medical equipment company to deliver more.
As with cancer of the female breast, the cause of cancer of the male breast has not been fully characterized, but both environmental influences and genetic (inherited) factors likely play a role in its development. The following risk factors for the development of male breast cancer have been identified. Exposure to ionizing radiation has been associated with an increased risk of developing male breast cancer. Men who have previously undergone radiation therapy to treat malignancies in the chest area (for example, Hodgkin's lymphoma) have an increased risk for the development of breast cancer. Hyperestrogenism (High Levels of Estrogen) Men normally produce small amounts of the female hormone estrogen, but certain conditions result in abnormally high levels of estrogen in men. The term gynecomastia refers to the condition in which the male breasts become abnormally enlarged in response to elevated levels of estrogen. High levels of estrogens also can increase the risk for development of male breast cancer. The majority of breast cancers in men are estrogen receptor-positive (meaning that they grow in response to stimulation with estrogen). Two conditions in which men have abnormally high levels of estrogen that are commonly associated with breast enlargement are Klinefelter's syndrome and cirrhosis of the liver. Obesity is also associated with elevated estrogen levels and breast enlargement in men. Klinefelter's syndrome is an inherited condition affecting about one in 1,000 men. A normal man has two sex chromosomes (X and Y). He inherited the female X chromosome from his mother and the male Y chromosome from his father. Men with Klinefelter's syndrome have inherited an extra female X chromosome, resulting in an abnormal sex chromosome makeup of XXY rather than the normal male XY. Affected Klinefelter's patients produce high levels of estrogen and develop enlarged breasts, sparse facial and body hair, small testes, and the inability to produce sperm. Some studies have shown an increase in the risk of developing breast cancer in men with this condition. Their risk for development of breast cancer is markedly increased, up to 50 times that of normal men. Cirrhosis (scarring) of the liver can result from chronic alcohol abuse, chronic viral hepatitis, or rare genetic conditions that result in accumulation of toxic substances within the liver. The liver produces important binding proteins that affect the transport and delivery of male and female hormones via the bloodstream. With cirrhosis, liver function is compromised, and the levels of male and female hormones in the bloodstream are altered. Men with cirrhosis of the liver have higher blood levels of estrogen and have an increased risk of developing breast cancer. Epidemiologic studies have shown that men who have several female relatives with breast cancer also have an increased risk for development of the disease. In particular, men who have inherited mutations in the breast cancer-associated BRCA-2 gene have a dramatically increased (about 80-fold) risk for developing breast cancer, with a lifetime risk of about 5%-10% for development of breast cancer. BRCA-2 is a gene on chromosome 13 that normally functions in suppression of cell growth. Mutations in this gene lead to an increased risk for development of breast, ovarian, and prostate cancers. About 15% of breast cancers in men are thought to be attributable to BRCA-2 mutation. 
The role of the BRCA-1 gene, which has been associated with inherited breast cancers in women, is not as clearly defined for male breast cancers.
Thus far, the political systems I've investigated in this series have been constitutional monarchies: systems that divided their powers of state and government between a figurehead monarch and a prime minister, respectively. In France, power is again divided between two primary political figures – but in this case, neither of the high offices is ceremonial.

The French government exists within a semi-presidential framework. Essentially, this means that the country divides its highest political authority between a popularly elected President and the Government, which is in turn headed by the prime minister. For Americans and those in the United Kingdom, this system might seem redundant; why would a nation need both a president and a prime minister? Wouldn't having two major leaders share power make governing more complicated, rather than less? In practice, no. France's model of government works within a system of checks and balances which ensures that its leaders and lawmakers remain accountable to the nation's government and don't overstep their authority in pursuit of their own agendas. Let's unpack this idea by taking a closer look at the branches of government.

Branches of Government
The executive branch consists of an elected president, who serves as head of state, and his appointed prime minister, who acts as head of government. Unlike heads of state in nominally monarchical systems, however, the president holds considerable political authority within the government and must be elected to the role. After his inauguration, the French president is responsible for appointing a prime minister, lower ministers, ministers-delegate, and secretaries. However, an elected president cannot simply choose his political allies to serve in these influential offices. Given that the National Assembly has the ability to force the resignation of the government, the President must select a government that reflects and will uphold the interests of the parliamentary majority. Thus, a president of a different party than the parliamentary majority will find his power somewhat handicapped; by that same rule, though, a president of the same party will have considerable sway in determining and carrying out the national agenda. The executive branch is responsible for ensuring that the nation's armed forces, civil service, and governmental agencies operate productively.

The Parliament of France serves as the nation's legislative branch and is responsible for passing legislation and setting the nation's budget. This body is divided into two separate houses: the National Assembly and the Senate. The National Assembly is considered the principal of the two; it consists of 577 elected deputies who each serve five-year terms. As stated earlier, the National Assembly has the ability to pass a motion of censure and force the government to resign; however, this hardly ever happens in practice. The Senate is the lesser house of Parliament. Composed of 346 senators elected via an electoral college system to serve nine-year terms, the Senate has considerably less influence than the National Assembly. While the Senate may debate legislation brought to the parliamentary floor, the National Assembly is given the final say on disputed matters.

The judicial branch has two parts: the judicial courts handle major criminal and civil matters, while the administrative courts handle appeals against the executive branch. Both of these divisions have their own independent court of appeal.
Moreover, while France is a unitary nation with national laws, its ruling government is forbidden from intruding into the decisions of smaller administrative departments. As a hybrid presidential-parliamentary system, France might seem somewhat confusing to those accustomed to one system or the other. However, its careful allocation of power serves a checks-and-balances system which ensures that no single branch of government holds too much power. As only the surface of the French governing system is covered in this post, I highly recommend that anyone interested in its history and details conduct further research on the subject – there's far more to learn!
Today is Independence Day in several countries: North Korea, South Korea, India, and Congo. That marks this date as one of major significance in what has come to be called post-colonialism, the era of liberation of colonial nations from their imperial overlords. The Second World War was the great watershed event. After the war, Britain reluctantly started divesting itself of its imperial holdings, and Japan was forced to do so. Japan gave up Korea on this date because this is the date Japan surrendered to the Allies (or yesterday, depending on your time zone). Today is called V-J (Victory over Japan) Day in Britain, similar to V-E Day earlier in the year when Germany surrendered. V-J Day was very important because British and Commonwealth forces were still fighting in the Pacific after Germany surrendered, but the celebrations were more muted in Britain because the nation was not under imminent threat from Japan in the way it had been from Germany.

The National Liberation Day of Korea is celebrated annually on August 15th in both North and South Korea (the only shared national holiday). It commemorates the day when U.S. and Soviet forces ended the decades-long Japanese occupation of Korea. In South Korea it is known as Gwangbokjeol (광복절; literally, "the day the light returned"), and in North Korea it is known as Chogukhaebangŭi nal (조국해방의 날; literally, "Liberation of the Fatherland Day"). After the Korean Peninsula was liberated by the Allies in 1945, independent Korean governments were created three years later, on August 15, 1948, when the pro-U.S. Syngman Rhee was elected first President of South Korea and the pro-Soviet Kim Il-sung was made first Leader of North Korea. In South Korea, many activities and events take place during the day, including an official ceremony, attended by the president, at the Independence Hall of Korea in Cheonan or at the Sejong Center for the Performing Arts. All buildings and homes are encouraged to display the South Korean national flag, the Taegukgi. Not only are most public museums and places open free of charge to the descendants of independence activists on the holiday, but they can also travel on both public transport and intercity trains for free. The official "Gwangbokjeol song" (광복절 노래) is sung at official ceremonies. The song's lyrics were written by Jeong Inbo (정인보) and the melody by Yoon Yongha (윤용하). The lyrics speak of wanting "to touch the earth again," of how "the sea dances," of how "this day is the remaining trace of 40 years of passionate blood solidified," and of the need to "guard this forever and ever." The government traditionally issues special pardons on Gwangbokjeol.

Independence Day is celebrated annually on 15th August as a national holiday in India, commemorating the nation's independence from the United Kingdom on 15th August 1947, when the UK Parliament passed the Indian Independence Act 1947, transferring legislative sovereignty to the Indian Constituent Assembly. India still retained King George VI as head of state until its transition to a full republican constitution. India attained independence following the Independence Movement, noted for largely non-violent resistance and civil disobedience led by the Indian National Congress (INC). Independence coincided with the partition of India, in which British India was divided along religious lines into the Dominions of India and Pakistan. The partition was accompanied by violent riots and mass casualties, and the displacement of nearly 15 million people due to religious violence.
Millions of Muslim, Sikh, and Hindu refugees trekked across the newly drawn borders in the months surrounding independence. In Punjab, where the borders divided the Sikh regions in half, massive bloodshed followed; in Bengal and Bihar, where Mahatma Gandhi's presence assuaged communal tempers, the violence was mitigated. In all, between 250,000 and 1,000,000 people on both sides of the new borders died in the violence. While the entire nation was celebrating Independence Day, Gandhi stayed in Calcutta in an attempt to stem the carnage. On 14th August 1947, the Independence Day of Pakistan, the new Dominion of Pakistan came into being; Muhammad Ali Jinnah was sworn in as its first Governor General in Karachi. On 15th August 1947, the first Prime Minister of India, Jawaharlal Nehru, raised the Indian national flag above the Lahori Gate of the Red Fort in Delhi.

Independence Day is one of the three national holidays in India and is observed in all Indian states and union territories, as well as by the Indian diaspora. On the eve of Independence Day, the President of India delivers the "Address to the Nation." On 15th August, the Prime Minister hoists the Indian flag on the ramparts of the historic Red Fort in Delhi. A 21-gun salute is fired in honor of the occasion. In his speech, the Prime Minister highlights the past year's achievements, raises important issues, and calls for further development. He also pays tribute to the leaders of the Indian independence movement. The Indian national anthem, "Jana Gana Mana", is sung. The speech is followed by a march past by divisions of the Indian Armed Forces and paramilitary forces. Parades and pageants showcase scenes from the independence struggle and India's diverse cultural traditions. Similar events take place in state capitals, where the Chief Ministers of individual states unfurl the national flag, followed by parades and pageants. Flag hoisting ceremonies and cultural programs take place in governmental and non-governmental institutions throughout the country. Schools and colleges conduct flag hoisting ceremonies and cultural events. Major government buildings are often adorned with strings of lights. In Delhi and some other cities, kite flying adds to the occasion. National flags of different sizes are used abundantly to symbolize allegiance to the country. Citizens adorn their clothing, wristbands, cars, and household accessories with replicas of the tricolor. Over time, the celebration has changed emphasis from nationalism to a broader celebration of all things Indian.

Today is also Independence Day in the Republic of Congo, marking independence from France on 15th August 1960. The Republic of Congo is also informally called Congo or Congo-Brazzaville. It is located on both sides of the equator, and its neighbors are Gabon, Cameroon, the Central African Republic, the Democratic Republic of Congo (from which it is separated, in part, by the Congo River and the Ubangi), and Cabinda (Angola). The Republic of Congo is often called "Congo-Brazzaville" to distinguish it from the other Congo, officially named the "Democratic Republic of Congo" and informally called "Congo-Kinshasa".

French involvement in Congo began in the 1870s with Pierre Savorgnan de Brazza. He reached the Congo in 1879, going up the course of the Ogoué to the mouth of the present island of Mbamou. In 1880, he signed a treaty of sovereignty with Makoko, the king of the Tékés, in Mbé (100 km north of Brazzaville), and founded the post of Mfoa, named after the river that serves the city.
Later it was renamed Brazzaville. At the same time, Lieutenant Cordier explored the region of Kouilou and Niari and signed a treaty with King Maloango that recognized the sovereignty of France over the Kingdom of Loango; he, in turn, founded Pointe-Noire in 1883. In 1885, Congo became one of the four states of French Equatorial Africa, with Brazzaville as the capital. The colony of French Congo was created in 1891, with the current Gabonese territory part of it until 1904.

From 1899, the territory was ceded to concession companies, which paid tax to the French administration. These companies mainly exploited rubber on thirty-year contracts covering huge tracts ranging between 200,000 and 14 million hectares. The companies paid 15% of their profits as taxes to the French government. Apart from rubber, the companies exploited sugar, ivory, and precious woods. The main defender of this economic system was Eugène Étienne, then Under-Secretary of State for Colonies. Another Under-Secretary of State for the Colonies, Théophile Delcassé, secretly granted, without official publication of the contracts, a concession of 11 million hectares (that is, one-fifth the area of France) located in Haut-Ogooué. Then, from March to July 1899, the Colonial Minister Guillain granted, by decree, 40 more concessions. Many concession companies were in the hands of numerous shareholders, including Leopold II of Belgium, who bought shares under a false name. This fact, discovered after the death of the king, shocked the French authorities of the time, who had not realized that their colony was being exploited by a foreign country. It's a general rule: mobsters don't like other mobsters horning in on their turf.

In 1926, André Matsoua founded a "friendly" (mutual-aid) group to help tirailleurs (African veterans who had served alongside the French army in the First World War) in their fight for independence from France. Because of the harsh conditions of exploitation of the colony, nationalism spread rapidly in the Congo. The friendly group soon developed into a protest movement. The colonial administration was concerned and incarcerated Matsoua, who died in prison in 1942 under suspicious circumstances. The movement then turned into a church that recruited members from the indigenous people.

Congolese nationalism took firmer shape after the Second World War. On October 21st, 1945, the Congolese elected their first deputy, Jean-Félix Tchicaya, to the Constituent Assembly in Paris. In 1946, he founded the Congolese Progressive Party (PPC), the Congolese section of the African Democratic Rally (RDA). Tchicaya was opposed by Jacques Opangault, but both were challenged by Father Fulbert Youlou, founder of the Democratic Union for the Defense of African Interests (UDDIA). Youlou won the municipal elections of 1956. In 1958, a referendum on the French Community got a 99% "yes" vote in the Middle Congo, and the Congo became an autonomous republic, with Youlou as prime minister. In 1959, unrest erupted in Brazzaville and the French army intervened. Then, on August 15th, 1960, Congo gained independence from France as the Republic of Congo, with Youlou elected as the first president.
Because I have been a bit light on African recipes, I will give you Congolese saka-saka (boiled cassava leaves), and to express my independence from the tyranny of conventional recipes, I'll talk you through it. Start with enough cassava leaves to fill a big pot. Remove the stems and cut or tear the leaves into pieces. Traditionally the leaves would be mashed and crushed in a large mortar; you can improvise with a rolling pin or a wooden mallet, but do not use a food processor. Place the greens in a large pot, top with water, and bring to a rolling boil. Cook for at least an hour, preferably two. Meanwhile, prepare the other ingredients. Peel and chop an onion and a clove of garlic. Deseed and chop a green bell pepper. Peel an eggplant, remove the seeds, dice it, and cover it with salt in a ceramic bowl. You will also need a piece of dried or smoked fish and a few tablespoons of oil. Palm oil is traditional, but if you cannot find palm oil from sustainable sources, use vegetable oil. Add all the remaining ingredients to the greens and bring to a boil, then reduce the heat and simmer for several hours. Do not stir. Simmer until the water is mostly gone and the greens are cooked to a pulp. Serve with rice, and a meat dish if you wish.
Given three numbers A, B, and C, find the roots of the quadratic equation Ax^2 + Bx + C = 0 (A is not equal to 0).
Input: The first line of the input contains an integer T, which denotes the number of test cases. Then T test cases follow. Each test case consists of a single line containing three space-separated integers A, B, and C.
Output: For each test case, print the two roots of the quadratic equation (space separated) in ascending order. For complex roots, print COMPLEX.
Constraints: 1 <= T <= 100. Note: A is not equal to 0.
Example test cases (A B C):
2 3 4
3 5 1
1 -4 4
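Here is a minimal reference solution for the problem as stated. It is a sketch under assumptions the excerpt does not spell out: in particular, the exact output format for real roots (rounding, integer vs. float) is not specified, so this version simply prints the computed values.

```python
import math
import sys

def roots_ascending(a, b, c):
    """Return the two real roots of ax^2 + bx + c = 0 in ascending order, or None if complex."""
    d = b * b - 4 * a * c          # discriminant decides real vs. complex roots
    if d < 0:
        return None
    sq = math.sqrt(d)
    r1 = (-b - sq) / (2 * a)
    r2 = (-b + sq) / (2 * a)
    return sorted((r1, r2))

def main():
    data = sys.stdin.read().split()
    t = int(data[0])
    idx = 1
    for _ in range(t):
        a, b, c = int(data[idx]), int(data[idx + 1]), int(data[idx + 2])
        idx += 3
        result = roots_ascending(a, b, c)
        if result is None:
            print("COMPLEX")
        else:
            print(result[0], result[1])

if __name__ == "__main__":
    main()
```

For the example values above, the first case (2 3 4) has a negative discriminant and prints COMPLEX, while the other two cases have real roots.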
Since the first exoplanet was confirmed in 1992, over 4,000 planets have been discovered around other stars.

The Exoplanet Revolution
Observing planets orbiting other stars was until quite recently seen by astronomers as a fruitless endeavour. Planets are billions of times fainter than their parent star and, for even the closest stars, orbit within 1/10,000th of a degree of it on the sky, making direct observation of exoplanets extremely challenging. Fortunately, astronomers have developed a suite of indirect methods to detect exoplanets via the way they influence their host star.

An early successful method for detecting exoplanets is the radial velocity method. As a planet orbits a star, its gravitational pull causes the star to wobble back and forth. This motion causes absorption signatures in the star's light to become redder, then bluer, due to the Doppler effect. By measuring how far absorption lines shift, astronomers can use the laws of orbital mechanics to work out a lower limit on the mass of an exoplanet. This powerful technique led to the discovery of the first hot Jupiter in 1995 and super-Earths from 2005. However, while the radial velocity method lets us detect and weigh a planet, it doesn't give information on the size of the planet.

Starting in 2000, astronomers demonstrated an independent method for detecting and studying exoplanets: the transit method. This technique involves watching the light from a star 'dip' as a planet passes in front of it. By measuring the depth of the observed dip, astronomers can figure out the size of the planet (bigger planets block more light than smaller planets). Since the launch of NASA's Kepler space telescope in 2009, thousands of planets have been discovered via this method. As of 2018, the successor mission to Kepler, the Transiting Exoplanet Survey Satellite (TESS), has begun a full-sky search expected to find tens of thousands of new exoplanets over the next few years.

With the mass of a planet measured from its radial velocity, and the size measured from its transit, the planet's density (= mass/volume) can be calculated. Comparing the density to that of gases, liquids, rocks, and metals, astronomers can then infer basic facts about what materials an exoplanet is made of. Density measurements are now revealing rocky planets in the habitable zone of nearby stars, such as the seven planets in the TRAPPIST-1 system 40 light years away. However, density alone cannot tell us what these planets are really like (e.g. Earth and Venus have similar densities, but the latter is quite a hostile place). To glimpse the true nature of exoplanets, we need to peer into their atmospheres.

One of the most powerful tools in the astronomer's arsenal is spectroscopy - the splitting of light into its individual colours (wavelengths). On Earth, we all notice that the sky is blue even though sunlight is white; this happens because molecules of air interact with higher-energy (blue) light more strongly than with lower-energy (red) light, scattering it in different directions. Similarly, different colours of light are treated differently by the gases making up exoplanet atmospheres, with some colours absorbed while others pass through. Astronomers have developed many clever ways to use spectroscopy to study what exoplanet atmospheres are made of. One of the most successful (and the focus of much of my research) is transmission spectroscopy.
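Before turning to transmission spectroscopy, here is a small numerical sketch of the transit-depth and density relations just described. The star and planet values below are illustrative assumptions, not numbers from the text: the key relations are depth ≈ (R_planet / R_star)^2 and density = mass / volume.

```python
import math

# Illustrative inputs (hypothetical, not from the text): a Sun-like star, a
# transit that blocks 1% of its light, and a radial-velocity mass of ~1 Jupiter.
R_star_km = 696_000.0      # stellar radius in km (roughly the Sun's)
transit_depth = 0.01       # fraction of starlight blocked during transit
M_planet_kg = 1.9e27       # planet mass in kg (roughly Jupiter's)

# Transit method: depth ~ (R_planet / R_star)^2
R_planet_km = R_star_km * math.sqrt(transit_depth)

# Bulk density = mass / volume, the quantity compared against gas, rock, or metal.
volume_m3 = (4.0 / 3.0) * math.pi * (R_planet_km * 1_000.0) ** 3
density_kg_m3 = M_planet_kg / volume_m3

print(f"Inferred planet radius: {R_planet_km:,.0f} km")
print(f"Bulk density: {density_kg_m3:,.0f} kg/m^3")
```

With these made-up inputs the inferred density comes out close to that of a gas giant; a rocky world of the same mass would be far smaller and far denser.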
Transmission spectroscopy involves watching an exoplanet transit in front of its star at many different wavelengths to probe how strongly starlight is absorbed at each wavelength. When an atom or molecule strongly absorbs a given wavelength, the planet will appear to be slightly larger (as the atmosphere is opaque), while the planet will appear smaller at wavelengths where the atmosphere is transparent. By measuring the amount of starlight blocked by the planet as a function of wavelength, astronomers can then make a plot of the size of an exoplanet as a function of wavelength - this is called a transmission spectrum. Wherever a bump appears in the spectrum, it tells us that something in the atmosphere is absorbing or scattering starlight, and hence stopping it from reaching us.

Transmission spectra measured using visible light have already revealed atoms, such as sodium (Na) and potassium (K), while space-based infrared observations have revealed molecules like water (H2O) in many exoplanet atmospheres. Future telescopes, such as NASA's James Webb Space Telescope (JWST) and ESA's Atmospheric Remote-sensing Infrared Exoplanet Large-survey (ARIEL), will use this technique to peer into atmospheres in unprecedented detail, revealing new molecules and insights into the composition of exoplanetary atmospheres. For a detailed overview of the current state of the art in exoplanet atmosphere science, including other techniques used to study these exotic worlds, I have recorded a video overview.

Using modern analysis tools, we can go beyond detecting gases in exoplanet atmospheres to measure how much of each chemical resides in the atmosphere. Extracting this detailed information from an observed spectrum is called atmospheric retrieval. By measuring the quantity of each gas making up an atmosphere, we gain important insights into how these planets formed, the conditions in their atmospheres, and even their potential habitability. Atmospheric retrieval is a tricky endeavour, though, as one needs to consider millions of potential combinations of gases, atmospheric temperatures, clouds, and other factors to figure out the composition of even one exoplanet atmosphere. You can find out more in the accompanying video on atmospheric retrieval.

I am the lead developer of POSEIDON - an efficient atmospheric retrieval code designed to extract the composition of exoplanet atmospheres from ground- and space-based transmission spectra. This code has already been applied to multiple hot Jupiter exoplanets, revealing evidence of new chemistry and atmospheric phenomena. A summary of my research findings to date can be found on the next page.

Image and video credits: ESO / NASA Goddard / Ryan MacDonald
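To make the retrieval idea concrete, here is a toy sketch of the underlying logic: propose candidate atmospheres, predict the transit depth each would produce at a few wavelengths, and keep whichever matches the observed transmission spectrum best. Every number below is invented for illustration, and a real retrieval code such as POSEIDON explores vastly larger parameter spaces with proper statistical sampling rather than a two-model comparison.

```python
# Observed transit depths (%) at three wavelengths, with a common uncertainty.
# All values are made up purely to illustrate the fitting logic.
observed = {"0.6 um": 1.010, "0.9 um": 1.002, "1.4 um": 1.025}
sigma = 0.004

# Two candidate model atmospheres and the depths they would predict.
models = {
    "water-rich, clear": {"0.6 um": 1.008, "0.9 um": 1.001, "1.4 um": 1.024},
    "heavily clouded (flat)": {"0.6 um": 1.010, "0.9 um": 1.010, "1.4 um": 1.010},
}

def chi_squared(predicted):
    """Goodness of fit: smaller means the model matches the data better."""
    return sum(((observed[w] - predicted[w]) / sigma) ** 2 for w in observed)

for name, predicted in models.items():
    print(f"{name}: chi^2 = {chi_squared(predicted):.1f}")

best = min(models, key=lambda name: chi_squared(models[name]))
print("Best-fitting toy model:", best)
```

In this invented example, the deeper transit at 1.4 um (standing in for a water absorption band) is what separates the water-rich model from the flat, cloud-dominated one.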
Communicating with children can be a difficult task on its own, but it becomes even tougher with a nonverbal autistic child. However, it is possible not only to find ways to communicate with them effectively, but also to help them develop their verbal skills in a way that is comfortable to them. With that being said, it's important to remember not all autistic children are the same. While one strategy may work perfectly for one situation, that's not to say it will be right for another. In hopes of improving communication skills, we have put together this list of ways you can help a nonverbal autistic child communicate. If you are in need of childcare that doesn't leave behind children with special needs, call Darlene's Wee Care 4 Kids.

-Play and Social Interaction
Much of any child's education comes as a product of playing, including learning language and communication skills. Find games that your child enjoys playing to engage them. Include activities that encourage social interaction, such as singing and reciting nursery rhymes. While you're interacting with your child, keep yourself in front of them and close to eye level, as that will make it easier for them to see and hear you.

-Mimic Your Child
Pay attention to the sounds and behaviors your child makes while interacting. You can help them be more vocal by imitating the sounds they make, while encouraging them to copy you and take turns going back and forth with sounds. Only imitate their behavior when it is a positive action. For instance, if they roll a toy car, you roll yours as well. But if they throw their car, don't follow suit. This is an opportunity to show them good behavior first hand.

Small things like gestures and eye contact can help an autistic child form the foundation for future language and communication skills. Exaggerate your gestures to communicate your feelings to them. Use your voice and body while communicating, such as extending your hand and saying the word "look". Observe your child's gestures and respond to them in kind. When they see you communicating like this, they will be more likely to use this technique to communicate with you and others.

-Give Them Space to Talk
When talking to your child, it is easy to feel like you need to help them along when they don't respond right away. This can make them more shy about answering, since they feel like you'll answer for them. When you ask them a question, give them ample time to answer while watching expectantly. Keep an eye out for any sounds they make or nonverbal gestures, as this may be the only way they can respond at that time. Then, respond to that gesture promptly to help them establish the flow of communication.

That wraps up part one of our look at some ways you can help a nonverbal autistic child communicate. We'll be diving into part two in a couple of weeks, but we hope these strategies help you with your little one. If you're looking for childcare services in Upper Darby that will give your special needs child the attention and care they deserve, call Darlene's Wee Care 4 Kids. We're here to help!
Savannas are ecosystems with a continuous grass layer and scattered trees or shrubs. These lands occupy nearly a third of the earth's land surface and are an important resource, not only in world economies but also as repositories of biodiversity. Because savannas are generally thought of as tropical ecosystems, most reviews of the literature have tended to disregard savannas found in temperate zones. Yet these ecosystems are both extensive and diverse in North America, ranging from longleaf pine habitats along the Atlantic coastal plain to xeric piñon-juniper communities of the Great Basin - ecosystems seemingly disparate, yet similar enough to merit study as savannas. This book provides an overview of the patterns and processes shared by these ecosystems and offers substantive ideas regarding future management and research efforts. It describes the composition, geographic distribution, climate, soils, and uses of savannas throughout North America, summarizing and integrating a wide array of literature. While discussing these ecological patterns and processes, McPherson develops a framework for implementing management practices and safeguarding the future of these important wildland ecosystems. Ecology and Management of North American Savannas takes a major step toward establishing the science of savanna ecology for North America. It encourages constructive debate and relevant research on these important systems and will also serve as a useful resource in biogeography, plant ecology, and rangeland management.
Anna is looking for the gym in her new apartment building. She meets Pete and he gives her directions. Anna finds many different places in the apartment building. Finally, she finds the gym. Watch the video and practice the new words and learn about using prepositions. You can also download the worksheet and practice with a friend.
In this video, you learn about how Americans greet each other in informal situations. You will also learn how to ask clarification questions by beginning your sentence with a statement, then making your voice go up at the end of the sentence to form a question.
What are some of the rooms in your house? Write to us in the Comments section. Tell us what you do in the rooms. You can also download the worksheet. Practice writing the names of rooms in an apartment building.
Learning Strategies are the thoughts and actions that help make learning easier or more effective. The learning strategy for this lesson is Ask Questions to Clarify. In the video you see Anna ask Pete about the gym. She uses a statement and a question word together to clarify Pete's directions to the gym. Pete says, "The gym is across from the lounge." Later, Anna asks him, "The gym is across from … what?"
See how well you understand the lesson by taking this quiz. Each question has a video. Play the video and choose the correct answer.
across from – prep. on the opposite side from (someone or something)
behind – prep. in or to a place at the back of or to the rear of (someone or something)
elevator – n. a machine used for carrying people and things to different levels in a building
every – adj. used to describe how often some repeated activity or event happens or is done
gym – n. a room or building that has equipment for sports activities or exercise
lobby – n. a large open area inside and near the entrance of a public building (such as a hotel or theater)
lounge – n. a room with comfortable furniture for relaxing
mailroom – n. a room in which mail is processed and sorted
next to – prep. at the side of (someone or something)
parking garage – n. a building in which people usually pay to park their cars, trucks, etc.
rooftop – n. the cover or top of a building or vehicle
work out – phrasal verb. to perform athletic exercises in order to improve your health or physical fitness
Download the VOA Learning English Word Book for a dictionary of the words we use on this website.
Each Let's Learn English lesson has an Activity Sheet for extra practice on your own or in the classroom. In this lesson, you can use it to talk about the location of rooms in an apartment building.
Grammar focus: prepositions: next to, behind, across from
Topics: Informal greetings; Asking questions and clarifying information about location; Naming places; Rooms and services in an apartment
Learning Strategy: Ask Questions to Clarify
Speaking & Pronunciation focus: using prepositions, asking for clarifying information; informal greetings